About Lawrence Rabiner
Lawrence Rabiner was born in 1943 in Brooklyn, New York. He received the B.S. and M.S. degrees in electrical engineering simultaneously in 1964, and the Ph.D. in 1967, all from MIT. He was a "co-op" student at Bell Labs at Whippany and Murray Hill, N.J., between 1962 and 1964, where he worked on digital circuitry, military communications, and the study of binaural hearing. He subsequently became a regular staff member of the Laboratories. His Ph.D. thesis and some of his early work at Bell Laboratories were in the field of speech synthesis, and since 1967 he has worked on digital filter design, spectrum analysis, implementation of digital systems, random number generators, and other aspects of signal processing. He is currently Speech and Image Processing Services Research Vice President at AT&T Laboratories. He has co-authored four major books in the signal processing field: Theory and Application of Digital Signal Processing (1975), Digital Processing of Speech Signals (1978), Multirate Digital Signal Processing (1983), and Fundamentals of Speech Recognition (1993). He has written or co-authored over 300 articles, including many on speech recognition and speech synthesis, and has been the recipient of 25 patents. He received the IEEE Group on Audio and Electroacoustics' Paper Award (1971) and Achievement Award (1978), the Emanuel R. Piore Award (1980), the ASSP Society Award (1980), the IEEE Centennial Medal, and the SPS Magazine Award (1994). He is a Fellow of the IEEE (1976) [fellow award for "leadership and contributions to VLSI technologies"] and of the Acoustical Society of America, served as editor of the ASSP Transactions, and is a former member of the Proceedings of the IEEE editorial board. He has also been active in the Signal Processing Society and its predecessors, serving as its vice president (1973) and president (1974-75).
This interview is primarily Rabiner’s historical perspective on the growth of DSP as a field and the subsequent industrial applications of research. Rabiner believes that research rarely proceeded independent of applications, and he credits the excellent peer relationships fostered through the Acoustical Society and the Arden House workshops with pushing research into the advances so commonly found today. The bulk of the interview features Rabiner’s historical perspective on the growth of DSP as both a discipline and an influence upon the technologies of today, although the latter part of the interview consists of his opinions on where the field has gone in recent years. Rabiner provides excellent coverage of the earliest years of DSP research and of the influential researchers who have molded the field.
About the Interview
LAWRENCE RABINER: An Interview Conducted by Andrew Goldstein, Center for the History of Electrical Engineering, 13 November 1996
Interview # 319 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc.
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Lawrence Rabiner, an oral history conducted in 1996 by Andrew Goldstein, IEEE History Center, New Brunswick, NJ, USA.
Interview: Lawrence Rabiner
Interviewer: Andrew Goldstein
Place: Murray Hill, New Jersey
Date: 13 November 1996
Education; development of digital signal processing field
Let's start with your own career and work in signal processing and move into your perspectives on other developments in the field. Tell me about your background.
I went to MIT as an undergraduate and a graduate student, from 1960 through 1967. I got a B.S. and an M.S. in Electrical Engineering and a Ph.D. in Electrical Engineering. As an undergraduate I was part of a cooperative program in Electrical Engineering, and I began work at Bell Laboratories in 1962. My first association with signal processing itself came in 1963, when I was looking for an assignment for a Master’s thesis as part of that program. I met Jim Kaiser, who described the kind of work he was doing in designing digital filters. I did not work in that field for my Master’s thesis, but I got interested in it. I finished my Master’s and went back to MIT to work on a Ph.D. I met Ben Gold, who gave a lecture at MIT in 1964 on this nascent field of signal processing and what it was all about. I got really excited, and I began work on a Ph.D. on speech synthesis.
It was clear to me and to most people that in application fields like speech processing, radar processing, or sonar, the techniques based on analog signal processing were okay, but hardware-oriented. They didn’t take advantage of the computers starting to become pervasive in industry. People were looking around and asking, “How can I take advantage of these computers that are now available to virtually everybody, the mainframes and the mini-computers?” Ben Gold and Charlie Rader at Lincoln Labs and Jim Kaiser and Roger Golden at Bell Labs started looking at ways of digitizing analog systems so you could take advantage of the computer, rather than building huge amounts of hardware.
Also, people were looking at this concept of digital signal processing as an entity by itself, not dependent on converting an analog signal to a digital form by sampling. They asked, “If you had a digital sample, what would you be able to do if you never had to go back to the analog world once you had a bunch of numbers representing a signal?” I shared an office with Ben Gold in 1964 and we talked. I began work on a Ph.D. in speech synthesis. A key question was how to use digital technology to build a speech synthesizer, to build it so it actually modeled the physical phenomenon that I was working on.
That phenomenon in this case was modeling the way the human vocal tract produces speech sounds so that if you actually had the right control you got the right output, and not an approximation to the analog system. I got into the field that way. I worked with Ben Gold on two or three ideas and we published them very early as offshoots to my Ph.D. work. I fully entered the field when I graduated in ‘67 with my Ph.D. and I came back to Bell Labs with a permanent job. I worked in a laboratory under Jim Flanagan. We were looking at a whole range of speech processing applications.
As I said, the key issue in signal processing was doing it right, doing it properly. There was enough uncertainty about how to do it that once you decided what you wanted to do, you had to make sure that you built the system right. As a result, from 1967 on, I got involved with digital filters, digital filter design, spectrum analysis, implementation of digital systems, random number generators, all of the aspects of signal processing.
That’s interesting. It sounds like the motivation to do digital signal processing came from the availability of these computers. You don’t describe a pressing problem that was computationally intense. Rather, the realization that these computers are available leads people to ask, “What are we going to do with them?”
No, it was not quite that way. There were real problems. Speech processing and radar were the two key fields, because the signal processing computations were sort of doable with machines. That drove you. In the past people had worked in these systems by building analog systems. They actually built components. They built discrete boards. They built systems. They filled rooms with racks of equipment. If you visited any major lab in the early 1960’s, you might see an entire rack of components put together as a speech synthesizer or a vocoder. If you had your idea for a new vocoder, it might be two or three years between the idea and the realization. You were always cutting corners and not doing things exactly. You may have had an idea that was a highly nonlinear system. At that time no one knew exactly how to build these highly non-linear systems, so you’d use all sorts of tricks.
The real issue was, now that the computer was becoming available, wasn’t there a better way: to simulate the system rather than build it? If you built it, it was a real system; if you simulated it, you got some idea of how it worked. Simulation had one negative feature: when it was all done, you could only try it on a few things. It was slow. You couldn’t speak into the system. You would record a file, wait a week or two, or an hour or two, and the processed signal would come out.
The advantage was that the time to get that first realization of how the system worked decreased from years to weeks, sometimes as little as hours. It was a reduction of an order of magnitude, sometimes two orders of magnitude. As a result you could try many more ideas. Some that were really bad could be blatantly thrown away, but some ideas turned out to be really good.
Bell Labs research
Were you trying to develop general approaches to digital signal processing, or were you working on specific projects?
It was a mixture of both. At the start you were working on a specific project. The questions that I was working on were “How could I synthesize speech from text? How could we build new forms of a phase vocoder or a formant vocoder?” This was a very focused project. However, once you realized that each project would take weeks or months instead of years, you could then say “How do I build a more general speech processing system, so I can embed the next vocoder or synthesizer in it?” This kind of approach ultimately led to all the tools that we take for granted today for spectrum analysis or filtering or FFTs. They have become tools in a toolbox, so that we can think generically about these things. In the early days, no one even knew what the tools were or what the algorithms were or anything like that.
How did that work administratively at Bell Labs? If you’re working on a specific project and you make some headway and you’re ready to generalize, what becomes of the project? How does that continue in development?
In the 60’s and the 70’s at Bell Labs, the projects were not like development projects. We worked on projects looking twenty years into the future. So the attitude was, if you were building a synthesizer you realized the synthesizer wasn’t the problem. The problem was building the digital filtering necessary to realize the resonant filters needed in the synthesizer. You could afford to say “that’s my dominant problem” and spend the next two years on studying how to build good digital filters that model specific effects. When it's all done, you have a new component, and you go back to the problem. Since the projects were not development projects where there was a real schedule and you had to do it the best you could, we could afford to go off on a lot of sidetracks. In fact, almost all the work in digital signal processing that I’ve ever done was as a sidetrack.
For example, we worked on something called the “chirp-z transform”, which was an interesting spectrum analysis algorithm. It’s a variation on how to do a spectrum analysis on a contour that’s not the unit circle. We needed it to build a formant vocoder. The formants are the resonances of the human vocal tract. If you use the unit circle, a resonance would often get lost in noise or measurement error if its bandwidth is too wide. If you could get a little bit off the unit circle and get a little closer to where the resonant pole is located--in pole-zero modeling--you could actually sharpen the response. You effectively took the bandwidth, which might be very big, and compressed it. The narrower the bandwidth, the sharper the resonance.
The algorithm came about because we were trying to figure out how to do that. We came up with a really ingenious theoretical procedure, which turned into the “chirp-z transform.” It has been used in radar, in sonar, and in molecular spectroscopy. It has all sorts of applications, but it started because we had a specific operation. We were working on a formant vocoder, and every once in a while the bandwidth was so broad that it was hard to tell whether it was a resonance or just noise.
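The idea Rabiner describes, evaluating a spectrum on a contour other than the unit circle, can be sketched in a few lines of Python. This is a minimal illustration, not the published algorithm: the function name `chirp_z` and the direct O(N·M) evaluation are ours; the actual chirp-z transform reaches O((N+M)·log(N+M)) by recasting the sum as a convolution with a "chirp" sequence, but the spiral contour is the same.

```python
import numpy as np

def chirp_z(x, M, A, W):
    """Evaluate the z-transform of x at M points z_k = A * W**(-k).

    Choosing |A| slightly less than 1 spirals the contour inward,
    closer to the resonant poles, which sharpens broad formant peaks
    the way the interview describes.
    """
    n = np.arange(len(x))
    k = np.arange(M)
    z = A * W ** (-k)                 # points along the spiral contour
    # X(z_k) = sum_n x[n] * z_k**(-n)  (direct, O(N*M), for clarity)
    return (x[None, :] * z[:, None] ** (-n[None, :])).sum(axis=1)

# Sanity check: with A = 1 and W = exp(-2*pi*j/N), the contour is the
# unit circle and the chirp-z transform reduces to the ordinary DFT.
N = 16
x = np.random.default_rng(0).standard_normal(N)
X = chirp_z(x, N, A=1.0, W=np.exp(-2j * np.pi / N))
assert np.allclose(X, np.fft.fft(x))
```

The unit-circle check is the standard way to validate a chirp-z implementation before moving the contour off the circle.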
So the answer is, everything played into itself. Nobody started out saying “I want to figure out how do I build the best digital filters in the world.” Nobody said “I want to figure out how to do a spectrum analysis algorithm or build an equivalent to the Fourier Transform.” It started off “I need a spectrum analysis algorithm because I’m trying to get a better spectrum”, or “I need a way of building better digital filters, because I’m building a vocoder and I need to segment speech into bands.” Because you needed it, and because the time-scale for our research was much broader than it is today, you could afford to go off and say “The next two years of my life I’m going to study that problem. I really believe I have the insight and the desire to do it.”
That was the golden age of signal processing, when people would just decide what the key problems were, and work on them. A lot of it got fed to universities, which are naturally fertile grounds for taking a problem and making it real. For example, Ben Gold was at Lincoln Lab, an industrial lab affiliated with MIT. He took a year’s sabbatical at MIT in 1964 or 65. He and I shared an office and we kind of cross-pollinated. He was the mentor and I was the student, but he had all these great ideas and we both needed to use them.
So Gold was your advisor?
No, he just shared an office with me.
Education in signal processing
Let me step back to when you were a student. Was the notion of signal processing a concept that existed in people’s minds? What did it signify if it was?
I think it existed from 1964 on, which was my first association with people that you would consider signal processing types. It was inherently there in the earliest views of the world: we always thought, you build it or you simulate it. The building it was the analog hardware, the big racks of equipment. The simulation was the computer. We started to realize that simulation was really the manipulation of signals. That’s what a signal processor is: manipulation of signals, linear or non-linear. For the most part it’s linear, because the theory explains that well. We can carry linear signals through from end to end and we know all sorts of nice properties about them. Non-linear signals are kind of a scary thing. The concept of signal processing was always in the background. It came to the foreground so early in the game that if you think back, you can never quite say when you started thinking of it as signal processing as opposed to simulation of systems.
Let me be sure I understand. You’re saying now that this digital experimentation was considered as signal processing. I was asking whether people ever spoke of signal processing in the analog domain?
Signal processing became a digital concept.
So people don’t talk about it until it’s realized digitally?
Yes. In fact the term signal processing is always synonymous with DSP--"digital signal processing." You never heard the term ASP--"analog signal processing." It grew out of the term simulation. Simulation itself is a digital embodiment of an analog process, that’s the whole concept. After a while you realized you could have a digital simulation of a digital process. What are you really doing then? You’re just taking a signal and you’re just processing it.
So prior to that, what we would now call signal processing techniques were embedded in the specific technologies, the specific applications in which they were realized, and never pulled together as a general or universal field of study.
That’s my perception of it, for the most part. I took a course at MIT on filter design. They called it Filter Design. They didn’t call it Analog Filter Design. It was RLCs and passive networks. No one thought of calling that analog signal processing. Ernie Guillemin wrote Synthesis of Passive Networks, a classic book on network theory. People wrote books on filter design, but no one called them analog filter design. No one thought in terms of signals because they were always embodied. There was always that resistor, capacitor, inductor kind of concept, active elements to build active filters, but no one ever thought of them as being integrated into a field. Not even spectrum analysis or radar. I can’t even think of a book that tied them all together before the digital era. There were books on each of these things, and there are books on digital filtering alone today. But in general, most books in the field are on signal processing. That’s the strength of it.
Once things become digital, once simulation becomes an issue, is that reason enough that people begin to organize their thoughts into a field called signal processing or is it just a coincidence?
No, I don’t think it’s a coincidence. They had to do that. When I was an undergraduate we took a course called Circuits and Systems. Systems was really digital; we learned Fourier analysis and we learned all of the same concepts. The idea of tying it all together with Circuits and Systems was the key. Knowing about systems did you no good unless you knew something about the circuits that could actually build them. Otherwise it became a math concept. In fact, I learned Fourier analysis at MIT in a math course, and then I learned it again in Bill Siebert’s Circuits and Systems course. The first time it was a math entity, but as soon as you brought circuits in, it became physical.
When you went digital, the analog of the circuit is simulation. As a result, it became signal processing all up and down, because simulation is just programming in a sense. We moved beyond talking about Programming and Systems. It was a natural outgrowth of looking at digital systems and simulations to think of signal processing. That’s the whole concept of the digital system. In the digital system there’s a signal and you’re going to process it. All the rest of it just followed.
Signal processing pioneers and application-driven research
You mentioned Ben Gold and other pioneers. What stimulated their interest?
Everything in the early stages had to do with applications. There were a few exceptions. For the most part, if you look at the key contributors to the field in the early years, they all had a speech background or a radar background. The exception was Jim Cooley with the FFT. If you look at the people at Lincoln Labs or MIT or Bell Labs, the centers of the whole thing, all of them were driven by an application. They asked, “How do I do it faster, more accurately, more correctly? How do I actually learn something so I do it better?” They were all driven by specific applications. It wasn’t driven by the academics. There were some exceptions, but they were few. It was mainly driven by real problems. Nobody said “Gee, this new field is coming up. This is really fertile ground for proving theorems and doing all sorts of neat stuff and learning the theory and generating a bunch of Ph.D. theses to look at second, third or fourth level problems.” It wasn’t driven that way.
Al Oppenheim had gotten his Ph.D. in the math area at MIT, on homomorphic systems theory, and asked, “What good is this? What’s the value of it?” He came across all the papers that people had written about non-linear processing of a specific type. He figured out that speech was the right place to apply it. Homomorphic analysis was basically a separation technique in the cepstral domain. It’s in the log Fourier transform domain, what we call the cepstrum domain, which someone had thought about theoretically. As a mathematician, Al now had the tools to make it go, but it was driven by the fact that speech is one of the few signals that separates nicely in that domain, so you can do filtering in that domain.
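The cepstral separation described here can be sketched in a few lines. This is a minimal illustration assuming the standard real-cepstrum definition; the helper name `real_cepstrum` and the test signal are ours, not anything from the interview.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.

    Convolution of excitation and vocal-tract response becomes
    addition of their log spectra, so the two components separate
    along the "quefrency" axis and can be pulled apart by liftering.
    """
    spectrum = np.fft.rfft(x)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # guard against log(0)
    return np.fft.irfft(log_mag, n=len(x))

# A flat spectrum (a unit impulse) has a log spectrum of zero, so its
# cepstrum is (numerically) zero everywhere.
d = np.zeros(64)
d[0] = 1.0
assert np.allclose(real_cepstrum(d), 0.0)
```

Low-quefrency liftering of this output keeps the slowly varying vocal-tract envelope; the periodic excitation shows up farther out on the quefrency axis.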
Can you tell me what some of these applications were?
Applications were vocoders, speech synthesizers, speech recognizers.
I’m aware that the Bell System was using vocoders, but who was using these other systems?
Nobody was actually using them. This was driven by pure research at the time. The military was certainly the driver of vocoders. The military believed they needed secure digital communication. Vocoders, by digitizing the signal, provided the opportunity, and they needed to be secure at low bandwidth. They had all these channels that were not high bandwidth. They were worried about the “Doomsday” effects in wars, where you can only communicate over very low-frequency radio channels. So they were always interested in building vocoders, and they wanted more and more, and better and better. That was driving a lot of the Lincoln Labs research.
The Bell Labs research was driven by the same kinds of questions. How can we provide speech front-ends to our customers? How can we make speech services do something? We worried about the bandwidth being available. No one knew whether we were going to have microwave transmissions or fiber optics. In the early 60’s and 70’s, the question was how to get enough copper under the ground to handle all of this explosive growth of telephonic communication. The answer was to compress the speech signal. That’s a vocoder. It’s voice compression and coding. What drove Bell Labs for the longest time was how to compress speech effectively.
But why synthesis? Well, we wanted services. We knew someday we’d like a machine to read a typed number and say the address associated with it. Or one where you could speak the name and get the phone number associated with it. Why do you need a person to read it to you, why not a machine? It would save some human time and let the human go on to the next problem. We knew they would need machines speaking to people and we knew someday you would be speaking to a machine to get information out. So that’s what drove us. There were also the radar applications. How do I build more accurate, better radar systems? That’s what drove almost all of this work.
Were the radar problems at any level analogous to the speech problems?
I think they were totally analogous in the sense that they needed the same fundamental components. They required that we be able to analyze a spectrum, to filter out signals, and to deconvolve, or separate, signals.
The frequencies are very, very different.
The frequencies are different and the kinds of signals that you handle are different. One is generated by a human vocal tract. One is generated by machine, and you’re looking for blips. There are different noise spectra you have to deal with, and differences in range, but they’re very, very fundamentally the same. In fact, if you look at the first book that Ben Gold and I did, we laid out all this theory and then we said “Now let’s see how you’d apply it to speech.” One chapter is entirely on that. One chapter is on radar. The fact is that we built this whole foundation all the way up to a certain point, and there was nothing that you couldn’t understand in the next chapter on speech, or the following one on radar. So I wrote the one on speech and Ben wrote the one on radar. We showed that if you understood filtering, spectrum analysis, finite word length, all of the pieces that make up signal processing, you could go into any application with it. It’s your toolbox. It’s your theoretical and real toolbox. You can build real systems.
Digital signal processing milestones
Acoustical Society workshop; Fast Fourier Transform
What are some of the milestones in the development of the digital signal processing techniques? Something like the FFT is a conceptual milestone. Are there hardware milestones?
Actually, there are two cross-milestones. There are both accomplishment milestones and event milestones. The events are not quite what you might think. One of the event milestones, and I mentioned this in my original paper, is that there was an Acoustical Society workshop in Boston. Things were starting to happen. The head of the committee at the time was a guy named Bill Lang at IBM. He was studying noise--noise in systems and noise in rooms, acoustic-type noise, which was part of the Audio and Electroacoustics purview. He saw the times moving. He knew about the Fast Fourier Transform work that Jim Cooley had done with John Tukey. The FFT was generating a tremendous stir; it was one of the key algorithms in history, because it takes an n² computation and reduces it to n·log n. This doesn’t sound like much until you think of n being a million, and the log (base 2) of a million is about 20. So you are knocking off nearly five orders of magnitude in computation, and it says that if you go to ten million, the cost only goes up by a factor of about ten; it doesn’t go up by a factor of 100, as an n² computation does. It opened up all sorts of stuff.
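The arithmetic behind that claim is easy to check directly. The short script below is an editorial illustration; the function name `speedup` is ours.

```python
import math

def speedup(n):
    """Ratio of direct DFT cost (~n**2) to FFT cost (~n*log2(n))."""
    return n**2 / (n * math.log2(n))

# For n of one million, log2(n) is about 20, so the FFT is roughly
# 50,000x cheaper: between four and five orders of magnitude.
print(round(speedup(10**6)))   # → 50172

# Going from one million to ten million points, the FFT cost grows
# by only about a factor of 11, where the direct n**2 method grows
# by a factor of 100.
growth = (10**7 * math.log2(10**7)) / (10**6 * math.log2(10**6))
print(round(growth, 1))        # → 11.7
```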
So Lang was hearing some of the buzz over the fact that people were learning how to simulate analog systems in digital form. The workshop occurred in Boston. The two centers of major activity were MIT and Bell Labs, and it was an Acoustical Society meeting, it wasn’t even an IEEE meeting. At the time the Acoustical Society was kind of the center for speech processing work because there was no speech in the IEEE, so the society brought the speech people in to hear about these neat techniques.
It created a marriage made in heaven. It made a lot of people aware of a new way of doing the kinds of things they want to do. It was a tremendous opportunity. That meeting was the mid-60’s (we can look up the date), and it led to the changing of the IEEE committee to one that eventually became the DSP group. The Group on Audio and Electro-acoustics had all these technical committees, and one of these committees became the DSP technical committee. They weren’t called that at the time. They basically said “Okay. The times they are changing. Time to do something.”
Arden House workshop; defining the field
So they planned the first Arden House workshop on the Fast Fourier Transform. They got a lot of the right people on that committee: Charlie Rader was on the committee, I was on the committee, Howard Helms from Bell Labs was on it, and Jim Kaiser was on it. Ken Steiglitz from Princeton, who was one of the very early key contributors to the field, and Dave Bergland from Bell Labs were both on it. Put all these members together, and they knew everybody. As a result, we were able to compile a list of all the people worldwide who we thought were working on the FFT and making contributions. We also made an announcement through IEEE auspices.
We wound up at Arden House in New York, which holds about 100 people. We filled the auditorium. We were hoping to get maybe 30 or 40 people. It was a new field, discussed only by word of mouth. The publication by Cooley and Tukey was really erudite and dense, hard to get into, and it was barely hot off the presses. Those things take a while until their message gets out. Yet we held this meeting, and it was exciting and dynamic. There were a hundred or so people attending. There were probably 60 or 70 papers, and every one of them was based on some new idea or going in some new direction. It was tremendous.
This committee met every six weeks, except during the summer. We met that way from 1967 until I left the committee in the early 80’s. Thirteen years or so. We held four Arden House workshops. We put out books. We put out reprint books, because we knew that even though these papers were out there, it took somebody to organize them. This was the period when the first book came out by Gold and Rader in the late 60’s. The next pair of books came out by Oppenheim and Schafer and myself and Ben Gold in 1975. We felt that since the books were starting to come out, we had better preserve the original source material. It was archivally important. The committee wrote a paper on terminology. We put out two books of reprints. We put out two books of the literature in the field. We put out one book on DSP code.
It’s probably the most productive committee I’ve ever served on, focused and productive. We defined the technology and the field. We put out the key books. Even though committee members wrote a high percentage of the papers, every paper that got into the reprint books went through review after review. We would split hairs about whether this paper was better because it was more historic, or that paper was better because it was more up to date and more factual. We were unbelievably careful. We probably made mistakes, but in hindsight, anyone would have. But these are the events that built up the field.
At this critical moment, when you were defining the field, what were some of the controversies?
Some of the obvious ones. For example, what’s the role of hardware? Another one is “What’s the role of implementation?” Another is “What’s the role of finite word length effects?” Why are they issues? The theory of how you design a digital filter is really eternal. How you do spectrum analysis is eternal. But how you implement changes.
We started seeing that. Remember, along with this digital signal processing theory and revolution came the huge Moore’s law revolution in processors. People started realizing this. Early in the game, when we started building these eight-bit implementations, we were really pushing it. A lot of people said “Here’s the nice digital filter design technique, but you have infinite-precision coefficients. How do you build them with eight-bit coefficients? Because that’s all I can give you.” There were other people who understood that someday we would get sixteen bits. It might be in a thousand years. Sixteen seemed almost infinite. Today we work with 64-bit processors.
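The coefficient-quantization worry recalled here is easy to demonstrate. The sketch below is an editorial illustration, not any specific filter from the era: the windowed-sinc design and the `quantize` helper are our own assumptions.

```python
import numpy as np

def quantize(coeffs, bits):
    """Round coefficients onto a signed fixed-point grid of `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    full = np.max(np.abs(coeffs))
    return np.round(coeffs / full * levels) / levels * full

# A 31-tap windowed-sinc lowpass (cutoff at a quarter of the sampling
# rate), purely for illustration.
n = np.arange(31) - 15
h = np.sinc(0.5 * n) * 0.5 * np.hamming(31)

h8 = quantize(h, 8)     # "that's all I can give you"
h16 = quantize(h, 16)   # sixteen bits once seemed almost infinite

# Worst-case magnitude-response error caused by quantization.
w = np.linspace(0, np.pi, 512)
H = lambda taps: np.abs(np.exp(-1j * np.outer(w, n)) @ taps)
err8 = np.max(np.abs(H(h) - H(h8)))
err16 = np.max(np.abs(H(h) - H(h16)))
assert err16 < err8     # more coefficient bits, less response distortion
```

Plotting `H(h)` against `H(h8)` shows the eight-bit stopband visibly degraded, which is exactly the design problem eight-bit hardware forced on early filter designers.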
These issues came up. Did we really want to devote space in a book like this to something that we knew was going to change? Did we really want to go out and talk about hardware implementations? Did we really want to talk about what it means to have an oscillator or a frequency synthesizer (which is really what the digital equivalent is)? What did it really mean? We talked about memory. Some things were really good because they used little memory. We talked about building 200-point filters and said “That is inconceivable, because that is just too much processing.” Today, we talk about 10,000-point filters and say “That’s a fraction of a percent of a modern DSP’s capacity, so why worry about it?”
These were the issues that we worried about. The fundamentals and the theory were no problem. The things we split hairs on were about organization and presentation. If the books are there and we assume people are going to read the books, do we really want to supplement the books? Do we want it historical? We worried about all the aspects, and there’s no answer to these questions. You make decisions, and you try to serve the field as best you can.
Industrial and academic reception; publications
You’ve said something about the supply side for all of this activity in DSP. How about the demand side? You pointed out that these books were very successful and that there was a lot of interest. How do you account for that?
Well, I think that’s obvious. This was a revolution. We were the pioneers and the evangelists of a revolution when we first started. A lot of people came in kicking and screaming; like any revolution, DSP tore away the underpinnings of people’s livelihoods. We developed the books and supporting material that were used in courses. It became easier for somebody who was not a creator of the technology to give a course on the field. Oppenheim and Schafer’s book was out there, and all the material you might want to read in support of that book is sitting in two books you buy from the IEEE. Each one has maybe 40 or 50 articles. You don’t have to go track down 50 articles from the book’s footnotes. The key ones are there. It was easier to teach, so most of the committee went out and gave courses. We proselytized.
There was insatiable demand because as hardware started following Moore's law, the stuff that the books talked about became reality very quickly. People realized that if you went digital, the exponential revolution was working for you. If you went any other way, the best you were going to do was keep costs flat, and probably costs were going to keep increasing with inflation. The economics of digital were crazy. When you have three percent inflation vs. a fifty percent per year deflation, guess which wins? So, we had a waiting audience saying “Show me the light. Make it real.” As people started building ASICs, DSP chips, and microprocessors, DSP was implemented and realized.
When you say a waiting audience, do you mean excitement on the part of students, or young professors, or people in the industry?
All of the above. If you look at the Signal Processing Society, which was called the ASSP Society at one point, and before that the Group on Audio and Electroacoustics, there’s a little curve of the membership. This isn’t just the students. The thing about students is that they are a forty year effect: they’ve got to go out, start using it, start building things, before you see their impact. Professors are a twenty year effect. People in the field are a zero year effect. The field just started growing rapidly. We used to have fewer than a thousand members. The last I looked there were about 15,000 people in the society. We’re one of the largest societies in IEEE.
It’s the right thing with the right economic dynamics. When people start seeing it, the demand is there. People and companies jumped in: TI, Motorola, Rockwell, all jumped in really quickly. Academia jumped in really quickly. Al Oppenheim had his book out there. Ron Schafer had a book out there. Ben Gold and Charlie Rader had a book out there early, but, like all early books, it was a little too early. Even in ‘75, when the books came out, I think the basics were pretty well developed. But at one point, we tried to count the books that came out in DSP from 1975 to 1995. They’re uncountable! They’re in the hundreds. I remember sitting with Hans Schuessler once at an ICASSP, going through the book fair, and every vendor must have had at least ten current books on DSP, and that’s not counting books that have stopped selling. It’s twenty-one years after ‘75, when those two books came out from Prentice Hall. There have been hundreds of books in that period. There was a tremendous pent-up demand for this stuff. People went out and did things with a computer, and after a while it became digital hardware and then it became real. In fact, Andy Grove of Intel worried about how many microprocessors we would consume per day.
I was struck by how it seemed that most of the direction that you’ve described came from the research side rather than from industry.
I think this is one revolution that was completely orchestrated by the research side. I think of the DSP committee that we were on, and the people on it. I can’t think of a single person who came from industry who stayed on in the early years. I mean AT&T is technically an industry, but it was really Bell Labs, which was the research side.
I have a feeling that if you looked at the DSP committee right now you’d find that it’s dominated by industry. I left it in the early 80’s and I think Jim Kaiser left a few years after that. All the people I know were in academia or industrial research. There were just no Rockwells or TIs. TI has had people since the 80’s, since they started getting really big into it. But, what really pushed them in was the DSP chip. Once the chip was available, it was able to achieve real-time performance in the things we wanted to do in speech processing in the early 80’s. By 1988, almost all of speech processing was totally consumed within the DSP chip's capability.
Now, we’re in the video range. Not all video is in there, but by the year 2000 we believe that even the most advanced video processing algorithms will be included within the DSP range. Right now, almost all speech is within the range of a Pentium chip that you can buy these days for $75. You don’t need the top of the line DSP anymore. In fact, right now we talk about whether we want to use the cheapest 386 chip for $.50, because it still has the capacity to do most things, or do you really want to spend $3 on a DSP that does the same thing. These are crazy numbers! We used to hope we could get under $1000.
You buy speech functionality and now it's like programming, because of the DSP chip and because we have the theory and the algorithms established. You don’t think about it. You know it and you go on to the real problems.
You used the term evangelizing before. I wonder if you could say something about your motivation for this evangelism. Was it intellectual excitement?
We were all pioneers in a field. The most exciting thing for a researcher to work on is something that nobody has ever done before. It’s a lot less fun to find out that you're the second person to come up with cold fusion. You don’t want to be the second person working on something that’s new and exciting. The evangelism came from the fact that we, the IEEE DSP committee, had an opportunity to go out and mold, shape, and make this work.
We didn't own any monopoly. Anybody could come in on this. But we had a concentration of people in those early years who did all the work, who managed all the work, who knew all the work. There were others, great people who did some really neat stuff. Tom Parks and Sid Burrus at Rice. I can name lots of examples. John Markel out on the West Coast. But they were isolated.
Here we were, this group of fifteen or so people meeting every six weeks, and not your usual wasteful BS meeting. We discussed everything. In fact at some meetings we even said “What’s the hottest thing that’s going on right now?” We were excited, we went out, and I’d be willing to bet from 1967 until 1980, if you look at the outside courses taught, these thirteen people probably taught an unbelievably disproportionate share of them. They wrote a disproportionate share of the new technical articles and contributed a disproportionate share of the material.
We were all evangelists. We fed on each other. Every six weeks you came back and the excitement level was always at a pitch. I’ve been on committees where after two years you barely see a written report. In the thirteen years I can think of, we had five books, a terminology article, and four successful workshops. These are not just “We organized a workshop,” either. Anybody can do that. We created workshops. Every one was orchestrated. We didn’t just say “Send us your papers and we will act as the paper keeper.” We orchestrated. We said “What are the hot topics? Who should be giving those talks?” Every single session had the key people doing the work talking about the hottest things. That’s exciting.
I remember once organizing a workshop for the National Academy of Sciences on Human-Machine Voice Interaction. I got a group of the top people in speech that I could think of in the world as a steering committee. I put it together and I said “If you had your dream choice of thirty or so speakers for a three day meeting on a bunch of topics, who would they be?” And we chose them. When you choose people who are world leaders in the field, who are really good, you’re lucky if you get a fifty percent hit rate. They have to be willing to do it and travel out to California at the right time. We had a 100% hit rate. That's exciting. Why? We had the right people, and it was absolutely an elite event. The National Academy of Sciences of the United States was sponsoring it. Everybody knows how prestigious they are, etc. This is what it was like then. We would say “Who is the right person?”
You know at the Arden House conference, one of the first people we wanted was Dick Garwin, who was one of the people who was a catalyst of the FFT. He was at IBM. He found John Tukey. John said “I’ve got these ideas that I’m developing and I need some bright young mathematician to help me out on some of this.” Garwin went to Cooley and they talked about that. The first thing we said is “Everyone should understand the history of the FFT.” Dick Garwin came to our meeting. He was a senior executive at IBM, and he came to a damn technical meeting. He probably hadn’t been involved in that in years, but he poured his heart out for forty-five minutes, telling us how Cooley, a young Ph.D. mathematician with no background in speech, got together with John Tukey, a world famous statistician.
Every time we invited somebody they came. If we asked somebody to come from Germany, because they had been doing some neat stuff in digital filter design, they came. It didn’t make a difference if it was in the middle of the academic year. They came. Everybody knew that this was a group of people who were making it happen: inventing the future, creating it, envisioning it. It didn’t mean we were right about everything, but when you have enough good people together you tend to be right.
I think really good people feed on each other, they challenge each other. I used to go away from these meetings and put my job aside, and go work on the terminology paper. I would work around the clock for eight or ten weeks because we knew we needed another version of something. People worked like the Dickens on this, unpaid. We had a love of what we were doing, and this concept that “we were the pioneers.” We had an opportunity to do something you don’t often get a chance to do—to define a field.
Research and applications; decimation interpolation
Was all this activity enough to sustain your interest, or was there a desire to see these techniques and technologies realized in commercial systems?
It was too early. People like Ben Gold were continually building these very fast machines. He built one called the FDP, the Fast Digital Processor, and also the Lincoln Digital Processor. He was like that. He said, “I have to see this become real. I have to be able to talk into that machine, and it has to use all this neat processing and do the right kind of thing.” Certainly he was one of the people that really moved that field forward. The answer for the most part, however, is no. I think the excitement was enough by itself. We all knew that the time was not ripe for it to go out in a commercial world.
It is today. My company makes a lot of money using signal processing technology in speech processing, in DTMF generation. We wouldn’t consider anything but digital signal processing. In fact, no one would. IEEE just made this Kilby medal for signal processing. But at that time, I don’t think commercial application was a motivating factor. In fact, until AT&T divested itself from the RBOCs, there wasn’t the economic pressure to get something out there. I always wanted to make sure that the hardware that I was working on was always getting better and better, so that we were always in the game there and ready to go. But most of the applications I told you about had been in research for fifty years. The vocoder was invented in the 1930s by Homer Dudley. He also invented the voder, which was the first speech synthesizer, and there’s a history of those going back to the 1800s.
Most of the applications work was evolutionary. DSP was revolutionary. Very honestly, when you’re part of a revolution, it’s much more exciting. It’s self-sustaining and feeds on itself. Every six weeks I’d come into that room and somebody would have another neat piece of that puzzle, and everybody would be fascinated by it. Look at the contributions: the CZT, number theoretic convolution, number theoretic transforms. They all happened there.
Were these different contributions to a unified field? Were these papers citing each other and building on each other’s results, or were they independent spears thrown out?
Probably a little bit of both. There was a core of work that everybody took advantage of. As you move out from the core, though, you realize that there’s some really independent work. For example, for some of the work we were doing, we needed good random number generators. People had made some evolutionary progress on those. But when we went digital, we said, “Why don’t we throw away all these things?” So that was something that had nothing to do with the core, but it builds from the core. That’s one of your “spears thrown out”.
In fact, for a while I remember working with Charlie Rader. We studied random number generators in the digital form, and we came up with some neat ones. The question was “How can we take advantage of our new digital knowledge?” Every once in a while, people would ask how to do something that was unthinkable in the old analog world. And all of a sudden you say, “Now it’s not only thinkable, it’s doable! So let’s try doing it.”
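The interview doesn't specify which generators Rabiner and Rader studied, but the classic purely digital construction is the linear congruential generator. A minimal Python sketch, with illustrative constants (the common Numerical Recipes choice, an assumption here), just to show the kind of exact integer construction that only becomes natural once you go digital:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x[n+1] = (a*x[n] + c) mod m.
    Pure, exactly reproducible integer arithmetic -- no analog noise
    source anywhere. The constants are illustrative, not the specific
    generators discussed in the interview."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale to [0, 1)

gen = lcg(seed=42)
first_five = [next(gen) for _ in range(5)]
```

Because the recurrence is deterministic, the same seed always reproduces the same sequence, which is itself a feature unthinkable with analog noise generators.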
A great example, that turned into a field unto itself, is decimation interpolation. How would you take a digital signal and change its sampling rate digitally? The obvious thing is go back to the analog world and re-sample it. Everyone had been doing that for years. There’s a million things wrong with that, from every point-of-view. I started this with Ron Schafer. We wrote a paper called “A Digital Signal Processing Approach to Interpolation.” It’s the neatest thing in the world because if you don't have to go back to analog, the whole world changes for you. Here are a bunch of numbers, and I want to represent them in a different sampling space. It’s a neat theoretical problem and there are some really neat solutions, and I wrote a book with Ron Crochiere called Multirate Digital Signal Processing.
That’s the key for a lot of people. You never even know where it’s going to be used. Compact discs sample at 44.1 kHz, a crazy rate. Digital audio tapes sample at 48 kHz. There’s no nice relationship between them, but there’s a way of using digital techniques to go from any rate to any other rate, and we described it in our book. When I went to Sony in Japan, one of the key guys in their DAT technology said, “We have built into the CD your algorithm that lets us go out and provide a digital signal at DAT rates,” because when people record digitally, they’re going to put it on a DAT, because there is no CD recording.
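For a rational ratio L/M, the textbook multirate scheme is: upsample by L (zero-stuffing), lowpass filter, downsample by M; 44.1 to 48 kHz is exactly a 160/147 change. Here is a rough Python sketch of that pipeline using a simple windowed-sinc filter. It is illustrative only, not the actual CD-to-DAT implementation, and the filter length and window are arbitrary choices:

```python
import math

def resample(x, L, M, taps=101):
    """Rational rate change by L/M: upsample by L with zero-stuffing,
    lowpass filter, then keep every M-th sample. A textbook sketch."""
    fc = 0.5 / max(L, M)              # cutoff (cycles/sample, upsampled domain)
    mid = (taps - 1) / 2
    h = []
    for n in range(taps):             # windowed-sinc lowpass, Hamming window
        t = n - mid
        ideal = 2 * fc if t == 0 else math.sin(2 * math.pi * fc * t) / (math.pi * t)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (taps - 1))
        h.append(L * ideal * w)       # gain L compensates for zero-stuffing
    up = [0.0] * (len(x) * L)         # upsample: insert L-1 zeros per sample
    for i, v in enumerate(x):
        up[i * L] = v
    y = []                            # convolve, keeping every M-th output
    for n in range(0, len(up), M):
        acc = 0.0
        for k, hk in enumerate(h):
            j = n - k
            if 0 <= j < len(up):
                acc += hk * up[j]
        y.append(acc)
    return y

# 3-to-2 rate change on a slow sine; the same machinery handles 160/147.
x = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
y = resample(x, L=3, M=2)
```

In practice the zero multiplications are skipped with a polyphase structure, which is what makes the method efficient enough for hardware, but the brute-force version above shows the idea.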
Isn't that an example of an application problem providing the motivation for doing this research?
But it wasn’t the motivation for the work.
So your work came before?
My work came completely before. We didn’t know why anyone would want to do this; it was just that, digitally, you could do it. When one of these clever people in Japan read the book, he said “I can build that in there.” I went to a show on image processing, and one of the hardest things in image processing is changing your sampling rate. You go from 512 by 512 to 603 by 607. There’s no relationship between those numbers. They handed me a free copy of their software, and said “I’m using the algorithm in your book and it works perfectly.” They took a picture and said, “You tell me any dimension by any dimension.” Most algorithms show this horrible distortion all over, because it’s really sloppy. These guys read the book, said it’s perfect, and they were selling software.
For the history of technology, then, when you’re looking at research that is pursued just for its intrinsic interest and not motivated by any specific application, the real question is what steers the researcher in a given direction.
I think there's a synergy, a symbiosis there. The early research most of the people did was pushed by a real application: vocoding, synthesizers, recognizers, etc. Then you say to yourself, "what is the real roadblock?" And you might answer "I need filtering algorithms.” We used to use filtering and we got the best we could, but after a while you said, “I want to be sure that I did this right.” That's where you take your diversion. Then while you’re working on filters, you see a lot of key problems that don’t relate back to the original one, and you work on those, because you say, “These are part of the filtering problem, even though I don’t know why anyone needs it or wants it.”
When we first did interpolation it was because we actually had a real problem: speech is sampled typically at 8 kHz, but we wanted to be able to use other sampling rates within the Bell system at the time. There were always nice integer relationships: 8 to 16 kHz is a simple one, and 8 to 6.67 is a 4-to-3 relation, not quite as nice, but still a ratio of integers, and those are a neat thing to solve. But we said, “What would happen if the relationship was so complicated that you really can’t express it?” Pi to e relations, for example. And we said, “Oh, it’s just an interesting problem, let’s try it.” And we came up with a really neat solution. Not in my wildest dreams did I think anyone could ever use it. It was motivated because it was part of the problem we were solving, and it was just one of those interesting off-shoots, and you say to yourself, “I won't devote a year to solve it, but you know, is it worth a week?” It’s one of those decisions you make. I mean, why did people want to solve Fermat’s last theorem? There’s a lot of neat off-shoots of it.
But the answer is that one feeds the other. The applications drove us into looking at spectrum analysis, they drove us into looking at filtering techniques, they drove us into looking at interpolation. But once you get in there, it’s a whole new world. You ask yourself, “Should I stop at the piece I need to solve that problem and go back?”, because then it's a diversion, "or is it more important to help establish that field before going back.” I always took the second tack. "If I don’t solve it now, I’ll never come back and do it later." Maybe somebody else will, and of course that's what usually happens. That didn't mean we solved everything, but if we were aware of it, we certainly made the effort.
For example, I like to think of my life. I joined Bell Labs permanently in '67 with a Ph.D. and for the next thirteen years of my life it was all DSP work. Even though I was ostensibly in a speech group working on synthesis and vocoding and speech recognition, I primarily did DSP work. Because every time I tried to do anything that I really wanted to do, I was always hit with a DSP problem. I always felt it was more important to solve those problems right.
You mentioned the shaping effect of the DSP Committee. Was the committee membership stable?
It was stable, but changing. It was stable in the sense that people who got on early very rarely left the committee. Still, we grew. In the early days we were probably about six or seven people, and we grew to probably about fifteen to twenty at the end. Al Oppenheim graduated some Ph.D. students who joined us. Ron Schafer joined us. Ron Crochiere joined in the later years. Harvey Silverman from Brown, who had worked with Jim Cooley at IBM, joined us. A number of people came on. Cliff Weinstein, who was one of Al Oppenheim’s students, joined us. But we didn’t grow to thirty, and hardly anybody ever left. There was just too much excitement: it was the right place to be. So the answer is, yes it was stable, but it was not moribund.
Who were the most influential members?
That's like asking who’s the quarterback. I was president for a couple of years, Charlie Rader was the president for a couple of years, Jim Kaiser was a key contributor. I think everybody really was. The person who became the head of the committee was just the person who assumed that responsibility. When I was not head I never thought of myself as a lesser member or more. It was just a group of people who got along well, respected each other tremendously, and knew that we had the opportunity of a lifetime. We left every committee meeting with a lot of work for each person. You never had to say, “Damn it, you have to get this done, we’re meeting in six weeks.” Everybody just knew it. Everybody knew they’d be letting down their colleagues, friends and themselves. It’s not often you feel that way for an unpaid, voluntary sideline. And everybody just did it.
You alluded to the influence the committee had in defining the field. Was there always consensus about the field and those issues?
I think there was consensus in the sense that whenever anyone had a real issue it got raised, it got resolved, it got thought about and it never lingered. We never formed armed camps. It was the most congenial committee that I think I could ever be on. It was something I would have never given up until I realized that things change. I can’t think of a committee anyone would stay on for thirteen years voluntarily, and I’ve been on lots and lots of committees, even ones I’ve liked. After a few years you say, “It’s time to move on.” We had problems: we were not without debate. But we debated, and we came to resolution.
What were some of the issues? What was up for grabs in the shaping of the technology?
Some of the issues were: was the FFT as much of a contribution as people made it out to be? There was the I. J. Good algorithm that had preceded it, and there was a whole history that people started tracing, long before the Cooley and Tukey presentation. Are we making too much of that? Should we be going into directions like number generator algorithms? What defines the field and what defines the fringes?
You settled these questions in the context of putting together a conference and figuring out who to invite?
We would debate it, we would talk about it. We would say, “How does it fit in? Is it core or is it not core?” You know, like any good field, the off-shoots are almost “N sigma” away. If you want to go to six sigma this fits in; if not, it doesn't. Another big controversy occurred when people proposed these number theoretic transforms. Was that a math topic, looking at our stuff and just applying a lot of fancy math? That’s what it looked like to a large extent. Or, there was a class of implementation of digital filters that supposedly had all sorts of neat flat signal properties. Are they real or not? You see there were some phases that some of the committee went into, and a lot of people wouldn’t buy into them. We would resolve them exactly as you said. We would say, “Okay, let’s find out everything we can about it, and let’s see if interest really explodes in this, or is it just so-and-so’s interest and nobody else is gonna jump into it.” The good stuff, it explodes. Everybody jumps in.
It was like a really good, working family. We yelled, we screamed, we debated, but we all came together. I don’t think anyone said, “I quit this committee” or got really mad at anybody. We were all friends. We used to meet at the IEEE building in New York. We’d go in every six weeks and I wouldn’t miss that for the world. I can’t imagine anything that would take precedence over that, except maybe a major event in my family, and that doesn’t happen during work days.
Can you tell me something about the channels of communication for the results people were coming up with? What were the important publications to read? What were the places to publish? Or was it all by conference?
No. We tried to use the IEEE ASSP Transactions. We encouraged that in every way we could. We tried to make sure the editors were people who understood the field well. In fact, usually it was one of us editing. We tried to make sure the editors were real editors and not just order-takers. So for example we’d ask the editors to come to our meetings and we'd ask, “Who are you inviting to give tutorial papers? What do you think of the key...”. We published our own stuff, the books, the IEEE terminology paper, the program book.
When we started publishing the literature it served as a sanity check for us. Are we really covering the field right? Are we not missing things? We worried the most about whether we were too inbred. As I said, MIT, Lincoln Labs and Bell Labs were the heart of it. We did everything we could to start adding people, but meeting every six weeks on the East Coast made it hard. We brought in some California people from time to time. They didn’t stay; they weren’t the key people in the field, and that’s just the reality of it. But we worried about being inbred. We all knew each other, we all worked together, and it does lead to being inbred. There’s both good and bad to that. We worried about it, but we never really resolved it, and the bottom line was we convinced ourselves we had all the right people all the time. “Stop worrying about it.” That was the bottom line. We didn’t have any of the West Coast people. Well, who were the West Coast people that we didn’t have? You could name names, but they just weren’t of the same stature in the field.
Again, I’ll give you this list, but I thought to myself, “What are the workshops?” I gave you the history paper, and Ron Schafer gave a historical talk at the last ICASSP, so you can get a hold of that. Peter Lewis and Peter Welch were never on the committee, but they worked with Jim Cooley. But everyone else was on the committee that I named, and in fact I named the committee because I added that afterwards.
For Theory and Application of Digital Signal Processing, my first book with Ben, we spent the better part of a week trying to figure out what the unified view of the field was. What’s an off-shoot and what’s not, and how to organize the book so that it will help you. Then I have the early DSP books: the Rader and Gold, the Oppenheim and Schafer, my book with Gold, and the IEEE books. Ken Steiglitz and a guy named Allen Gibbs in Australia had actually written papers even before the early 60s on digital linear time-invariant systems. That’s the fundamental basis for DSP, and in fact every textbook has digital systems as a kind of front end. There were people who worked on digital filter design, analog design methods, digital design methods. I even brought back some of my earlier things I looked at.
There’s a guy at GE named Marcel Martin who really just thought of numerical filtering. He didn’t call it digital, he said numerical filtering. He was taking lists of numbers and figuring out how to process them. The stock market ticker, that’s a numerical filter. He didn’t think of digital filters, he didn’t think of signals. He said, “If I have bank rates or failure rates, I don’t want to look at any of the specifics, I want to look at the trends.” He used numerical filters. There was a guy named Tony Constantinides in England who looked at those and the signal flow designs. This was in the paper I mentioned.
Then there’s the FFT methods, spectrum analysis, the chirp-z transform work that was done, spectral analysis based on the FFT, number theoretic FFTs, and non-linear methods. Finite word length effects, that’s one of the things that we argued a lot about: what happens when you get finite registers and round-off noise? Then there’s two-dimensional and multi-dimensional signal processing. We were speech and radar people, and these are all kind of one-dimensional, but all of a sudden the oil exploration people started putting in sensors and looking 3-D. We brought some of these people in, but we never quite pulled that through. Then there was hardware, but hardware goes off in its own direction.
Wouldn’t the hardware implementation be closely related to the finite word length effects?
Totally related, but it also had to do with how to build a structure that you can embed this thing in. So for example, Leland Jackson went out and formed his own company to actually build digital filters because he had studied all of the hardware implementations. He wrote some classic papers in that area. People looked at how to build hardware for the government, because the government needed huge FFTs for some of its work. People built fast processors. So this is the list I made for you.
This is really useful. Thank you.
You can have that.
Evolutionary vs. revolutionary research; circuit design
Earlier you talked about evolutionary vs. revolutionary developments, the extent to which digital signal processing was simulating analog systems versus realizing systems that had no analog counterpart. What was the relative weight of those two different strains?
Well, the evolutionary part was the fact that the applications that drove us were evolutionary. Most of the people looking at them started looking at the pieces of it and tended to start off in the evolutionary mode until they realized they weren't getting anywhere. They switched into the revolutionary mode because they found out about the FFT algorithm, which changed the whole concept of being able to do spectrum analysis.
For a long time spectrum analysis and digital filtering were identical. Digital filtering involved designing a bunch of filters so that you could put a signal through and find out the spectrum in a band. You’re passing the speech signal through a bank of filters, and that gives you the spectrum. The FFT is equivalent to filtering, but with a certain kind of filter. What the FFT introduced is that you could do it numerically. You can control properties of it. You can control all sorts of neat ways of looking at the signal in a way that’s unthinkable just by filtering techniques.
The FFT algorithm itself was just a numerical algorithm; it was a way of computing the discrete Fourier transform. It completely changed thinking in a revolutionary way. And once you started seeing that, you started saying to yourself, “My God, there’s a whole world of things I can do that were unthinkable in the old world", and that’s revolutionary.
Same thing with digital filters. All the early work in digital filters involved taking an analog thing and digitizing it. That was very clever. Evolutionary. But somebody said, “When I’m all done, I can have this class of filters that has no inverse back to an analog. What does it mean?” So you say, “Now, let me start again. Let me throw away all of these ground rules, and develop a true digital design method.” All of a sudden, the analog field almost went away, overnight, because digital filters do exactly what you want. The old filters were always approximations: the analog filter approximated what you wanted, and then the digital filter approximated the analog. The analog didn’t do what you wanted, and the digital version of the analog didn’t do it either; it didn’t even model it. This one did exactly what you wanted, with no approximation up and down the line.
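One concrete property with no analog counterpart, which illustrates the "exactly what you want" point, is exact linear phase: a digital FIR filter with symmetric coefficients has perfectly constant group delay, something no analog filter (a rational transfer function in s) can achieve exactly. A small Python check, using an arbitrary symmetric filter chosen purely for illustration:

```python
import cmath, math

# Any FIR filter with symmetric coefficients h[n] = h[N-1-n] has
# exactly linear phase. These coefficients are an arbitrary symmetric
# example, not a filter from the interview.
h = [0.1, 0.15, 0.5, 0.15, 0.1]

def freq_response(h, w):
    """H(e^{jw}) = sum_n h[n] * e^{-j*w*n}"""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# Factor out the pure-delay term e^{-j*w*(N-1)/2}. For a symmetric
# filter the remainder is real at every frequency, i.e. the phase is
# exactly -w*(N-1)/2: a straight line, constant group delay.
delay = (len(h) - 1) / 2
residuals = [freq_response(h, math.pi * k / 64) * cmath.exp(1j * math.pi * k / 64 * delay)
             for k in range(64)]
max_imag = max(abs(r.imag) for r in residuals)
```

The symmetry argument is exact, not an approximation: pairing the terms h[n] and h[N-1-n] in the sum cancels the imaginary parts identically, so `max_imag` is zero up to floating-point noise.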
Now, because nothing’s perfect, it eliminated two degrees of approximation. You could say to yourself, “What if I don’t want a flat digital filter, but I want one with a minimum signal output for a given filtering characteristic?” That's unthinkable in the analog world, but totally thinkable in this.
So the thing that drove most people into this was evolutionary, the approach that they took in looking at it was pseudo-evolutionary, until they started getting this epiphany. Then they realized, “My God, it’s different,” and became totally revolutionary. And why is it revolutionary? Because you have nothing to fall back on.
What’s the effect of this on the circuit designer, someone who puts together comb filters or other classic filters?
It’s hard for me to say, but I had a lot of friends who were into circuit design. When I first started doing this, they were called analog circuit designers, and they knew how to do these things. I remember a project in the mid-to-late '80s. DSP had changed that field so radically that we needed somebody who knew how to do analog design, and such people were almost extinct. Everyone was doing digital design, and in fact, the project was looking for somebody who could mix a little bit of analog with digital, and there were so few people that they treasured them. Yet they had been retraining those people not that many years before, because they thought no one was going to need analog design anymore.
So circuit designers had to learn very quickly. Of course, the whole digital world changed anyway. Mead and Conway published their book on VLSI design for idiots, or whatever the right term was. I remember having a Mead-Conway course in the mid-to-late '70s on how to learn the rules, submit a design, and get the digital circuit working and returned to you, no cost and no time. Everybody who was a designer took the course and they converted almost overnight. It was a total change for everybody in the design area.
Well, certainly in their career trajectories, but do any of the filter concepts cross over?
By the time it got to the circuit design people, they didn’t have to worry about that. They learned it as something fundamental, just like they learned VLSI. And they started using poly-cell design: if you need some filters in there, you pop some poly-cells in and go on.
I always tell people that to understand the FFT, you program it once and throw it away, then pick up a good program that someone else has written. Why would you spend two years of your life writing a really good one, when you can write a really lousy one in a day or two, get it working, understand everything you have to understand to use it all your life, and then throw it away? You can’t have a better situation. So I think that’s the situation there, but you should ask the circuit designers. Ben Gold could give you a much better view on that one.
Maturation of the digital signal processing field, 1980s
We’ve been talking about topics pulled out from the last twenty or thirty years. Can you give me a sense of the real contour of the development in this area, particularly over the last few years?
I think the technology is so mature, that it’s second nature to everybody. Everyone you hire is absolutely steeped in the technology, knows it inside out, and uses it without thinking.
Did this maturing take place in the early 80s?
Probably in the early-to-mid '80s. When people come in now, they are just so good with it; it’s just fundamental. It’s being taught to undergraduates throughout almost the entire world. People don’t even think about it; it’s like programming. When I graduated from MIT, there was one programming course offered, and I took it. I taught myself most of the programming. Today, kids come in and they know Perl and C and C++ and Basic and all the variant languages. They know all the languages; you don’t even think about it. You just make the assumption that any EE/CS student knows DSP. And if they don’t know it, they’re in deep trouble.
So the answer is, the field has matured. It’s just pervasive. It’s a digital world; no one would even think of saying anything but that. The CD drove the music revolution. Nobody buys records anymore; records are gone. Even cassette tapes are going. There are, what, 560 million CDs sold a year, and probably a fair number of cassettes. But I haven’t bought a cassette since I bought a CD player. Why would I? It’s crazy. It’s throwing away your money on something that’s guaranteed to degrade over time, versus something that will last forever if you treat it well.
You’ve identified some issues that had to be addressed, like finite word length. Which issues become important as the external technology progresses? Which ones come up?
They’re all so well resolved, I don’t think any of them are in that class. I’m probably not the right person to talk to; you probably want to talk to an academic on this. Most fields go into what I consider to be the first level of research: How do you define the field? How do you define the key issues? How do you get the solutions to the key ones? Then there’s the second level. Those are the interesting offshoots, for example how you do spectrum analysis in this environment. Still interesting, but down to a second level. Then comes the third level, the realm of interesting Ph.D. theses, like what’s the difference between numerically exact solutions and integer linear programming. They’re of interest to a small group, but probably no one will ever use them. Finally there’s the fourth level of stuff that nobody cares about. It’s fill-in Ph.D. work, MS work. Why people do it is beyond me.
I think the first-level problems in this field were solved by the mid-'80s. The second-level problems were almost all solved by the mid-'80s too. There are still some second-level problems, and third and fourth, that people are working on, and it’s not that they’re not interesting. I probably shouldn’t say this because it’s elitist, but I have not been excited by any of the articles in ASSP on these problems in forever. So much of this work is third and fourth level that it’s hard to find the second-level problems that are exciting. I’m at the point in my life where I’m not going to read that anymore. In the 1960s and '70s, every journal was read, devoured, pursued, and I knew every article as though it were my own. That’s how much the world was changing.
When we were talking about literature and channels of communication, you said people were trying to publish in ASSP Transactions. What outside literature was crucial to read?
There was a signal processing magazine, but it didn’t come until '84; the ASSP Transactions dominated the field. One way to look at it is to look at those first two reprint books: what percentage of those articles came from ASSP? It’s probably in the 80 to 90% range. The Cooley-Tukey paper was published in another journal, a math journal.
There was nothing in SIAM or other journals?
No. The reason is that the field was a maverick, unwanted by everybody except one group in the IEEE that said, “We want it. We’re going to make it our own.” If we sent our papers to a lot of journals, they’d say, “This doesn’t fit our view of what we do. Send it somewhere else.” The IEEE Proceedings took some: in the '70s there were two special issues in the Proceedings, one on DSP and one on speech processing, but for the most part the ASSP Transactions won big. ICASSP came out in 1976, and it’s the field. We created ICASSP. It grew out of a speech conference we used to have, and the whole field is in there. Between ICASSP and the ASSP Transactions, you have everything. In fact, my view is that the reprint books and the current books in the field have everything; ICASSP gives you good fill-in.
Influential figures in DSP
Can you suggest specific subjects we haven't covered?
Well, it’s tough, everyone is such a different personality. Ben Gold is the godfather of the field, no other way to describe it. He’s the idea person. He’s the “what if?” person. He’s the person who saw everything, who envisioned everything. He’s the person who said, “Someday there are going to be little chips that do this, but I’m going to build the big processors.” The thing is, how did he have such vision, such a clear view of the future, long before Moore’s Law was even thought of? He just knew, and he always drove us. Charlie was the algorithm person. If it was neat, mathematical, challenging, really pushed you, he was the mathematician par excellence who did it.
Jim Kaiser had a very, very broad role. He would delve into a bunch of topics as he saw them. He did some neat thinking about a lot of the topics that turned out to be significant in the field. He was also one of those people who was a cheerleader on a lot of the things. He didn’t worry about whether he had done it or not, he would just push the project. He could see the direction. When it was really presented to him and he could see it he would just jump in. He didn’t have to say, “I have to do it myself.” He’d say, “Have to keep pushing that kind of stuff.” So he had that effect.
Jim Cooley was just absolutely the grand person on the FFT. He proselytized, he published, he spread it, and he did everything he had to. It was a one-time thing. Al was an engineer with his Ph.D. in the math area. He would do something and ask, “What the hell can I do with it?” He found something and really made it go. He’s been the academic leader of the field for its entire history. He probably made the best and broadest contributions that anyone could. He trained most of the people who are leaders in the field, throughout the entire period. That includes Ron Schafer, who was his first student, and it just goes on and on.
Ron was Al’s first student, and he picked up on that. He worked with me for about five years, and then joined Georgia Tech. You might ask, “Why did you go into academia when you were in the middle of the greatest industrial research laboratory in the world, where you could do anything?” The answer is that he’s a born teacher, and he saw the opportunities in getting the teaching material out. Having just finished the book with Al, he had the opportunity to train generation upon generation of students. I think he has the best of both worlds. Al’s been in academia all his life; each of these other people has been in industrial research. Ron started and got a Ph.D. with the best, came and worked at the best lab ever, and then went off to the academy. He has the broadest outlook on both worlds of anybody you’ll ever meet.
Hans Schuessler’s been in academia all of his life. He came late to the game. He’s a traditional analog circuits man. He saw that things were changing around him and decided he’d better learn it. Not only did he learn it, he started asking the right questions. His view was, “If you can’t build it, what does it mean?” It didn’t make a difference to him what it took in digital hardware, or how expensive it was; he just built it. Because that’s the proof of the pudding in the German system.
Does going through that stage point out any problems?
He had a unique brand of research because of exactly that. He had direct insights into all of the problems that come up when you actually try to build these systems. He fit into the mainstream with a lot of the things he did, but he also went off in some very different directions because of what he had to do to build.
My background is the Bell Labs side. Tom Stockham was around at the time the FFT came out. He had it explained to him by Charlie Rader, and said, “If this really is as good as you say, it changes convolution, which is thought of as a basic filtering process, into something I can do by transforms. When you convolve two signals in one domain, it’s the product of their transforms in the other. Why can’t I take a product of transforms and inverse it and do convolution?” That idea came from Howard Helms and Tom Stockham, and that’s their major contribution to the field. Again, it was a one-time idea, but it was a brilliant insight at exactly the right time and place.
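The fast-convolution insight described above can be sketched in a few lines of modern Python with NumPy; the function name and test signals below are my own illustrative choices, not anything from the original work:

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT (the Helms/Stockham fast-convolution idea).

    Multiplying DFTs gives *circular* convolution, so both signals are
    zero-padded out to the full output length len(x) + len(h) - 1 first;
    then the inverse transform of the product is the linear convolution.
    """
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)      # rfft zero-pads to length n
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

# Agrees with direct (time-domain) convolution:
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.25])
print(np.allclose(fft_convolve(x, h), np.convolve(x, h)))  # True
```

For long signals the payoff is the operation count: the transform route costs on the order of n log n operations instead of the n² of direct convolution, which is exactly why the idea mattered so much once the FFT existed.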
Ken is the other person I’d call a father of the field. Ben is in his 70s, Ken is in his late 50s or early 60s. He’s an extremely bright and extremely energetic guy who realized that there was this framework that had to be understood. He worked on that as an undergraduate, for his Ph.D., and has carried it throughout his academic career at Princeton. I talked about Howard's contribution, and Dave Bergland's. Those are the contributions that I would ask each of these people about.