# Oral-History:Alfred Fettweis

## About Alfred Fettweis

Alfred Fettweis was born in 1926 in Eupen, Belgium, to German-speaking parents. His earliest interests were in math, science, and radio. He was drafted in 1943 and manned anti-aircraft batteries in Aachen, Germany, during the war. While serving in the German armed forces, he became involved in radar work. He was captured by the British and held as a prisoner of war for six months. Soon after graduating with a degree in electrical engineering in 1951, Fettweis joined the International Telephone and Telegraph Company (ITT), a company formed from AT&T's former foreign holdings, where he was involved in carrier telephony and line transmission work. Fettweis used the insertion loss method five years before its perfection at Bell Labs and argues that Germany pioneered the insertion loss technique because of a lack of quartz crystals. In 1956 he discovered the sensitivity property later published by Orchard. Fettweis was quick to use new technologies to create filter designs. He developed the wave digital filter in 1969 while teaching at Bochum University in Germany. After retiring from Bochum, he became a visiting professor at Notre Dame.

The interview begins with a highly detailed biographical statement by Fettweis, where he recounts his early years, his early schooling and his involvement with the German army in World War II. Fettweis relates a great deal of information regarding his early work with radar and his subsequent university education. This interview yields a wealth of information concerning the creation of various filter types and the creation of the wave digital filter in particular. He believes his early papers in this field were neglected by the IEEE Transactions because of an inherent anti-European bias and suggests that the Transactions' editor was also working in this field. Fettweis provides a unique perspective on the growth of the field and relates his early affiliations with the IEEE and the Circuits and Systems Society, and believes his work serves as a bridge between the Audio and Electroacoustics Society and the Circuits and Systems Society. The interview closes with Fettweis explaining his current interest in music coding.

Other interviews detailing the emergence of the digital signal processing field include C. Sidney Burrus Oral History, James W. Cooley Oral History, Ben Gold Oral History, James Kaiser Oral History, Wolfgang Mecklenbräuker Oral History, Russel Mersereau Oral History, Alan Oppenheim Oral History, Lawrence Rabiner Oral History, Charles Rader Oral History, Ron Schafer Oral History, Hans Wilhelm Schuessler Oral History, and Tom Parks Oral History.

## About the Interview

ALFRED FETTWEIS: An Interview Conducted by Frederik Nebeker, Center for the History of Electrical Engineering,

24 April 1997

Interview #338 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc., and Rutgers, The State University of New Jersey

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Alfred Fettweis, an oral history conducted in 1997 by Frederik Nebeker, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.

## Interview

INTERVIEW: Alfred Fettweis

INTERVIEWER: Frederik Nebeker

DATE: 24 April 1997

PLACE: Munich, Germany

### Family, childhood, and education

**Nebeker:**

Can you tell me where and when you were born, and a little about your family background?

**Fettweis:**

Yes, I was born near the Belgian-German border in Eupen, which is one of the regions in Europe which has shifted back and forth between countries. It’s a small area that became Belgian by the Treaty of Versailles, and so my family is on both sides of the border. I have quite a bit of family in Belgium and also quite a bit of family in Germany. I was born after World War I, and thus I was a Belgian citizen from the beginning. My parents were born German and became Belgian after World War I. But the area is still German speaking today. German is an official language in Belgium, as you might know. There’s only a little over half of a percent of the Belgian population speaking German, about sixty thousand, and I’m one of them.

I got my education in Belgium, although during the war my home area was again annexed by Germany. Shortly after the war, I started a university education in Belgium in Louvain. I took the courses in French. There were two sections at the University of Louvain. Today these are two separate universities. At the time it was one university with two language sections, one in French and one in Dutch. Those are the two major languages used in Belgium, and I took all my courses in French.

**Nebeker:**

Was that an engineering degree?

**Fettweis:**

Yes, electrical engineering was the subject, and I studied it between ‘46 and ‘51. I graduated as what they call an "Ingénieur Civil Électricien" in French. “Civil” does not mean civil engineering; in the French sense it’s civil as opposed to military engineering. One of the best-known old technical universities in France is the École Polytechnique, which today is still a military school, dating from shortly after the French Revolution. That’s why in France "Ingénieur Civil” stands in opposition to the military degree. The English term has a related origin, because most of the military engineers were construction engineers. They did the work on roads and bridges, etc. Those who did this kind of work for civilian purposes were then called civil engineers.

### ITT employment

**Fettweis:**

Anyway, that is the legal degree. I graduated in 1951 and then I joined what used to be ITT, the International Telephone and Telegraph Company. The company had a major subsidiary in Antwerp called the Bell Telephone Manufacturing Company. It had the old name Bell Telephone, but it was ITT, not AT&T. The old ITT largely traces its origin to AT&T: in the ‘20s, due to anti-trust laws, AT&T had to sell all its foreign holdings.

**Nebeker:**

Right, so the foreign business was ITT.

**Fettweis:**

That's what really started the ITT, which had subsidiaries all over the world. Only the headquarters was in America. During the war they again started having some labs and manufacturing plants in the States. These were staffed by engineers who had moved from Belgium and France, and other countries occupied by the Germans. They had fled to America to evade the occupation. The company also started to supply markets from America, which formerly had been supplied from Belgium and France. Within ITT, Belgium had supplied mainly the telephone equipment for Latin America. In France the company focused on microwave engineering. We would say today that they were radar people, and they developed the aircraft navigation systems used by the American forces during the war. I worked in Antwerp starting in ‘51.

### Engineering education

**Nebeker:**

Were you trained as a communications engineer?

**Fettweis:**

No. There was no such specialization in Belgium. You see, in the Belgian university, engineering education was always very broad, much broader than in other European countries. Even electrical engineering was relatively unspecialized. Today it’s somewhat more specialized, but still far less than in other countries. I’m not too well informed about the latest developments.

At the time, we had five years of lectures to get the degree. The first two years, what they call the "candidature," is like the freshman and sophomore years in an American school. The only subjects are mathematics, physics, and chemistry, with almost nothing in engineering, and the classes were taken largely in common with the mathematicians, the physicists, and the chemists. That means that all the mathematics courses we had in engineering, for instance, were at the same time courses for mathematicians and physicists. So we really had mathematics training which was nearly identical, at least during the first four terms, to that of the mathematician. This does not exist any more, to my knowledge. They now have a separate mathematics curriculum for engineers.

The third year curriculum was also the same for almost all the engineers, and only your last two years, the fourth and fifth years, were in the chosen area of specialization. Electrical engineering thus started to have its own courses only in the fourth year. But even then, in the fourth and fifth years, two-thirds of all courses were common to all engineers.

**Nebeker:**

I see.

**Fettweis:**

Two-thirds of one year of a total of five is very little specialization. Therefore there was no such thing as communications engineering. The electrical engineering courses we had covered primarily power engineering, including electrical machines and electricity distribution; there was one course in electronics. But I had a strong interest in electronics already, and tried to orient myself more to electronics.

**Nebeker:**

What did you hope to be doing as an engineer? Did you want to go into radio or TV?

**Fettweis:**

Yes, I was thinking of going into radio and so on.

### World War II; military assignment

**Nebeker:**

So you were telling me that during the war that part of Belgium was again part of Germany.

**Fettweis:**

After the Battle of Stalingrad, which was the beginning of the end, so to speak, for Germany, the government decided to mobilize all resources. The high school students born in 1926 and 1927 were drafted. I was born in 1926. We were taken to the anti-aircraft batteries near our hometowns. My hometown is close to Aachen. From the center of Eupen to the center of Aachen is just over ten miles. It was almost like a suburb of Aachen, but Aachen was German and Eupen was Belgian.

Anyhow, Eupen is a small town, so the anti-aircraft batteries were in Aachen, and I was drafted to Aachen. Our classes continued: the teachers came to us to teach. I must confess that I didn’t learn much because we got so little sleep. Aachen is right on the western border and there was practically always something going on in our sector.

**Nebeker:**

So were there aircraft overhead often?

**Fettweis:**

Even if they were not nearby and the civilian population was not on alert, the anti-aircraft battery had to be on alert. Many nights we had to be up for four hours or so. The next morning the teachers would come to teach us, but I was sixteen years old at the time and I just couldn’t take it. I was so overwhelmed that I practically fell asleep in the presence of the teacher.

**Nebeker:**

Were you actually operating one of the radar sets for a battery?

**Fettweis:**

I was not directly involved with the radar. I was with what they called the Kommandogerät, working with electromechanical analog computers. We got the data from the radar sets, or we could also get the data optically. Then we computed the setting of the elevation and the timing for the fuse, which was critical for the range.

**Nebeker:**

So you operated the Kommandogerät?

**Fettweis:**

That was my assignment at the time.

**Nebeker:**

I see.

**Fettweis:**

Then came the spring of ‘43. I had been with the anti-aircraft group for only a few months when there was a decisive technological shift in the war. The British first, and later the Americans, came out with centimeter radars. The H2S equipment of the British, a 9-centimeter (S-band) radar, was fitted on board airplanes. It had rotating antennas, and it had to have a very short wavelength to put the rotating antenna on board a plane. Then the Americans came along with the 3-centimeter (X-band) radar, as we called it. That’s when they started to be able to make bombing raids at night, or even through cloud cover, because they could see the picture of the ground on board their planes. They were also able to practically eliminate submarine warfare. That was a real turning point in the war.

In Germany they had radar sets, but only in what we called the decimetric range, around 50 centimeters. All work in the centimeter range had even been officially forbidden. When they realized that the breakthrough had come in the centimetric range, the German authorities decided to do more work on radar development, which they had largely neglected. As part of this program, they asked the high school students who manned the anti-aircraft batteries if they were interested in radar and high frequency electronics. We had to pass an entrance exam in mathematics and physics, and were selected from all over Germany. We were in a camp where we had school training half the day and electronics training the other half.

**Nebeker:**

Is that where you met Manfred Schroeder?

**Fettweis:**

Yes, he was there at the same camp. Actually, because this was a selective group from all over Germany, there are quite a few current university professors who were in that group, not only Manfred and myself, but many others, as well as many leading people in industry involved, for instance, with filter design. For example, Friedrich Künemund, the former head of filter development at Siemens. He was in the 1927 conscription class, and thus came a year later, like Carl Kurth, who used to be the main filter designer at Bell Labs in North Andover, Massachusetts.

**Nebeker:**

Yes, I know of him, yes.

**Fettweis:**

He was also in the same group.

**Nebeker:**

Is that right?

**Fettweis:**

He was born in ‘27, but I was in the 1926 shift. There were two such shifts, one for people born in ‘26 and one for those born in ‘27. After we finished with our training, then the other ones came. Carl Kurth was also in there. So that at one time, the head of filter development at Siemens in Germany and the head of filter development at Bell Labs in the U.S. were both from this same group. You may have heard of Rudy Saal?

**Nebeker:**

Yes.

**Fettweis:**

Actually, when we had the IEEE ISCAS in Munich in ‘76, he was the general chairman, and I was the technical program chairman. Rudy Saal was one of our technical teachers. He is a few years older.

**Nebeker:**

He was a teacher there.

**Fettweis:**

He was a teacher there. So you see, many of these people are, in a sense, related. From a scientific point of view, they are a select group. When I look back at this today, I can see that the main reason for this training was that some people among the German authorities tried to save the intelligent young people as much as possible from combat duty. This purpose became even clearer later. We had this technical training, and after that we were drafted into the air force, where we got some military training, and some more technical training on radar sets in the army. Then we were sent for technical work to repair radar sets. I worked on the night fighter radars, so I was stationed on airfields.

**Nebeker:**

You were a radar technician who worked on the sets?

**Fettweis:**

I was a technician, for two months. I repaired the radar sets of the night fighters. I was stationed at the airfields in the north of Germany, near Hamburg, and later, further up in Schleswig which is close to the Danish border. We were all assigned to different places, and then we were all called together again. In March of ‘45, we were sent out again to different places, except for forty or forty-five of us, who were supposed to be the best ones in the course. We were selected and started an advanced electronics radar course in March of ‘45. It was so close to the end of the war. This shows you very clearly there were people who tried to save what they could save. They wanted to make sure that the brightest ones would not be lost for the future. It’s like what you mentioned for Manfred Schroeder. It was a decisive experience for me also. The war was brutal, but the technology was fascinating. That is the problem.

**Nebeker:**

Did you become a Belgian citizen right after the war?

**Fettweis:**

Yes.

**Nebeker:**

But you had been a Belgian citizen, of course.

**Fettweis:**

According to international law, I have always been a Belgian citizen, but we didn’t know that. The authorities in power were the Germans, and they considered us to be German citizens. That’s why they drafted us like they would have drafted any German citizen. So we thought we were German citizens. International law, of course, has never recognized the annexation by Germany, therefore, according to international law, I have always been a Belgian citizen. Not only according to Belgian law, but according to international law.

Then, I was taken into captivity. I was taken prisoner of war by the British. I was a prisoner of war for six months. Up in Schleswig-Holstein, there was one of these big prisoner zones. There were 700,000 German prisoners in that same area after the war. It took four months before the Belgians finally came to look for their citizens among these prisoners. The French had done it right away, for the ones from Alsace and Lorraine. You know that Alsace-Lorraine had a history similar to my region, except that Alsace-Lorraine had been French before 1870 or ‘71. My hometown had never been Belgian before then; it was German. Belgium was a country that had existed only since 1830. Before that, my region was shifted around many times, but it’s too complicated to talk about all that. Anyhow, the Alsatians had the advantage that there were quite a few leading French politicians from Alsace-Lorraine, like Robert Schuman, who was a Prime Minister and Pflimlin, who was the later Mayor of Strasbourg, the most important Alsatian town. Immediately after the German capitulation, they looked for the Alsatian citizens to bring them back home. We had no one in Belgium to speak in our favor of us.

**Nebeker:**

So it took four months?

**Fettweis:**

Yes, four months, and then we were taken to an English prisoner of war camp in Belgium, near Brussels. There we were interrogated, so it took quite some time before we were released. In November ‘45, I was finally released and came back to my hometown.

### Post-World War II engineering education; higher education in Belgium

**Nebeker:**

Then, as you told me, you entered the engineering school.

**Fettweis:**

I entered engineering school the following year, in the fall of ‘46. It was the engineering school of the University of Louvain in Belgium. There was a problem because I had a German high school degree, which was not accepted in Belgium as a valid degree. In Belgium, if one has a high school degree and wants to study engineering, one has to take an additional exam in mathematics, a quite serious exam in mathematics. If you did not pass the exam, you could not become an engineer, but you could still become a mathematician or a physicist. This is a challenge for anybody good in mathematics. He wants to show that he can do it. He wants to pass that exam.

That is the reason why so many Belgian engineers are so strongly mathematically oriented, or why often the students most gifted in mathematics go into engineering. This is why quite a few Belgian professors of mathematics and theoretical physics are engineers by training. That includes the most famous Belgian mathematician, de la Vallée Poussin, who at that time was the oldest living Belgian engineer and a famous mathematics professor. There are quite a few Belgians among the mathematically oriented scientists in the IEEE because of this inclination towards mathematics. I am myself very mathematically oriented, so I can be part of the proof of this.

Anyhow, I took this entrance exam. If you didn’t have a valid high school degree, you could combine this entrance exam in mathematics with an examination in the humanities. This combined exam then replaces the high school degree. That is how I got the entrance acceptance. I was very inclined to physics, and my dream as a young man had been to study physics in Göttingen, which before the war was the most important school of physics in the world. That is exactly what Manfred Schroeder did. But due to the political situation, the border was closed. We were not even allowed to go across the border into Aachen. For instance, 1950 was the first time that the occupational authorities allowed people to cross the border. So there was no contact with the Germans. It would have been impossible to study in Germany. Therefore, without thinking too much about it, I geared myself up to do electrical engineering, without ever forgetting my liking for physics.

**Nebeker:**

Do you think it was the wartime radar experience that made you decide on electrical engineering rather than some other type of engineering?

**Fettweis:**

I believe so. I must say, I had also been drawn quite a bit towards civil engineering. If it hadn’t been for the wartime experience, I might have taken civil engineering. I’m happy today I took electrical engineering, so it’s not that I regret this. Whether it would have been better that I had taken physics, I don’t know. I’m satisfied with the career I had, and therefore, I don’t think about what might have been better. I have had a nice career, and therefore I think it was good the way it unfolded.

In Louvain, we had these first two years, which were oriented towards mathematics. I liked the physics very much. In the third year I was disappointed by the technical courses. I had liked the rigor of mathematics and the fundamental approach of physics, but the engineering courses were often less precise. I was somewhat disappointed. I thought about switching to physics, but I took advice from a professor in theoretical physics, who was also an engineer by training. He said, “If you really like physics, stick with engineering,” because in Belgium at that time there were very few opportunities for a physicist. However, I did take all the theoretical physics courses taught to the physicists. I did my diploma work, what would be the equivalent of a master's thesis in America, with another professor in theoretical physics. The diploma work was on wave propagation and its mathematical equations. My advisor was Charles Manneback, whom I have admired very much.

**Nebeker:**

Did you think that you might become a professor of engineering, or do research in engineering?

**Fettweis:**

You see, that was something I would have liked to become.

**Nebeker:**

But there was no real possibility?

**Fettweis:**

No real possibility. I would have had to become a professor teaching in French, which was not my native language. I had learned it in school, but the courses were to be taught in French.

**Nebeker:**

These courses were all in French, is that right?

**Fettweis:**

Yes. I was more fluent in French then than I am today. But still, I knew that they would demand very proper French. I did not think at that time that I could have become a professor in French-speaking Belgium.

The second reason is even more important. In Belgium, there was very little research going on at the universities. Most of the professors did not have a doctoral degree, and there were no assistantships available, or only a very few. For instance, this professor of theoretical physics, Manneback, would have liked me to become his assistant. However, being a theoretician, he was not entitled to an assistant. There was no budget available for him. Therefore, he couldn’t hire me.

In Belgium, a university career like it existed in America or in Germany did not exist at that time. One was dependent on accidental situations. You had to graduate just at the right moment. There was no possibility to go to a different university. It was a very special situation in Belgium. Louvain, where I studied, was the Catholic university. It had more students than the rest of the Belgian universities combined. Slightly more than half of the Belgian students were studying in Louvain.

At that time there were four universities: Louvain, Brussels, Ghent, and Liege. The country is divided first into two languages. The main languages are French and Dutch. The German part is insignificant. Next, the country is divided into Catholics and non-Catholics, largely Free Thinkers. Therefore, all of these four universities have their own reason for being. You have a Catholic University in Louvain with two language sections, and the Free Thinking University in Brussels, again with French and Dutch sections. And then you have the two government universities, one for the Dutch speaking in Ghent, and one for the French speaking in Liege. You are expected to belong to one of these categories, and that determines where you study, where you can make a career, and so on. So the opportunity existed only at that university where you studied.

At Louvain, there was one professor of electrical engineering in the French speaking section, and another one for the Dutch section. They were closely cooperating. But the professor in French was so young that I would have no chance to become his successor. The question simply didn’t arise. It’s a completely different system from Germany, and other places.

### ITT employment; carrier telephony

**Nebeker:**

Now, when you got this job with the ITT company, what were you hired to do?

**Fettweis:**

That was also interesting. We worked on wire transmission, and I got into carrier telephony.

**Nebeker:**

They were building equipment for the Belgian telephone system?

**Fettweis:**

Primarily for Belgium, yes. They were the suppliers of carrier telephone equipment for the Belgian Telephone Company. They did not do all the development on their own, because they were part of the ITT system. The major company for these activities within ITT at the time was Standard Telephones and Cables in London. Later it was sold by ITT when ITT was still a communications company, and today it is part of Northern Telecom. We had a close association with STC. The head of the development lab for carrier telephony at the Belgian ITT subsidiary was Vitold Belevitch, who is one of the giants in our field. He is well known in the IEEE. He got the Circuits and Systems Society award a few years ago, and other awards.

**Nebeker:**

So he was the head of the development crew?

**Fettweis:**

He had gotten his doctorate with Manneback. At that time, few people in Belgium had ever gotten a doctorate degree in engineering. For instance, I graduated from Louvain. Although Louvain was training half of all Belgian engineers, only four people there had ever gotten a doctorate degree in any field of engineering, from mining to electrical engineering. Of these four, three had gotten their degrees with the theoretical physicist Manneback. One of them was Belevitch. Manneback was in close contact with Belevitch, and he arranged that I would join his lab. That is how I got to work with Belevitch.

**Nebeker:**

I see.

**Fettweis:**

He is a brilliant scientist, and a very demanding person. You really have to be good to work successfully with him. I certainly appreciate very highly that I had the chance to work with him.

**Nebeker:**

Is that how you got into filter design?

**Fettweis:**

Exactly, but Belevitch, in turn, got into filters through Wilhelm Cauer. This has to do with the ITT again. The war played a big role in these circumstances. You may have noticed that two years ago there was a commemoration for the fiftieth anniversary of Cauer's death. The Circuits and Systems Society had asked me to write a brief biography of him. It appeared in the April 1995 issue of the Transactions on Circuits and Systems. You may know that one of the awards of the Circuits and Systems Society is the Guillemin-Cauer Award. It’s named after Guillemin, who was an MIT professor in circuit theory, and Cauer, who worked here in Germany.

Cauer was working for the ITT in Berlin. He was really the most famous circuit theorist in Europe before the war. At the beginning of the war, the ITT decided to be “smart” by transferring this Antwerp company from a direct subsidiary of the ITT in New York to a subsidiary of the ITT in Berlin. This way, when the Germans entered Belgium, it was considered German property, not enemy property. Due to this, during the war there were close contacts between this Berlin ITT company, which was called Mix & Genest, and the company in Belgium. Mix & Genest was a former German company that was taken over by the ITT. This company in Belgium was founded, in fact, by the Western Electric Company in 1882, I think. The company in Antwerp was later sold to ITT, in the '20s, due to the anti-trust ruling I mentioned before.

Anyhow, that is how Cauer had close contact with Antwerp, and with Belevitch. Belevitch is also an absolutely brilliant person. He finished his high school two years ahead of time, and then the university, and so on. He was very young when he entered industry: despite the five years it takes in Belgium to get an engineering diploma, he was twenty-one when he got his university degree. He was born in ‘21, so he is now seventy-six years old. I am seventy, so he is not that much older than I am myself. Due to the cooperation between Mix and Genest, Cauer had apparently been in Antwerp several times. Belevitch personally met him, and thus got into filter circuit theory.

**Nebeker:**

And was that for carrier telephony?

**Fettweis:**

Yes. That’s how Belevitch got into circuit theory, through his contacts with Cauer. Then I came in via his channel through Manneback.

**Nebeker:**

How long did you work in that area?

**Fettweis:**

I actually worked almost twelve years, but with interruptions. I entered in October ‘51. Then in January ‘54, when I had been with the company for a little over two years, I was sent to the ITT labs in Nutley, New Jersey. I don’t even know what has happened to these labs today. Do you know Nutley, New Jersey?

**Nebeker:**

Yes, I know where Nutley is.

**Fettweis:**

If you find out something, I would like to know what happened to them. Most of the ITT companies have been taken over by Alcatel. The whole telecommunications branch of ITT is now Alcatel. The ITT labs used to be called Federal Telecommunications Laboratories, when I was there. Later they changed the name to ITT Laboratories. I’m not sure if they’re still owned by ITT, or what has become of them.

**Nebeker:**

What did you work on there?

**Fettweis:**

I worked on what we would call today electronic switching. Those were the early days of electronic switching. There was not a commercial basis for it yet.

**Nebeker:**

What was the application in prospect?

**Fettweis:**

To make telephone switching equipment for central switching stations. But they were not up to that point yet. The purpose was to determine toll charges by electronic means, thus to record the time of the beginning and the end of a call, to compute the duration and distance, and finally to determine the charge to the calling party. That was for long distance communications. In ‘54, even in America, there was no automatic long distance dialing; you had to go through an operator. Even in Nutley, which is practically in the New York suburbs, to call into New York you had to go through an operator. You couldn’t dial directly. Our focus thus wasn’t electronic switching yet; it was long distance dialing.

**Nebeker:**

So you were working on that kind of an apparatus, to dial automatically?

**Fettweis:**

An apparatus for long distance dialing. The switching itself is, in principle, the same as for local switching. But in local switching, registering the call charges and so on is simple. For long distance, it’s far more complicated, especially since according to American law you have to properly register the beginning and end of every call. That is not the case in Germany or in most European countries.

We wanted to solve the problem by electronic means, thus by the same type of technology as you could use for switching or in computers. The technology was based on gas discharge tubes. These had been developed by STC, London, i.e. by the English ITT. They had built a computer on that basis, and you practically could see what was happening. You could see a tube turning on, so you could actually see the zero and one states. This was a relatively simple, voluminous, and very slow technology. We could not operate beyond 30 kHz.

**Nebeker:**

Did you get such a device working?

**Fettweis:**

We got it working, but not reliably. I don’t know if they ever put it into production. I don’t think so; I stayed there for only two years.

**Nebeker:**

Was that like from ‘53 to ‘55?

**Fettweis:**

From the beginning of ’54, for two years, and then I was sent for three or four months to an ITT subsidiary in Chicago to study transistor applications, because it was not long after the introduction of the transistor. Then I returned to the company in Belgium. Belevitch had left the company by that time. The ITT company in Belgium had actually developed a computer for the Belgian government, which had financed it. Only a single unit was built, and they claimed it was the first computer built in Europe. I’m not sure if this is correct, because Zuse in Germany had made computers during the war already, and continued to develop them soon after the war. But it was one of the very first computers; there is no question about that.

Belevitch had been the head of the transmission lab working on carrier telephony, but he was also involved with the development of that computer. Because of his brilliance they wanted his cooperation. Also, Manneback was assigned by the Belgian government research organization to supervise, as a physicist, the development of that computer from the applications point of view. He represented the people who wanted to use it. That is another link between my own career and Belevitch's. I was not personally involved with these activities. But I had gotten involved with a related technology in America. In Antwerp they used a different technology, not gas filled tubes.

When that computer was finished, the Belgian government set up a company to operate it. The capability of the computer was far less than that of a simple PC today, but it was the central Belgian computer at the time. There was only that one. Belevitch was put in charge of operating it, and that’s why he had left the Antwerp Company. I had been working with him just a little over two years, in fact, before I went to America. On my return, I was put in charge of filter development, which is analog signal processing. Filters, amplifiers, equalizers, and modulators: they’re all signal processing devices.

**Nebeker:**

And all of this was for carrier telephony?

**Fettweis:**

Yes. But carrier telephony is now passé, it’s old technology, the new technology for long distance is PCM.

**Nebeker:**

But that company was developing new equipment for this?

**Fettweis:**

Yes. And therefore we had to develop the filters. In carrier telephony the filters were the essence of it. The most challenging technology was the filter design. It was very important.

**Nebeker:**

Was that work at the forefront of carrier telephony?

**Fettweis:**

Yes, I would say it was at the forefront, insofar as we used the modern filter design methods. The classical way of designing filters was the so-called image parameter method. Shortly before the war and early during the war, a new approach was worked out that the Americans called the insertion loss method. It was triggered by Norton at Bell Labs, who influenced many people, and was developed primarily by Sidney Darlington, also at Bell Labs. It was also developed by Cauer, Piloty, and Bader here in Germany. The design of filters by this method was numerically very demanding, so you had to calculate very intensively. You had to solve higher order algebraic equations with extreme accuracy. You needed many digits to finally produce parameters that had just the accuracy you needed for the inductors and capacitors to be implemented.

**Nebeker:**

Were such calculations feasible before the electronic computer?

**Fettweis:**

They were feasible, but very, very tedious. That was the problem. They were feasible in principle, and the method had been developed simultaneously, more or less, in America and Germany. My personal feeling is that Darlington was really somewhat earlier than the others, although there was no real communication between the two sides. All the knowledge was available at Bell Labs. Bell Labs did not use the method for their own filters, because of the complicated calculations. I know this because I remember when Bell Labs finally used the method. I remember seeing the ads from Bell Labs in '63 or '64 in IEEE publications. They showed beautiful filter curves and were advertising that they were now able to use these advanced methods because computers had become available. So, it was around ’63, ‘64 that Bell Labs finally used this approach. But here in Germany, these methods had already been used during the war.

**Nebeker:**

Were you using them in the late ‘50s?

**Fettweis:**

Yes, that was my major project. I had to apply this method for designing filters. Belevitch told me how to do it. I started in the spring of 1952, thus well before I went to America, and continued after I returned. But the reason why this method was used quite early in Europe has to do with the war. Many developments are indeed influenced by specific circumstances. I remember while I was in America, attending a conference at the Polytechnic Institute of Brooklyn.

**Nebeker:**

Was it one of the symposia they had?

**Fettweis:**

The Brooklyn symposia, exactly. Ernst Weber was behind these activities very strongly. There was Guillemin, who is often considered the father of circuit theory in America, because he wrote all these basic books. He is the one who introduced the systematic teaching of these theories at MIT, long before the war. Guillemin got his doctorate here in Munich, with Sommerfeld, in theoretical physics. Guillemin's first publication, his dissertation, was thus the fruit of his association with Sommerfeld. Also, the one theoretical physicist that Manneback admired the most and always talked about was Sommerfeld. He had a close association with Sommerfeld, and that confirms how so many developments in our field somehow trace back to Sommerfeld.

**Nebeker:**

So, filter people trace back to Sommerfeld.

**Fettweis:**

They trace back to Sommerfeld’s theoretical physics, right here in Munich. Sommerfeld was not only a brilliant physicist, but also an admirable person. He opposed the Nazis very strongly, and got into trouble with them. He was in many respects a most remarkable person, and has been a focus, so to speak, for circuit theory. For many leaders in the field in America, as well as in Europe, you can trace their origins back to Sommerfeld.

Anyhow, I was talking about Guillemin. He took part in a panel discussion at the Brooklyn symposium I mentioned; I don’t remember exactly when it was. The question was put, “Why, in America, don't they use this advanced filter design method, while they are using it in Germany?” Guillemin answered that it was due to Cauer’s book. Cauer had indeed written a famous book on the modern methods of circuit design, i.e. on circuit theory and synthesis. It was in German, of course. Only relatively few copies had become available, because the stock got destroyed by bombing. As far as I know, it was reprinted but destroyed again. Then the manuscript of the second volume got burned in a bombing raid. So it had never been really available. The original first volume was reprinted in America. During and shortly after the war, as you may know, they reprinted quite a few books that were of public interest. They copied them in America and printed them there. The copy of Cauer's book that I had bought at the time was an American one. The text is in German, but it is printed and published by Edwards Brothers in Ann Arbor, Michigan.

Guillemin’s theory was that Cauer's book was so important because there was no equivalent book in English at that time. That was the reason he gave, but I would definitely say it’s incorrect. Belevitch has been very much involved in these activities, so most of the historical information I have is from him. By the way, if you ever have the chance to talk to Belevitch, you should probably try. The real reason is different. The most critical filters in carrier telephone equipment are the so-called channel filters. These filters, in America, were built as crystal filters. They used quartz crystals in order to get the high Qs, i.e. the high quality factors, and the needed frequency stability. The Bell Labs technology was based on crystal filters, as was the technology of STC in London, in England.

At the time there were no synthetic crystals. Crystals therefore had to come from natural sources. Filter crystals are different from oscillator crystals. You needed much larger crystals, about two inches long, maybe half an inch wide, I don’t know exactly. These were relatively large size crystals, and they had to come from natural crystals which were much larger than the filter crystals themselves. There was only one supplier of that type of natural quartz crystal in the world, Brazil. So all these crystals came from Brazil, worldwide.

During the war, the Germans and the rest of the European continent were cut off from the crystal market. Therefore, they had to find a different way to get around the problem. They used a different way of modulating, adopting a pre-modulation stage where they could have the main filtering at a lower frequency. They didn't work at frequencies of, say, twelve channels in the range of 60 to 108 kHz, like the Americans. At first, they used a lower range of only 4 to 8 kHz. Later, they used three channels in the range of 12 to 24 kHz, but I think that came after the war. During the war I think it was 4 to 8 kHz. Then after pre-filtering, they went up to higher frequencies. But the critical filtering was done in a low frequency range to get sufficient stability. Also, the relative bandwidth is then lower, so that you don’t need as high quality factors.

I don’t know how familiar you are with these problems. The losses in the inductors are the main problem. The smaller the relative bandwidth, that is the bandwidth divided by the center frequency, the more difficult it is to get the same quality. It’s really the loaded quality factor, that is the ratio of the reactance value to the resistance of the coil multiplied by the relative bandwidth, that counts. That is the critical point. They tried to get to a lower frequency range to drastically simplify the filter design problem. The objective was to get the maximum out of what you possibly could obtain from inductors and capacitors. That’s why they used these advanced design methods.
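The rule of thumb here, that what counts is the coil quality factor in combination with the relative bandwidth, can be sketched with quick arithmetic. A minimal illustration in Python; the channel frequencies and the required loaded Q below are hypothetical examples, not figures from the interview:

```python
def min_coil_q(f_low_hz, f_high_hz, loaded_q_needed):
    """Rough estimate of the unloaded coil Q needed for a band-pass
    filter: the product of coil Q and relative bandwidth (bandwidth
    divided by center frequency) must reach the required loaded Q."""
    center = (f_low_hz + f_high_hz) / 2
    rel_bw = (f_high_hz - f_low_hz) / center
    return loaded_q_needed / rel_bw

# A channel filter placed directly in the 60-108 kHz group range:
# center ~84 kHz, channel width ~4 kHz -> relative bandwidth ~4.8%.
q_high = min_coil_q(82_000, 86_000, loaded_q_needed=10)

# The same channel pre-filtered down to the 4-8 kHz range:
# center 6 kHz, width 4 kHz -> relative bandwidth ~67%.
q_low = min_coil_q(4_000, 8_000, loaded_q_needed=10)

print(round(q_high), round(q_low))  # prints: 210 15
```

Moving the critical filtering down to the low range raises the relative bandwidth by more than an order of magnitude, so the coil quality factor that the inductors must deliver drops accordingly.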

**Nebeker:**

So they wouldn’t have done that if they didn’t have to get around the crystal problem?

**Fettweis:**

This, I don’t know. You could have used these advanced design methods for crystal filters. But Darlington did not examine the specific needs for crystal filter design by his method. I don’t know exactly why. The point is, here in Europe, they started with this different approach. They continued after the war, because they had found advantages to it.

**Nebeker:**

What was this approach called?

**Fettweis:**

It's called the insertion loss method. We like “effective loss method” here in Europe, because what counts is not really the insertion loss. For traditional reasons, which would be too complicated to go into now, Americans used the concept of “insertion loss,” while we prefer “effective loss.” They are not exactly the same, but they differ by just a constant, so that’s not very important in practice. Some people simply call it modern filter design. But “modern” is relative; it continuously changes. Later, some companies switched from inductors and capacitors to mechanical filters, which Siemens, in particular, developed. Mechanical filters have transducers that convert electrical to mechanical vibrations and then back to electrical, because mechanical vibrations have much higher quality factors and stability. But crystal filters are also in a sense mechanical: the vibration is mechanical. You don’t need a separate transducer, because the transducer is incorporated into it, so to speak. You can do it locally, so you can have electrical components and crystals connected together.

The real point is, you have three electric components in the simplest equivalent circuit of a quartz crystal, that is one inductor and two capacitors. There is a relationship between them. Therefore, you don’t have full design flexibility. That is one of the difficulties for applying the advanced methods to the traditional American way of building channel filters. More precisely, it’s the ratio of the parallel to the series capacitance that is relatively high in an actual crystal. You can always make it larger by putting another capacitance in parallel to it, but you cannot make it smaller.
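The constraint described here can be made concrete. In the standard crystal equivalent circuit (motional inductance L1 and capacitance C1, shunted by a parallel capacitance C0), the spacing between the series and parallel resonances is fixed by the capacitance ratio, which is exactly the quantity the designer cannot freely reduce. A sketch with hypothetical motional parameters, not values from the interview:

```python
import math

def crystal_resonances(l1, c1, c0):
    """Series and parallel resonant frequencies of the simplest
    crystal equivalent circuit: motional l1, c1 plus shunt c0."""
    fs = 1 / (2 * math.pi * math.sqrt(l1 * c1))   # series resonance
    fp = fs * math.sqrt(1 + c1 / c0)              # parallel resonance
    return fs, fp

# Hypothetical motional parameters; real crystals have a large c0/c1
# ratio, so fp sits only a fraction of a percent above fs.
fs, fp = crystal_resonances(l1=0.1, c1=25e-15, c0=5e-12)
print(f"spacing: {100 * (fp / fs - 1):.3f}% of fs")  # prints: spacing: 0.250% of fs

# Any external capacitance placed in parallel only increases c0,
# which narrows the spacing further; it can never be widened.
```

The narrow, ratio-determined spacing between the two resonances is what limits the achievable bandwidths and removes degrees of freedom when crystals are used as filter elements.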

Anyhow, I should not go into deep detail, but you are particularly interested in how I came to signal processing. Indeed, Belevitch gave me the task of designing filters in a somewhat higher frequency range, using the modern method. I had to fight with these numerical intricacies. It was very difficult, of course. It took me two months working from morning to night, calculating on an electromechanical desk calculator, to design the first filter. Then I got assigned a young girl who had only eight years of grade school education, but nothing else. And she then did the work for me. I had to divide up all the work into small steps that she could understand. She did not know what a zero of a polynomial equation would be, or anything like this, of course. So I did at that early stage something like what you do with a computer.

**Nebeker:**

Programming.

**Fettweis:**

It was kind of programming. That’s why I feel that I did, in a sense, do some kind of "pioneering" work in programming!

At the time, we did not have ferrites yet, so we had to use regular iron cores, i.e. cores made of powdered iron. The quality factor of the resulting inductors was relatively low. There was one important point in Darlington’s theory that was not in Cauer’s. He had shown how you could take into account losses when you designed filters. This is relatively important for understanding my contribution to signal processing. That’s why I think I should mention it.

**Nebeker:**

Good. Yes.

**Fettweis:**

That made the calculations even more complicated. Darlington had called the approach the “pre-distortion method.” I designed the filters with pre-distortion. That means, instead of having zero or close to zero loss in the pass-band of the filter, we needed ten decibels of so-called basic loss. The poorer the coils are, the more basic loss one needs. We needed ten decibels, which was quite a bit. But the filters worked perfectly in the lab. Then I was sent to America.

**Nebeker:**

So that all occurred before ‘54?

**Fettweis:**

Yes. That was when Belevitch was still there. In the middle of ‘56, I returned from America. The filters had gone into production while I was in the States. When I came back, Belevitch was not there anymore. When I was greeted by my colleagues at the labs, they were personally very friendly to me, but said, “Never these filters again. We had so much trouble in production.” For the young engineer I was, that was a terrible blow. I had put enormous effort into designing these filters. I had done everything I could to do a nice job. It was quite a disappointment to hear, “Never again.” They told me the filters were too critical and said, “With the old-fashioned image parameter method, we never had such problems. We never want these modern methods again.”

That’s when I started thinking about what the reason for these troubles could be. In ‘56, I came across the property that answered that question. I did not publish my findings at the time; I thought it was important, but I didn’t think it was new. I was a young engineer, and I thought that the old filter designers were well aware of it. Ten years later, Orchard published a result equivalent to mine. He used to be in England, but came to America and has been at UCLA for some thirty years. He was initially with Lenkurt in industry, and then joined UCLA. The property is often called Orchard's Theorem, although some people call it the Fettweis-Orchard Theorem. There’s a handbook on circuits and filters that came out quite recently. Wai-Kai Chen is the editor. It’s called the Fettweis-Orchard Theorem in that handbook.

Anyhow, I realized that this sensitivity problem was related to basic loss. Zero loss is a level you can never go below in a passive circuit, because going below it would mean you have an active device.

If all components are passive, the effective loss can never become negative. You need amplification to get a negative effective loss. So if you have passive devices, at any frequency where the loss reaches zero, that’s rock bottom. That means, mathematically speaking, that the sensitivity is zero. If you change a component, if you lower it or make it larger, you cannot get below that value, and therefore the derivative is zero, so you have zero sensitivity.

That is a marvelous property of a filter. It’s astonishing that filter designers had not been aware of this, because that property is absolutely fundamental, in my opinion, to the development of carrier telephony, which is essentially a filter-based approach. The fact that we have been able to use such beautiful, sharp filters is essentially due to this sensitivity property. This means that if you design a filter to be a nice filter, you are automatically very insensitive to the parameter influences. It is a fantastic property.

Suppose you have a filter of degree ten; it may contain, let’s say, twenty components. With a filter of degree ten, you can have zero loss at five different frequencies in the pass-band. Now, the sensitivity at any of these five frequencies is zero, and this with respect to any of the parameters. You have twenty parameters times five, that is a hundred conditions you impose on the sensitivities to be zero. Now you have only twenty parameters in your circuit; how can you satisfy a hundred conditions? In addition, you don’t want to waste your freedom completely on getting good sensitivity coefficients; you want to use it to get a good filter curve. Now, the amazing thing is you get the good sensitivity free of charge. Just design the filter to be a good filter, and these one hundred conditions are automatically fulfilled. That is a fantastic property of passive circuits.
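This zero-sensitivity argument can be checked numerically on a small example. The sketch below uses a textbook third-order 0.5 dB Chebyshev LC ladder between equal terminations (standard tabulated element values, not a filter from the interview) and compares the effect of a 1% component error at a zero-loss frequency with its effect in the stopband:

```python
import math

def effective_loss_db(w, c1, l2, c3):
    """Effective loss of a shunt-C / series-L / shunt-C ladder between
    equal 1-ohm terminations, computed via the ABCD (chain) matrix."""
    sections = [
        (1, 0, 1j * w * c1, 1),   # shunt capacitor
        (1, 1j * w * l2, 0, 1),   # series inductor
        (1, 0, 1j * w * c3, 1),   # shunt capacitor
    ]
    a, b, c, d = 1, 0, 0, 1
    for a2, b2, c2, d2 in sections:
        a, b, c, d = (a * a2 + b * c2, a * b2 + b * d2,
                      c * a2 + d * c2, c * b2 + d * d2)
    s21_sq = 4 / abs(a + b + c + d) ** 2   # equal 1-ohm source and load
    return -10 * math.log10(s21_sq)

g = (1.5963, 1.0967, 1.5963)      # tabulated 0.5 dB Chebyshev values
w0 = math.cos(math.pi / 6)        # a zero-loss (reflection-zero) frequency

# Perturb the inductor by 1% and compare the loss change at the
# zero-loss frequency with the change at a stopband frequency.
gp = (g[0], 1.01 * g[1], g[2])
d_pass = abs(effective_loss_db(w0, *gp) - effective_loss_db(w0, *g))
d_stop = abs(effective_loss_db(2.0, *gp) - effective_loss_db(2.0, *g))
print(d_pass < d_stop / 50)   # prints True: sensitivity vanishes where loss is zero
```

At the reflection zero the loss can only move up, so the first derivative with respect to every element is zero and the 1% error produces only a second-order (here sub-millidecibel) change, while the same error shifts the stopband loss by a first-order amount.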

That’s what I came across at that time. After this terrible disappointment of my early career, I was, of course, very happy about this discovery. But since filters had been around for forty years already, and since this property is just as valid for image parameter designs as for insertion loss designs, I thought all the old filter people must be aware of this property. I did not even think of publishing it. I had never published a paper anyhow. Then I said to my boss, “Let’s redesign the filters.”

In the meantime ferrites had become available. They had a much higher Q factor. I said, “Then we don’t need pre-distortion anymore because the filters will have a good response without it. We can forget about pre-distortion and thus get beautiful sensitivity, even better than for image-parameter filters.” But my boss at the time didn’t want to hear anything about it. It took me two years before he finally gave me a green light to redesign the filters. Everything went beautifully. It went perfectly, no problems at all. This taught me that passivity and losslessness are fantastic properties, and that’s why it’s so important for my career. If you want to have low sensitivity or high stability, high robustness, I would say, you should rely on the passivity and losslessness properties.

**Nebeker:**

Did this design go into production?

**Fettweis:**

Yes, of course it went into production.

**Nebeker:**

Was it a great success?

**Fettweis:**

It was a great success. There were no problems. The former problems had been caused not by the insertion loss method but by the use of pre-distortion. At ten dB basic loss, the filter response behaved as if it were floating; it could easily go up and down. In the new design the response was held firm at the zero level.

Then we went into crystal filter design. That’s why crystal filters are important for me. We started learning how to design crystal filters by the insertion loss method. That was one of my early contributions to the design of filters. We designed them by the insertion loss method, and they went into production. They were also successful.

**Nebeker:**

Were these filter designs taken up by other companies as well?

**Fettweis:**

I don’t know. At ITT in Raleigh, North Carolina, which I think now is also Alcatel, there was Erich Christian, who was originally from Austria, who designed crystal filters using ideas similar to mine. But he did it independently. I don’t think he was triggered by my paper. I indeed wrote an important paper on it, but I published it in a Belgian electronics journal, and therefore it has never been widely circulated. I don’t know if anybody else has done anything like this. But the point was that we in Antwerp, due to our association with STC, London, had followed the crystal technology. I think that crystal-based channel filters were produced mainly by Bell Labs, and thus by Western Electric, by STC, London, and by the ITT company in Antwerp. Almost everybody else followed the road inaugurated in Germany, which did not use crystals. It was inaugurated due to the war conditions, and after the war they continued on this different road.

**Nebeker:**

The increase in computing power made it easier to design these filters, I imagine.

**Fettweis:**

Much easier. We had computed the crystal filters on the computer that Belevitch was running. We did not have a computer at the company where I was working, so we had to go to an outside computer to do computations. That was expensive and thus meant that the results had to be correct on the first try. We could not afford any redesigns.

**Nebeker:**

So, it was still very much constrained by the calculations?

**Fettweis:**

By calculations and computer power, which was fundamental. That’s the time when I also got involved in electronic switching. The company in Antwerp, and the ITT in general, had been the pioneers in electronic switching systems. The idea was to use PAM, Pulse Amplitude Modulation, because PCM would have been too expensive. PAM switching needed conversion from continuous time to discrete time, and back to continuous time. But if you do a normal PAM sampling, and reconvert the sampled signal to a continuous signal, you lose an enormous amount of energy, because first you pick out only a thin slice, and later spread it over the whole time range. This energy loss would have required amplification, but amplification was much too expensive in both equipment and power consumption.
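The energy argument can be quantified: keeping only a gate of width tau out of every sampling period T retains roughly tau/T of the signal energy. The sampling rate below is the 10 kHz figure mentioned; the gate width is an assumed illustration, not a value from the interview:

```python
import math

T = 1 / 10_000   # sampling period at 10 kHz: 100 microseconds
tau = 2e-6       # assumed sampling-gate width: 2 microseconds

# Picking out only a thin slice of width tau from every period T,
# then spreading it back over the whole time range, retains roughly
# tau/T of the signal energy.
retained = tau / T
loss_db = -10 * math.log10(retained)
print(f"about {loss_db:.0f} dB of energy lost")  # prints: about 17 dB of energy lost
```

A deficit of this size would have to be made up by amplification in every speech path, which is the cost that the lossless resonant-transfer conversion described next was designed to avoid.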

So, one needed to do the conversion in a lossless way. A method for achieving this was invented at Ericsson in Sweden. They invented an approach to go from continuous time to discrete time, PAM, and back to continuous time, in a passive, lossless fashion, without amplification.

**Nebeker:**

Could you write their names, just the last names?

**Fettweis:**

Yes, they are Håård and Svalla from Ericsson. Their idea was reinvented at least three times: by Cattermole at STC, by French at the British Post Office, and by Lewis at Bell Labs. The Bell Labs people called it resonant transfer. We can argue about the name. It’s not really correct. It’s not really a resonant phenomenon, but it’s something related to resonance phenomena. So, maybe resonant transfer isn't totally far-fetched. It consists of circuits that operate simultaneously in the continuous and discrete domains. This kind of equipment is described by a combination of differential equations and difference equations. The question arose, “How does one design filters for this purpose?” Filter theory was available only for the continuous domain. For the new situation, you could not apply the standard results. At the company in Antwerp, they asked me to join the recently formed switching lab to handle their transmission problems. This was to be my second assignment, not for computer design like it had been for Belevitch, but for resonant transfer in electronic switching.

There’s a book on PCM by Cattermole, and also an early book on transistors. He’s a very clever person. He had elaborated on a first theory of resonant transfer. Cattermole laid out a good theoretical basis, and I started with his work and built a more complete theory of resonant transfer circuits. That’s what I used later for my doctoral work. Belevitch, who was an adjunct professor at Louvain University, was the official advisor. I had done the work 100 percent on my own, and I had seen Belevitch only twice in the course of my studies: once to ask if that kind of subject would be acceptable, and the second time when I had a number of results together and asked him if that was okay. I explained it for an hour to him. He did not read my dissertation before I had officially submitted it. He said, "Just submit it." He has said himself that he was not really an advisor in the sense that he discussed the subject with me. But that’s not important. I appreciate him very highly.

Anyhow, once again passivity came into play. They had been working very strongly in Antwerp, for instance, on electronic switching systems designed on the resonant transfer basis. Similar work was also going on at Ericsson, at Bell Labs and here at Siemens in Munich. There was a major problem in it: crosstalk. Something one had not expected. You had pulses that were of the order of, let’s say, one hundred microseconds apart. We used 10 kHz sampling frequency. Others used 8 kHz sampling frequency. The pulse width was narrow, so you could get frequencies up to the megahertz range.

As you might know, one of the most critical issues in telephone equipment is indeed crosstalk, especially what we call “intelligible cross-talk.” Intelligible crosstalk is when you can really listen in and understand the other conversation. In carrier telephone equipment you primarily have unintelligible crosstalk, for instance between signals in oppositely oriented spectra. Then you cannot understand the interfering conversation, so that requirements are not nearly as severe as when you can understand it. Intelligible crosstalk is something you really have to avoid, because otherwise the telephone company cannot guarantee confidentiality of a conversation.

In PAM switching systems, everybody uses the same frequency range and parts of the same switching equipment. The unavoidable couplings at the higher frequencies will produce intelligible crosstalk, so you must have 80-decibel suppression. They couldn’t achieve this under the most extreme conditions. That is really what stopped the development of PAM. They realized that the only way to do electronic switching in the time domain is to use PCM, and they had to wait until transistors were cheap enough to build competitive PCM equipment. That was the breakthrough for electronic time domain switching. Earlier electronic switching used space domain multiplex, as they called it: the switch itself was electronic, but there was no sampling process. Time domain multiplexing became feasible only through PCM.

I myself, however, had been working on resonant transfer for PAM electronic switching. Someone else with interest in this was Poschenrieder, who was a famous filter man at Siemens who later moved up into top management. He was probably the filter man who moved up highest in the hierarchical ladder of any company. He had got hold of my dissertation and had realized that some of my results, although obtained from different equations, were practically identical to what people used in microwave filtering. He realized that resonant transfer was a way of building electronic filters that had the overall behavior of passive circuits. He published his results, and I then showed in a paper how you could refine his approach and apply it more generally: how you could design a broad range of filters based on principles derived from resonant transfer ideas but mathematically equivalent to the transmission line filters that microwave people used. Of course, this was still all analog technology, and in fact was closely related to what later became known as switched-capacitor filters. Then digital filters came around, and I thought I really should learn something about them.

**Nebeker:**

This was in anticipation of the PCM?

**Fettweis:**

No, that was when I was already at the university. Maybe I should mention that first. I was working in industry in the area of transmission, and primarily carrier telephone equipment, but at the same time on resonant transfer circuits. We also tried to combine switching with the carrier telephone system. From sampled signals, you can indeed recover continuous signals in a higher frequency range. And there was thus the question of how to design the filters if they had to be band-pass filters instead of low-pass filters. This was a very challenging time, and I was involved with all these activities. While I was doing this work, Belevitch was not my boss anymore. He was running the computer center I have mentioned.

At the time, this center was important. The computer was relatively primitive, but originally it was the only one in Belgium, so it was something of central interest. In the meantime, computers had become commercially available, in particular due to IBM and some others. So more and more people got their computers, and the computing center lost its reason for being. Then Philips offered Belevitch the opportunity to set up a research lab in Belgium.

**Nebeker:**

A new research lab?

**Fettweis:**

A new research lab. It was a branch of the Eindhoven Research Lab. Belevitch took the offer. He moved with practically his whole team. His people became the core of this new Philips Research Lab. Belevitch asked me to join. I was supposed to become his deputy for running the lab. And I had, in fact, accepted. I had given my resignation to the company in Antwerp, and they were upset that I was joining their big competitor, Philips, in Brussels.

There were also private matters involved. Although it takes a little bit of time, I should mention this, because it’s quite important in my scientific life. When I was in America, I met my wife, who is from Detroit. We met in New York in ‘55. We got engaged while I was still in the States, and then I returned to Belgium, and she came later to join me. We got married in my hometown, Eupen, in the beginning of ‘57. We have been happily together ever since. We have five children, and ten grandchildren. That is why I have a very strong link to America, because I have an American wife, and I also lived in New York, not too far from where you are living.

We lived in Antwerp in Belgium, which is Dutch speaking, with a number of people speaking French. At the company, there was always this problem of two languages, Dutch and French. But my wife was English speaking, and I was German speaking, so we had the problem of being faced with four languages. My wife has always spoken English to our children, from earliest age on, and I have always spoken German to them. We thus had two languages at home, plus two other languages outside of the home. That has been a major factor in my career.

**Nebeker:**

Yes. I see.

**Fettweis:**

Brussels held an interest for us. At that time my oldest child was approaching school age. In Brussels there was the European school of the European community, and a German school sponsored by the German government. The European school had a German section. Anyhow, our children could have gotten an education in German. English was not feasible at that time. England was not yet part of the Common Market. Therefore, you had the option in the European schools to study in French, German, Dutch, or Italian, the original four languages of the European Community. Of these, our choice was German, so we thought of sending our children to the German section of the European school. That possibility existed in Brussels, but there was no such school in Antwerp. That was another reason why we thought it might be a good choice to go to Brussels.

Belevitch and I were supposed to start setting up the lab together the first of January ‘63. Actually, it was the second of January, because the first is a free day. As I mentioned, I had already resigned from my job in Antwerp. Actually, when I joined the company in Antwerp in 1951, they told me I could do my doctoral work while I was with the company. But I was always involved in more urgent matters, and I never really could sit down to do it properly. But while I was preparing to join Belevitch, I received an offer for a full professorship at Eindhoven in Holland, at the technical university. I had learned Dutch because I had been working in Antwerp, and I had given lectures in Holland a few times in Dutch at meetings. Actually, at one of these meetings I spoke for the first time in public about the sensitivity property I mentioned before. That was still eight years before Orchard published it. It was a two-hour lecture in a summer course on what used to be called modern filter design methods. That’s how the Dutch had come to know me, and knew that I could speak Dutch well enough to take a professorship in Holland.

My wife and I found that offer attractive, and I talked to Belevitch about it. Of course, he was disappointed that I wouldn’t come. He still wanted me to join him at least for a short time. I was supposed to start in Holland in September ‘63, so he said he would let me finish my doctorate and then I could go to Eindhoven. But then the company in Antwerp told me, “If you stay with us, you don’t have to work for us anymore; you can just concentrate on writing your dissertation, as long as you don’t go to join our competitor.” I asked, “What about the salary?” They even gave me a good raise. This made us decide, “Okay, let’s do that.” That way we didn't have to move to Brussels for an interim period.

My daughter had to start school in September. Eindhoven is close to the Belgian border. About thirty miles from Eindhoven is the Belgian town of Moll, where the Belgian Atomic Research Center and a branch of the European Atomic Research Center are located. They also have a European school there. I could commute between Eindhoven and Moll. So we had solved our school problem. My children went to the German section of the European school in Moll. That’s how I came to Eindhoven. Once in Eindhoven I continued to advise the company in Antwerp. I continued consulting on resonant transfer.

That’s when, as I mentioned before, Poschenreider had written to get my dissertation. I very much liked my work in Eindhoven. Of course, I had to commute between Eindhoven and Moll, which was not so nice. Then I got the offer for a professorship in Bochum in Germany. The offer came through two people who were teaching at universities in Germany but whom I knew from that radar training during the war. One was Saal, who had been one of our technical teachers. The other one was a friend of mine with whom I had even roomed. I thought seizing an opportunity like this would make things easier for us.

Anyhow, that’s how we finally came to Germany. When I arrived in Germany, integrated circuits were just beginning to be developed. Digital signal processing wasn't visible: nobody I knew talked about it yet. I started to plan a scientific career of research based on resonant transfer and on other insights I had obtained from it; also, on how you could integrate the filter with the modulator to design carrier telephone systems as a unit. In the mid-1950s, I had developed a theory for this. It was not so easy, but it did work. I had made a full analysis of how you could reduce the analysis to a finite number of equations and still be rigorous. I had obtained realizability conditions. So here was a really interesting research program I had planned, but it was still in the classical mode of thinking about carrier telephone systems.

**Nebeker:**

Were these still analog circuits?

**Fettweis:**

They were analog circuits, but with electronic switches. But then digital filters came. You probably know Jim Kaiser and his famous paper on digital filtering. I studied that very thoroughly, and I told myself that there were problems of accuracy, and so on. I asked myself, “Can we get the passivity and losslessness properties into these algorithms?” How could we do it? How can you get power, energy, loss, and losslessness into an algorithm? I was thinking that we indeed have resonant transfer, which is described at least partly by difference equations. For digital filters, there are only difference equations. How can we do something in the digital domain, which is the equivalent of resonant transfer filters?

Before I even thought of the resonant transfer, I tried to do it on the basis of voltage and current, and then realized that is feasible only with very mediocre properties. You had to do it in a different way. So I tried to convert the equations of the resonant transfer into equations that would be realizable by digital algorithms involving multiplications, additions, and delays, i.e. the kind of operations that are commonly available in digital signal processing. That’s how I discovered these wave digital filters. I first obtained them by means of resonant transfer. I never published how I did that. It was much more complicated than the approach I did publish, and I did not even mention that I had obtained the result originally from resonant transfer. But, in fact, I did. The resonant transfer work was fundamental to getting to this signal processing method.

Afterwards I realized that you could obtain the same results in a much simpler fashion by using an approach different from resonant transfer. It was far simpler and far more general to do it in a direct way. This involved using what are called “wave quantities,” or simply "waves." Waves are related to something called a scattering matrix in electrical engineering. The scattering matrix originally came from physics and was used intensively in the early ‘30s by Heisenberg for studying particle scattering. Physicists working during the war at the MIT Radiation Lab introduced the scattering matrix into microwave engineering. Once again, therefore, radar played a big role.

**Nebeker:**

I see.

**Fettweis:**

Independently of this, in circuit theory, Belevitch hit upon the scattering matrix in Antwerp, and that was an essential part in his own dissertation. He developed circuit theory, not microwave engineering, on the basis of the scattering matrix. Unfortunately, in the U.S. this approach is little used. In America they have a traditional way of presenting the insertion loss method, like the one Darlington used. The scattering approach is a much easier way, and some people in America know and appreciate it. In particular, at the Brooklyn Polytechnic they realized the importance of the scattering approach and have been promoting it strongly. Carlin, who later went to Cornell University, was particularly important in this.

This is one of the problems in the U.S. I hope you don’t mind if I say this. They have difficulty adopting ideas that come from outside the country. That is one of the reasons that they haven’t adopted the metric system yet.

**Nebeker:**

It’s sort of the “not invented here" syndrome.

**Fettweis:**

Yes. The scattering matrix in microwaves is absolutely fundamental and is always used in American textbooks. It was introduced into this field in the States. However, there may be a chapter in some American books on circuits on the scattering matrix, but they do not systematically build the whole theory on the basis of the scattering matrix in the States.

**Nebeker:**

I see.

**Fettweis:**

There is the famous book by Belevitch on classical circuit theory, as he calls it, which was written in Belgium and published in America. But traditions are different. In Belevitch’s approach, it’s an analysis tool and a tool for the synthesis of circuits. It’s to simplify the theory. The solutions you arrive at are the same, but the theory becomes much simpler if you use the scattering approach as Belevitch does. I have gone a step further. Using scattering ideas, I arrived at my wave digital concept. Waves are defined as linear combinations, in the simplest cases, of voltage and current at a port. We call them waves because they are closely related to physical waves. It's as if you had infinitesimal transmission lines in front of the ports, with physical waves propagating along these lines. That is how waves at ports are related to physical waves, although you can formally define them without thinking about physical waves at all. But it is fundamental that you have something incident, and something reflected. If you have voltage and current, which comes first? It's like the "chicken and egg" problem.

At a usual port, you cannot say the current creates the voltage or the voltage creates the current. Both are there on an equal footing. But in wave quantities, you can say there is an incident wave and there is a reflected wave, or a transmitted wave, or even more generally, a scattered wave. You have something that is arriving and something occurring afterwards. This is very important for digital signal processing, because digital signal processing, as opposed to analog signal processing, requires sequential ordering. You must carry out operations in sequence. You must have a result, and then you can carry out the next operation, and the next one, and so on. The Kirchhoff equations in a circuit are instantaneously satisfied, so to speak. There is no sequential ordering.
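The wave quantities Fettweis describes can be sketched in a few lines (a hypothetical illustration; the port values and the helper names `to_waves`/`from_waves` are not from the interview). In the simplest case, the incident and reflected waves at a port with port resistance R are linear combinations of the port voltage and current:

```python
# Wave quantities at a port (hypothetical values for illustration).
# In the simplest case, for port voltage v, current i, and port
# resistance R, the incident and reflected waves are
#   a = v + R*i   and   b = v - R*i.

def to_waves(v, i, R):
    """Voltage and current at a port -> incident and reflected waves."""
    return v + R * i, v - R * i

def from_waves(a, b, R):
    """Incident and reflected waves -> voltage and current."""
    return (a + b) / 2, (a - b) / (2 * R)

v, i, R = 3.0, 0.5, 2.0       # assumed port values
a, b = to_waves(v, i, R)      # a = 4.0 (incident), b = 2.0 (reflected)
assert from_waves(a, b, R) == (v, i)
```

Unlike voltage and current, the pair has a natural order: the incident wave a arrives first and the reflected wave b follows, which is what gives an algorithm built on waves its sequential ordering.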

It's a matter of computability, in other words. You have an algorithm, which is by definition computable, so you must be able to assign a sequential ordering in which the operations are to be carried out. I realized this only later, I must confess, but it is an important point behind these wave digital ideas. On the one hand, you can carry over passivity and losslessness. On the other hand, you can ensure computability owing to the fact that we base the analogy not on voltages and currents but on waves. Those are really the two most important aspects.

Although I did not have the solution immediately, I was quite optimistic that I could also solve the limit cycle problems, or more generically, as I call it now, the "robustness" problem in algorithms. This problem is due to numerical inaccuracies. You have all kinds of disturbing effects in a digital algorithm, and I knew that the stability method of Lyapunov is by far the most effective, but you need the Lyapunov function, and usually one doesn’t have one. But in a passive circuit, stored energy is a natural Lyapunov function. It is a function that can only decrease, thus never increase, due to passivity. Therefore, at a minimum of the stored energy there must be a stable point, because if the state moves out of the minimum, it will always return to it due to dissipation. This natural stability allows you to ensure absence of limit cycles.
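The stored-energy argument can be illustrated with a toy discrete-time system (a schematic sketch with assumed numbers, not an actual wave digital filter): a damped rotation is passive, so the energy of its state can only decrease, and no limit cycle can persist.

```python
import math

# Toy illustration of stored energy as a Lyapunov function (a schematic
# example with assumed numbers, not an actual wave digital filter): a
# damped rotation contracts the state, so the "stored energy" |x|^2
# can only decrease, which rules out sustained limit cycles.

def step(x, r=0.95, theta=0.3):
    """One step of a damped rotation; passive because r < 1."""
    c, s = math.cos(theta), math.sin(theta)
    return (r * (c * x[0] - s * x[1]), r * (s * x[0] + c * x[1]))

def energy(x):
    """Stored energy of the state."""
    return x[0] ** 2 + x[1] ** 2

x = (1.0, 0.0)
energies = [energy(x)]
for _ in range(50):
    x = step(x)
    energies.append(energy(x))

# The energy decreases monotonically toward the stable point at zero.
assert all(e1 > e2 for e1, e2 in zip(energies, energies[1:]))
```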

The robustness problem is far more general: how does the actual algorithm differ from the ideal one, which has infinite precision? It is much more complicated than the so-called zero input limit cycle problem, but it is related to it. If I started from a passive circuit, I was quite confident that I could get access to a Lyapunov function and therefore also solve the stability and robustness problem. That is an essential advantage of wave digital filters.

**Nebeker:**

When did you first publish or present this?

**Fettweis:**

I first discovered it in ‘69. In the meantime I had become a consultant to Siemens. It must have been in late summer or early fall of '69 that I gave an internal talk at Siemens about the approach. Siemens later patented it.

**Nebeker:**

What was Siemens' interest in it? Did they have a particular application in mind?

**Fettweis:**

Siemens was very strong in filtering, because filtering was the basis of communications. At the time they didn’t have an immediate interest in it.

**Nebeker:**

So it was in the communications area that they saw potential?

**Fettweis:**

They realized that it was something fundamental in the communications area. That’s why they patented it in twenty countries. Only a few inventions get patented in that many countries, because then you cover practically all major industrial countries and all major users.

A few days after the patent was filed I gave the first public talk about it in Erlangen, at Schuessler's seminar. Now this is a very touchy subject, I must confess. I submitted a paper on this to the Transactions on Circuit Theory, now called the Transactions on Circuits and Systems, and they rejected the paper.

**Nebeker:**

On what grounds?

**Fettweis:**

They said it needed experimental verification before they could publish it. It’s one of the classical examples of what can happen to a truly fundamental paper. It's certainly the most fundamental paper I have written, no question about it. It was published here in Germany, but not by the IEEE, and the reason was that they indeed rejected it.

**Nebeker:**

Do you think that it was because it was regarded as too fundamental?

**Fettweis:**

Yes. If you write a paper in line with the present way of thinking, in which you enhance the work of some others, people appreciate it. If you come up, like I did, with a completely different way of looking at digital signal processing by going back to classical circuits and showing how you can carry ideas from classical circuits over to digital signal processing, then you go against the trend. People were largely thinking exclusively in terms of digital signal processing. They felt they could forget the classical area. In the new field they were doing things completely differently. This paper went against their line.

There were other reasons, too. I’ve never found out exactly what happened. Digital signal processing had originated in America, and I came with a completely different approach to it. Schuessler has always worked within the American mainstream of thinking. He was the one who implemented digital filters very early, probably before the Americans did. My work, however, has been quite different. There are also Americans who have had a similar experience. I’ve been told, for instance, that the paper Richard Bellman originally submitted on his very famous dynamic programming method was also refused.

**Nebeker:**

Well, it’s always risky for an editor to accept something that’s out of the mainstream.

**Fettweis:**

That’s what I was going to say. It can also happen to an American author who takes a fundamentally different approach. Anyhow, the paper was eventually reprinted by the IEEE, together with several other papers of mine, in one of the IEEE Press books edited by the IEEE Group on Audio and Electroacoustics. Later, as you probably know, I was asked to write an invited tutorial paper in the IEEE Proceedings. It was published about ten years ago, and it is almost sixty pages long. It was almost as long as a book, because the Proceedings use large pages with a very dense way of printing. I spent the summer of ‘74 at Bell Labs in North Andover, Massachusetts, where we did some designs.

There are many different aspects to wave digital filters, which we have gradually solved. The fact that they did not jump on the approach in America was also an advantage. We had more time, so we could be more relaxed to work out some of the details ourselves. If, for instance, the Americans had really adopted the paper, then many people might have worked on the subject. If it had gotten published in America, then maybe we would not have had the time to do so much of the work ourselves, because others would have done it sooner. So, everything has two sides.

We solved the robustness problems. We established different structures. We generalized our approach to several dimensions. In multi-dimensional signal processing you can do filtering in time and space, for example. We showed that you can do this based on wave digital ideas, and preserve all the nice properties. These properties all carry over to any number of dimensions. But you should always start from a classical circuit, called the reference circuit. Then, by certain rules, you transform the reference circuit into a digital algorithm, a wave digital filter. You do that also if you have several dimensions. Then you start from a multi-dimensional circuit, for instance a multi-dimensional Kirchhoff circuit with inductors, capacitors, and so on.

What does it mean, a multi-dimensional Kirchhoff circuit? It means an abstract representation by a structure like that of a classical circuit, but instead of a single independent variable, there are several independent variables. From the outside, so to speak, it looks exactly the same as in the one-dimensional case. It is represented by inductors and capacitors, but there are usually relatively few components in it. It’s not a discrete model of a distributed system, like the way the famous Gabriel Kron had done before the war. His model was one-dimensional, but it extended into all directions, so there were many components. Here, it’s really a compact representation. But it's multi-dimensional in the sense that there are several independent variables. In the constant case, you can again analyze it in the frequency domain, but you have several frequencies, one associated with each of the different dimensions. But you must have a multi-dimensionally passive multi-dimensional circuit. You don’t have physical concepts of passivity anymore. So, it’s kind of a mathematical generalization of the one-dimensional case.

One problem is the need for a multi-dimensional reference circuit. There was practically nothing known on the synthesis of multi-dimensional circuits. A general multi-dimensional circuit theory, for instance a comprehensive insertion loss theory, is not feasible due to some basic mathematical difficulties. It has to do with the fact that a polynomial in several variables cannot in general be factored, while, as you know, a polynomial in one variable can always be factored into elementary factors.
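The one-variable half of this fact can be checked numerically (the example polynomial and the Durand-Kerner root finder are illustrative choices, not from the interview): every polynomial in a single variable splits into linear factors over the complex numbers, whereas a polynomial in several variables, such as x**2 + y**2 + 1, in general does not factor at all.

```python
# Numerical check of the one-variable fact (the example polynomial and
# the Durand-Kerner iteration are illustrative choices): a polynomial
# in one variable always splits into linear factors over the complex
# numbers. In several variables no such factorization exists in
# general; x**2 + y**2 + 1, for instance, is irreducible.

def poly(coeffs, x):
    """Evaluate the monic polynomial x**n + coeffs[0]*x**(n-1) + ..."""
    r = 1.0 + 0j
    for c in coeffs:
        r = r * x + c
    return r

def roots(coeffs, iters=200):
    """All n roots of a monic degree-n polynomial (Durand-Kerner)."""
    n = len(coeffs)
    zs = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]  # distinct starts
    for _ in range(iters):
        nxt = []
        for idx, z in enumerate(zs):
            d = 1.0 + 0j
            for jdx, w in enumerate(zs):
                if idx != jdx:
                    d *= z - w
            nxt.append(z - poly(coeffs, z) / d)
        zs = nxt
    return zs

# x**3 - 1 splits into three linear factors (the cube roots of unity).
rs = roots([0.0, 0.0, -1.0])
assert all(abs(poly([0.0, 0.0, -1.0], r)) < 1e-8 for r in rs)
```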

**Nebeker:**

Yes.

**Fettweis:**

That is why a general theory is not feasible, and has never been found. I am convinced it’s not feasible. That’s why we could not draw from an existing body of knowledge. We had to develop what we needed ourselves, i.e. a theory oriented toward specific applications. We have found very attractive solutions for specific classes of applications, and I am therefore quite convinced that for every class of applications there exist specific multi-dimensional structures which meet its needs. But not many people seem to be aware of this work. It is not well known. From a circuit theoretical point of view, it’s very attractive.

We have also extended the approach to adaptive circuits. Adaptive filters are very important, because it is much easier to make digital algorithms adaptive than analog circuits. That’s why digital adaptive signal processing is far more important than analog adaptive signal processing. It is feasible to do this with wave digital filters, with considerable advantages concerning the stability and robustness. We have done some work on it, and it works very nicely, but there is still quite a bit to do.

**Nebeker:**

I assume you have had a group at Bochum.

**Fettweis:**

Yes.

**Nebeker:**

Where else have people worked on the wave digital filters?

**Fettweis:**

There are groups at universities in Germany and Sweden. There has been a good amount of activity in Delft and Eindhoven in the Netherlands, in Mons and Leuven (Belgium), Budapest, Belgrade, Poznan, Neuchâtel, Vienna, Tampere, Sevilla, London, Edinburgh, Tokyo, Sapporo, Shanghai, Cairo, etc. In England, for instance, there’s Lawson, who with several of his students has done quite a bit on it. Work is also being done in industry, e.g. at Siemens. In the United States and Canada, there have been, to my knowledge, relatively fewer activities, but I should mention Notre Dame, Stevens Institute of Technology, also Stanford, Minnesota, Montreal, Victoria, etc.

I have to jump ahead a little bit. I am retired as a professor because we have to quit at sixty-five. That’s mandatory for us, so I retired on the first of March of ‘92. I’m still an emeritus professor, and I have an office at the university. But I was at Notre Dame University as a visiting distinguished professor, from ‘94 to ‘96. I taught there for four semesters. During that time, somebody from Bell Labs told me that Bell Labs are using wave digital filters in the military domain because they are more robust, and thus more insensitive to all kinds of disturbances.

**Nebeker:**

Forgive me for interrupting, but I’m eager to get sort of your general views on that field. Is it your view that eventually this is going to be the dominant, or a dominant, way of viewing filters, or that it will only be in certain application areas?

**Fettweis:**

The major drawback is that the theory is more complicated. Many people are afraid of the theory. You could overcome this for the most part if there were somebody putting out suitable programs, possibly commercially. Then people could design wave digital filters by just manipulating a few buttons. We have developed programs of that sort at Bochum, but we have not commercialized them. At the university we practically have no way of preserving the continuity, and that’s one of the major problems. You have to have maintenance for computer programs, like a car has to be maintained. At the university we can't do that, because the graduate student is gone when he is finished with his doctorate, and the next one has to do something else. There is no continuity other than my own. That is one of the major difficulties of university training.

Siemens uses wave digital filters quite a bit, in their PCM equipment, for instance. Codecs made by Siemens, and other companies, even in America, use wave digital filters for doing the filtering. I was in close contact with Siemens, and they owned the patents, so they have easy access. They have the tradition. But others sometimes have had difficulty in finding out how to do it.

If you have uncritical applications, it is not necessarily an advantage to use this method. In uncritical applications, nothing serious can happen anyhow. If, for instance, you just analyze a piece of speech, you can use almost any approach on the computer. But it is a different story for equipment that is in long-term operation, because you can for instance have disturbances from the outside induced into it. You can get into a state of instability which does not easily vanish. So, if you have critical applications, it may be very much worth looking into possibilities of using wave digital filters, as for the military applications at Bell Labs.

**Nebeker:**

From your introduction of the technique to the present, has there been a time when there suddenly was more interest, or was it a gradual change?

**Fettweis:**

Certainly, in a sense, there was suddenly more interest, not only in my approach but also in all of signal processing: for example, the patent here at Siemens. They patented it, because they realized it represented a very basic idea. To be on the safe side, they took the patent in many different countries. But they were not really convinced that digital signal processing would ever play a major role. I mentioned a talk where Schuessler and I were discussing the possibility of having floating point arithmetic. I remember Poschenreider raising his hand and saying, "That would be a machine, if you did this." In other words, that would be a huge machine. They didn’t really believe that it would ever materialize, you see. The breakthrough came when integrated circuits progressed to the point where digital signal processing could really be a practical alternative to other methods. So in the early stage, people looked at it with interest. We got, perhaps, quite a bit of admiration, but it was a skeptical admiration.

**Nebeker:**

They felt that it wouldn’t be practical.

**Fettweis:**

Not in the long run. Europeans are more skeptical of such developments than Americans. Americans are much more easily drawn toward new ideas than Europeans. That’s something very fundamental in the nature of Americans, as opposed to the Europeans.

That’s how digital signal processing was perceived. For wave digital filters, I would say that Siemens has used them from the beginning. There were people in America, like Darlington, who saw the potential. The first time I gave a talk on the subject in America, he told me “if ever digital filtering, then wave digital filtering.” That was his conviction. But, you see, he came from classical circuits.

**Nebeker:**

Yes. So it appealed to him.

**Fettweis:**

It appealed to him. It would also make his own contributions of even more lasting importance. The others, who thought we could forget classical circuits, said, “No, that’s too complicated.” So, the scientific community was very divided about it. There was somebody here at the conference who told me that a hearing aid now in production uses wave digital filters, because this turns out to be more efficient for the purpose. The point is, really, that you get the advantages without having to spend extra money. It depends on the structure one uses, but the solutions do not require more equipment, so they don’t have to be more expensive than other solutions, if one knows how to do it.

**Nebeker:**

Is it especially in hearing aids that stability is important?

**Fettweis:**

Yes, stability and sensitivity. Sensitivity plays a completely different role in digital signal processing than in analog signal processing. In analog signal processing, sensitivity is primarily important because of manufacturing tolerances, aging problems, and temperature effects. None of these play a role in digital signal processing. For manufacturing, you know which digits you have to implement. Whether you build it here or in Russia or in any other place, you obtain exactly the same algorithm. Temperature effects and aging don’t play a role, as long as the circuits don’t go completely off, i.e. as long as the digital operations remain correct.

Sensitivity in digital signal processing plays a double role. The first one has to do with the accuracy with which you have to implement the coefficients. The number of bits that you have to use for this indeed increases as sensitivity increases. The second role is related to the first one and concerns the dynamic range. Dynamic range is the ratio of the upper limit of the signal to the noise level. The noise level is determined by round-off, and the upper limit is due to saturation.

Between the two extends the dynamic range. In digital signal processing, unfortunately, they usually simply say “noise.” This assumes the signal to be normalized in such a way that its value is between -1 and +1. If that’s what one uses, the signal-to-noise ratio and the noise alone are essentially equivalent, but what really counts is the dynamic range. It was intuitively clear to me when I started with wave digital filters, as I later proved in a paper, that there is indeed a strong relationship between dynamic range and sensitivity. Due to this relationship, an algorithm that is less sensitive tends to have a better dynamic range performance.
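The link between word length and dynamic range can be seen in a small numerical experiment (a schematic sketch; the signal statistics and word lengths are assumed, not from the interview): rounding a signal normalized to [-1, 1] to b bits leaves a round-off noise floor that drops by about 6 dB for every additional bit.

```python
import math
import random

# A small experiment on dynamic range (signal statistics and word
# lengths assumed for illustration): for a signal normalized to
# [-1, 1], rounding to b bits leaves a round-off noise floor about
# 6 dB lower per additional bit.

def quantize(x, bits):
    """Round x in [-1, 1] to a fixed-point grid of the given word length."""
    q = 2.0 ** -(bits - 1)   # quantization step (one bit is the sign)
    return round(x / q) * q

random.seed(0)
samples = [random.uniform(-1.0, 1.0) for _ in range(100000)]
for bits in (8, 12, 16):
    q = 2.0 ** -(bits - 1)
    err_power = sum((x - quantize(x, bits)) ** 2 for x in samples) / len(samples)
    dr_db = 20 * math.log10(1.0 / math.sqrt(err_power))
    # Theory: rms round-off noise is q / sqrt(12), i.e. about 6 dB/bit.
    expected = 20 * math.log10(math.sqrt(12.0) / q)
    assert abs(dr_db - expected) < 1.0
```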

A similar situation exists also in analog signal processing. I will talk about this a little bit more in a paper that I have been asked to write. It will be a contribution on sensitivity for a major Wiley Project, the Encyclopedia of Electrical Engineering.

**Nebeker:**

We talked before about this overlap between the concerns of the Signal Processing Society and the Circuits and Systems Society. Were people from the circuit side more likely to adopt wave digital filters?

**Fettweis:**

Definitely, especially the older circuit people. That is certainly true because we carried basic ideas from circuits into signal processing. I mentioned Darlington, of course, and other people like Bose who embraced it very strongly from the beginning. The other side is a little bit more skeptical.

There’s one important part which we haven’t touched upon at all but about which I would like to say a few words.

The relationship to numerical integration has come up only relatively recently. In the long run it could be quite important. It gives a new outlook on how digital signal processing can be applied to numerical methods. In numerical integration, again, you have stability problems. That means an algorithm for numerical integration can be unstable even though the underlying system is stable. This can occur in relatively simple situations, even in ideal algorithms, i.e. under the assumption of infinite precision, in which case it is due to discretization of the independent variables.

Another problem is the numerical inaccuracies due to finite precision. We know that in digital signal processing this can cause trouble; therefore, it also can cause trouble in numerical integration. It appears that practically no one has gone deeper into this, because it was not known how to approach the problem. It concerns very intricate, highly non-linear phenomena, so it’s not accessible to strict analytic theory. But one cannot ignore it altogether, especially if one wants to have very large computational procedures. One needs such procedures in field computations, where one is dealing with multi-dimensional systems described by partial differential equations. One may, for very complicated problems, have to compute for days, weeks, or perhaps even months, even with extremely fast computers. The size of the problem can easily become gigantic. If the same program runs for a long time, say on a massively parallel computer, and if something goes wrong once at one place, you must make sure that it’s not destroying the validity of the entire computation.

I call this the robustness problem. It’s not what control people call robust, although all robustness problems are related somehow. It concerns the difference in behavior between the ideal and the actual algorithms and how you can make sure that your algorithm is as resistant against disturbances as possible.

For a long time I had been thinking one should be able to apply wave digital ideas to numerical integration. For a single dimension that is obvious. We start from an electric circuit, or better, a Kirchhoff circuit. I should indeed say Kirchhoff circuit, because the essential point is that the Kirchhoff laws are valid whether the circuit is electric or not. The Kirchhoff circuit represents a physical system. We build an algorithm that behaves essentially like that original circuit. Therefore, it solves the differential equations that describe the system. In that sense, a wave digital filter doesn’t really have to do filtering, but it does numerical integration. So whenever you can represent a system by a Kirchhoff circuit, whether it’s mechanical or whatever, then you can apply the wave digital ideas, produce an algorithm that integrates the original differential equations, and this algorithm will have all these good robustness features.
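A minimal sketch of this idea, under simplifying assumptions (a trapezoidal discretization of a lossless LC tank with assumed component values; this is illustrative, not Fettweis's actual wave digital construction): because the discrete algorithm inherits the passivity of the reference circuit, its stored energy is preserved, and the integration cannot drift into instability.

```python
# Trapezoidal (bilinear) discretization of a lossless LC tank, a sketch
# in the spirit of wave digital numerical integration (component values
# and step size are assumed; this is not Fettweis's actual derivation).
# The reference circuit equations are  L di/dt = -v  and  C dv/dt = i.

L, C, h = 1.0, 1.0, 0.1   # inductance, capacitance, step size (assumed)

def step(i, v):
    """One trapezoidal step; the small implicit system is solved exactly."""
    a, b = h / (2 * L), h / (2 * C)
    i_next = ((1 - a * b) * i - 2 * a * v) / (1 + a * b)
    v_next = v + b * (i + i_next)
    return i_next, v_next

def energy(i, v):
    """Stored energy of the reference circuit."""
    return 0.5 * (L * i * i + C * v * v)

i, v = 1.0, 0.0
e0 = energy(i, v)
for _ in range(1000):
    i, v = step(i, v)
# Passivity carries over: the discrete energy is preserved to rounding
# accuracy, so the algorithm stays stable over arbitrarily long runs.
assert abs(energy(i, v) - e0) < 1e-9
```

The trapezoidal rule here acts as a Cayley transform of the lossless circuit's equations, which is why the discrete energy is conserved rather than slowly growing or decaying.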

**Nebeker:**

So what you’re saying is that the numerical analysts who work on such problems might well make some progress by adopting that kind of an analysis?

**Fettweis:**

Exactly.

**Nebeker:**

Do you know if any have taken advantage of that?

**Fettweis:**

That is the next thing, because most of these people don’t--

**Nebeker:**

They don’t follow engineering.

**Fettweis:**

Yes. They don’t understand how the method works, because it’s more complicated than other methods of numerical integration. That’s why it’s hard to understand. But my two years at Notre Dame were for that purpose.

**Nebeker:**

Oh, I see. Specifically for the numerical integration?

**Fettweis:**

Numerical integration of partial differential equations. That’s why I was there. They have picked up the approach and are working on it. That’s why they asked me to join.

**Nebeker:**

Are these mathematicians, or numerical analysts?

**Fettweis:**

They are in computer science.

**Nebeker:**

Computer science?

**Fettweis:**

Yes, the Department of Computer Science and Engineering at Notre Dame. Steve Bass, in particular, is the head of the department. Quite some years ago there was already interest in ordinary differential equations, but as I said, it’s relatively straightforward. What is not so trivial is the non-linear case. But this works very nicely as well. It has been used for instance to analyze chaotic behavior in circuits. It’s also being applied to examine solitons in distributed systems described by nonlinear partial differential equations. This has been done by a former student of mine who is a professor at Paderborn University here in Germany. He uses an approach which I would call one-dimensional, and which is more like the one I described when I was referring to Gabriel Kron. You model the circuit described by partial differential equations by means of an electric circuit with many, many components. If you make them all small enough, then you obtain a relatively accurate model. It’s really a one-dimensional model. It’s like in the old days of electrical engineering. If you have a transmission line, you are really dealing with partial differential equations. I think it was Kennelly who did it first. You split the line into little cascaded pieces and replace each piece by a lumped circuit.

**Nebeker:**

Yes, I know Kelvin's analysis of the transatlantic cable was done in a similar way.

**Fettweis:**

Yes, that’s it. But I think the more systematic analysis was Kennelly. It was in America, I think. Kelvin was in England. That’s really how filtering came about. In a cable, the inductance is indeed too small compared to the capacitance to get distortion-free transmission. You can increase the inductance, but you cannot easily do this in a distributed fashion. So you do it locally, and then you can approximate the piece of cable between two coils by a simple shunt capacitance. You then have shunt capacitance and longitudinal inductance. That is an elementary low-pass filter. That’s how filtering was discovered.
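
The lumped approximation Fettweis describes can be checked numerically. The sketch below uses illustrative component values, not any historical design: it chains a few series-inductor/shunt-capacitor sections with ABCD (chain) matrices and shows the pass-band and stop-band of the resulting elementary low-pass filter.

```python
import numpy as np

# Frequency response of a lumped LC ladder (series L, shunt C per section),
# the Kennelly-style approximation of a loaded line, via ABCD chain matrices.
L, C, n = 1e-3, 1e-6, 3            # henry, farad, number of sections (illustrative)
Rs = Rl = np.sqrt(L / C)           # terminate in the nominal image impedance

def gain(f):
    s = 1j * 2 * np.pi * f
    sec = np.array([[1, s * L], [0, 1]]) @ np.array([[1, 0], [s * C, 1]])
    A = np.linalg.matrix_power(sec, n)
    a, b, c, d = A.ravel()
    # voltage transfer from a source with internal resistance Rs into load Rl
    return abs(Rl / (a * Rl + b + Rs * (c * Rl + d)))

print(gain(100) > 10 * gain(100_000))   # True: pass-band vs stop-band
```

The cutoff here sits near 10 kHz, so 100 Hz passes with roughly the flat mid-band gain while 100 kHz is strongly attenuated, which is exactly the filtering behavior the loaded cable exhibited.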

**Nebeker:**

I knew of that work, but I hadn’t thought of the connection between that and filtering.

**Fettweis:**

This is in fact the old Lagrange chain. Lagrange was a French mathematician and physicist. He analyzed a chain of masses and springs. That indeed has a pass-band and a stop-band, and thus frequencies at which propagation takes place, or does not take place. It is the analog of image parameter filter theory. I studied this in a theoretical physics course when I was a student with Manneback.

I must confess I've never mentioned this connection between filtering and theoretical physics anywhere, although the Lagrange chain is much older than Kelvin and Kennelly, Heaviside, Campbell and Wagner. It was mechanical, not electrical. You can, in other words, replace a transmission line by many individual components, and then you have only time as an independent variable. Then you can apply wave digital ideas to it. That’s what Meerkötter did. You can do it, but you have the problem of what we call the most critical loop. But that’s probably going into too many details. You have to be able to do operations in parallel, let’s put it that way. So you have to introduce some modifications.

The approach I have initiated in Bochum avoids this problem. Right away you get a massively parallel algorithm, which is very important. That means all points in space can be computed in parallel. If you have several spatial dimensions, it’s not easy, but it can still be done. What you have then is a wave digital algorithm in, let’s say, three spatial dimensions plus time, which is massively parallel. All spatial points at one time instant can be computed in parallel. This works, however, only for finite propagation velocity. That is the restriction, finite propagation velocity. But any physical system has finite propagation velocity. Therefore, in principle, you can do it.

However, you have problems like elliptic problems, or even parabolic problems. Elliptic problems usually do not involve time, but parabolic ones do, for example, diffusion equations or those of heat transmission or skin effect. These imply infinite propagation velocity. So, if you put a heat source at the end of a rod, theoretically heat appears instantaneously at the other end. But, of course, that’s only theoretically true and therefore something must have been neglected. The term that restricts the propagation velocity has been dropped, and you have to restore it. If you restore it, you can apply the method. But it doesn’t mean that it is advantageous to do so. Other integration methods may be more advantageous for parabolic equations.

Elliptic problems, which usually concern problems at rest, thus not dynamic problems, can be solved if you solve the underlying dynamic problem. Suppose you have conductors, and want to determine the capacitance. This is, for instance, important for integrated circuits where you may want to compute the capacitive couplings between the wires. For doing this, you can make a first guess for a charge distribution and compute the resulting dynamics by means of Maxwell's equations, until the system has come to rest. You can do that by introducing strong dissipation. Whether that method is advantageous in that case, or not, I don’t know. But it could be applied, at least in principle. It’s a kind of a relaxation approach: you could start with a coarse grid, have the system come to rest, then refine the grid, have it come to rest again, and so on until you converge on the fine grid you want to have. There are various additional tricks one can use. But the method hasn't been used for this kind of application yet.

Anyhow, I had the idea for many years that it should be possible to use wave digital principles for numerically integrating partial differential equations. I had mentioned it to others but I didn’t have time to go about it myself. Finally, I started on it, and one of my doctoral students at the time, a very intelligent fellow named Nitsche, took over the work. We showed that it really works, and with some interesting problems, too. We used a two plus one problem: two spatial dimensions, plus time. We took first only a simple linear constant problem close to electrical engineering: conducting plates. This is a generalization of the transmission line problem. The plates run not just in one direction but in two directions. Actually, mathematically, it is the same problem as if you consider Maxwell's equations in only two spatial dimensions. If you assume, for instance, that everything is independent of z, then these equations decompose into two sets of equations, which both are of the same mathematical type. We also showed how it works for the complete Maxwell equations.

Of particular interest are the fluid dynamics equations, especially those for aerodynamics. As a mathematician you may know that these are non-linear partial differential equations. They are what I call “essentially non-linear," although mathematicians tend to say “quasi-linear.” That is a somewhat misleading term, because that makes it sound as if they are almost linear. In electrical engineering we tend to use "quasi-linear" to refer, for example, to a resistor that is almost linear, and for which the results can thus be expanded into Taylor series. In such a case it may be sufficient to just take the first non-linear term and neglect all others.

Mathematicians call the fluid dynamics equations quasi-linear because they are linear in the partial derivatives. But the coefficients are strongly non-linear, so you get products of several of the dependent variables. These equations are absolutely fundamental for aerodynamics. In a sense, what Maxwell's equations are for electrical engineering, the fluid dynamics equations are for aeronautics. However, it’s not like in Maxwell's equations where a non-linearity is in first approximation a linearity, but with the addition of a corrective term.

One can extend the passivity and losslessness properties to such non-linear problems. I have shown that the fluid dynamics partial differential equations can be represented by non-linear multi-dimensional Kirchhoff circuits. Inductors and capacitors are defined by means of partial derivatives, not simply by derivatives with respect to time. There are four original partial derivatives corresponding to the original four coordinates, the three spatial ones and time. It is best to use some combinations of the four. You rotate the four-dimensional space, and then, in the new coordinates, you can describe everything in such a way that an equivalence between a multi-dimensional Kirchhoff circuit and the fluid dynamics equations is created. And once you have this Kirchhoff representation, you can use the standard wave digital ideas to derive an algorithm. In this case, a multi-dimensional algorithm.

**Nebeker:**

It sounds like you’re still very much involved in this work.

**Fettweis:**

As far as this is concerned, I’m still involved. Except that I have to slow down considerably, because I had a bypass operation last fall. Anyhow, other people are interested in this kind of work. I’ve talked to mathematicians and find it’s the same thing as what happened when I started with wave digital filters. There are the ones who realize there is something important behind it, and the others who say, “How can an electrical engineer do anything that could be of interest to a mathematician or to an aerodynamics engineer?” That is the difficulty.

Anyhow, I’m retired now. I’m still involved in it, but I don’t have money anymore, or students who could do the work. There is an enormous amount of programming work to be done to carry it out in practice. I don’t have the time for this. Also, there is more theory than for other methods. One can explain the standard methods of numerical integration to a newcomer either quickly or, at least, without too much effort, because the basic ideas behind them are relatively simple. My method draws on a very large experience in engineering, in circuits, and in digital signal processing. Also in numerics, in a sense, because I told you about all the numerical computations we had to do in the early days with the desk calculator. You could really see what was happening and what the numerical problems were. It also has to do with the very broad education Belgian engineers get: even as electrical engineers, we had a course in which the fluid dynamics equations were derived and discussed. I never thought I would go back to that material, but I am indeed doing it.

As I see it, my main contribution will be to write a book. People have asked me to write a book because they realize that the theory behind this, with all its ramifications and justifications, is very involved. It’s not available anywhere in a compact fashion. They would have to struggle through many different engineering texts that are not easily accessible to modern mathematicians. Even when we engineers used rigorous mathematics, we used mathematical language as it was used in olden days. I told you that when I was a student, the mathematicians would sit with us in class for the mathematics courses. But our mathematics professors still used the old language of mathematics. It was not Bourbaki-influenced mathematics yet. Furthermore, the mechanical people, the fluid dynamics people, and the aerodynamics people in particular, don’t like the electrical language we have been using. We use Kirchhoff circuits, which they are not familiar with. But they are probably the most important potential users of the method. Hence the difficulty: the most important potential users did not grow up in the same fields as we did.

**Nebeker:**

Well, in history, of course, there are many examples of new mathematics coming out of science and engineering. Maybe it just takes time for the rest of the ideas to spread.

**Fettweis:**

I was asked to write a book which makes it accessible, and I should. I haven't written a book about wave digital filters as such yet, either. So I realize I have to write two books, at least, one on the filtering aspect and one on the numerical integration aspect. Insofar as wave digital principles are concerned, they will be very similar, but the users are completely different.

**Nebeker:**

Yes. So the books must be different.

**Fettweis:**

The filtering people are familiar with the electric circuit and the Kirchhoff laws, and they know what, for instance, a pass-band, a stop-band, and the frequency domain are. The numerical integration people are completely different. You have to explain the basics of Kirchhoff circuits to them, but not how to design a filter, because they are not interested in designing filters. So the audience is very different, with completely different backgrounds.

When I was at Notre Dame, I taught a different course each term. My most important teaching concerned a two-semester course on this numerical integration method. This will form the basis for my book on it. I also taught a one-semester course on wave digital filters, from a filtering point of view, for the electrical engineering department. I taught the numerical integration course for the computer science and engineering department. So my activity at Notre Dame is an important first step.

**Nebeker:**

You have the lecture notes.

**Fettweis:**

I have the lecture notes as the basis for these books. There was another project which unfortunately has not gone according to plan. We had planned a major project at Notre Dame in which five of the professors were involved. Four of them were from the computer science department, and I was one of these four, and one was from aerodynamics, a well-known man in the field of computational fluid dynamics. We wanted to carry on a joint project, which at the very end might have been oriented toward developing massively parallel computers that are more dedicated to a task like fluid dynamics. It fit very nicely into the computer science and engineering department. I’m convinced that in computers quite a bit of progress is still feasible if one goes to more specialized computers. It could advance a field like aerodynamics if computers were available that are optimized for solving their equations.

I’ve talked to people from a company that specializes in massively parallel computers. There is a market for two hundred, maybe even four hundred such computers. Of course, that’s not a huge market, but there is definitely a market with the aeronautics companies, and aeronautics problems apply in many other areas, like automobiles, since these also have to be aerodynamically properly designed. For such a more specialized computer, it would be attractive to design it such that it can, in an optimum fashion, take into account an approach like this new one. For instance, it could probably work with much shorter word lengths. You probably would not need floating point arithmetic but could use fixed-point arithmetic instead. Many processors could then be placed on one chip. Maybe it will never materialize, but these are long-term visions of what is feasible. And, that’s what fascinates me for the moment. They can carry us over a long way into completely different areas.

**Nebeker:**

Well, it is the strong mathematics background that ties all these things together.

**Fettweis:**

It’s all mathematics, and the broad educational background.

**Nebeker:**

I wanted to be sure to hear a little bit about your involvement with the Signal Processing Society. I know you started many years ago, when it had different names. I see you joined the IRE in 1956.

**Fettweis:**

That’s right, in 1956. That’s when I was in the States.

**Nebeker:**

Was there a Professional Group on Circuit Theory? Is that what it was called?

**Fettweis:**

That’s right. I had just started at that time in circuit theory.

**Nebeker:**

Has that remained your principal affiliation?

**Fettweis:**

That has remained my principal affiliation. SPS has been my second most important affiliation. That’s definite. Because my background is in circuits in all its generality, I find a broader range of interest in the present Circuits and Systems Society than in the Signal Processing Society. That’s where I feel most at home.

**Nebeker:**

I was interested in what you were telling me before we started here today about how you see the historical development of these departments, and how it was that signal processing ended up in the Audio and Electroacoustics group.

**Fettweis:**

Yes, I had mentioned that, but I think it was not recorded.

**Nebeker:**

Right, I would like to get it on tape.

**Fettweis:**

I have the impression that it's due to the fact that signal processing, when it started, could only be done at very low frequency. In fact, at first, it could only be done off-line. You could record a signal, then process it slowly, and then compress it again, and so on.

**Nebeker:**

It was because it was in the speech realm?

**Fettweis:**

In speech, the audio range, at least from the telephone point of view, goes up to 3.4 kHz. For hi-fi, of course, you have to go to 20 kHz, but still the audio range is really quite low in frequency. Furthermore, processing speech off-line was also of interest to speech people.

**Nebeker:**

Because in some application areas, it had to be real time?

**Fettweis:**

Yes, but you could, for instance, process it slowly off-line, and then, as I said, compress it again, and then listen to it. It’s not like in transmission where you have to have high speed, with all the high-speed problems. I don’t see how you could reasonably decouple the transmission speeds from experimental speeds, but in speech processing where you want to see how the speech is changed, how you can for instance extract information from it, you can, in principle, do it off-line. But even if you want to do it in real time, the speed restriction is not as dramatically severe. Therefore it was natural that the audio people and the speech people thought of using digital signal processing methods. I’m convinced that's how they came to be interested in signal processing. Whereas, the circuits people usually came from higher frequency applications. At a time when integrated circuits weren't that far advanced, they didn't see any possibility of actually making use of digital signal processing, and were more reticent to go into it.

**Nebeker:**

So because of the speech processing community being drawn within this Audio and Electroacoustics group, and digital signal processing being developed initially from the speech processing people, it has grown up.

**Fettweis:**

They changed the name from Audio and Electroacoustics to Audio, Speech, and Signal Processing. Then they dropped the original, the Audio and Electroacoustics part, and reduced it just to signal processing. This brings about some conflicts with the CAS Society, which considers signal processing to be their domain also. Their main Transactions, as you probably know, is split into one which is called Fundamentals, and one which carries the words signal processing in the subtitle of the journal.

**Nebeker:**

Bede Liu told me that at one point he had proposed that the society be “Circuits, Signals, and Systems,” and thus try to draw more of the signal processing people into that society.

**Fettweis:**

This is probably another reason why my first paper on the wave digital filter met with opposition, because the Circuits and Systems Society was not sufficiently open to digital signal processing problems. They didn’t realize that they should go more openly and willingly in that direction. For instance, since my own work is really a bridge between the two areas, it has been pointed out by others, like Jim Kaiser, that I have carried circuits into the signal-processing domain. Kaiser was also very much interested in the Circuits and Systems Society. If I may say so, what really carried the traditional ideas of circuit theory into digital signal processing was the wave digital concept. So in that sense, I am probably in a good position to form a link between the two areas, and in that sense, the CAS Society should have just taken the opposite attitude. When I submitted my paper, they should have realized that this was really a way of moving the CAS Society more strongly into the digital signal-processing domain. Unfortunately, they missed this.

**Nebeker:**

We are writing, as you know, a short history of the society and its technologies. Are there any other thoughts you have, since you’ve been involved for many years, about the evolution of the Signal Processing Society and the earlier societies with different names? How have they changed over the years?

**Fettweis:**

I don’t have too much to contribute to this question, because I have not gotten involved in the broad range of topics in the Signal Processing Society. The topics covered by the Signal Processing Society with which I have been affiliated essentially involve filtering, including adaptive filtering and multi-dimensional filtering. I have not gotten involved in speech processing, so I have not followed what is going on in that area. I realize the importance of it, and admire the work that has been done in it.

One area we have been interested in was music coding. I should say perhaps that I like analytical approaches. I am rather mathematically oriented. I'm interested more in theoretical physics than in experimental physics. To get me really interested in something, it must be an object that can be described by precise mathematical equations, differential equations or partial differential equations. Now, speech processing involves quite a bit of feeling, intuition, and proper judgment. I don’t know how to formulate it correctly, but you can’t do so much with…

**Nebeker:**

With a mathematical description.

**Fettweis:**

Yes. You can use mathematics, of course, but you can never really formulate a problem precisely in mathematical terms, and then work in a strict mathematical way, optimize it in a mathematical way. That is hardly possible in areas like speech processing. In image processing it is the same, especially when it comes to pattern recognition. When interpreting images, it is very difficult to use rigorous mathematics. You can definitely use mathematics, I’m not saying otherwise, but it can only alleviate the problem. The whole problem cannot be described by a strict closed mathematical approach. Many of the topics in the Signal Processing Society go more in the direction of this kind of approach.

We were drawn toward areas like music coding, because that is associated with filter banks. We used these filter bank and multi-rate ideas very early and have applied them intensively. Wavelets, particularly orthogonal wavelets, appeared for the first time in a paper which I had written, together with Nossek, who is the technical program chairman at this conference and was at Siemens at the time, and Meerkötter whom I have mentioned. That was published in the ASSP Transactions in 1985.

**Nebeker:**

Was this one of the first papers on wavelets?

**Fettweis:**

The term “wavelet” was not around, but the problem of orthogonality was. I didn’t use the term orthogonality because it has a rather restrictive meaning. Instead, I used the more general terms unitarity, para-unitarity, and para-unitary properties. That is what we published, but it is usually overlooked. Originally, I presented this paper at the ASSP Society conference, the ICASSP '84, in San Diego thirteen years ago. The results in it are essentially those that came up later in connection with wavelet theories. I have indeed used the term para-unitarity, though it is now usually referred to as having been introduced by P. P. Vaidyanathan, who is at Caltech. But he had it from me, and I had it, in turn, from Belevitch.

Belevitch had introduced that term in connection with the scattering matrix. The scattering matrix of a lossless circuit has that para-unitarity property. It’s a mathematical property and is in fact the analytic continuation of the losslessness property from real frequencies to the entire frequency domain. Or, if you have a multi-dimensional problem, it is to the multi-dimensional frequency domain. The term “para-unitarity” was thus coined, to my knowledge, by Belevitch in his classical circuit theory. I carried it over to the digital signal-processing domain, and used it also in connection with multi-rate systems based on wave digital filters. We had shown for such systems that you could obtain complete recovery by using losslessness, thus para-unitarity.

In the impedance domain, losslessness means the real part is zero. In the scattering domain, losslessness of a frequency-independent system becomes orthogonality, but if there is frequency dependence, then at real frequencies it becomes strictly the same as what mathematicians call unitarity. A unitary matrix multiplied by its conjugate transpose is equal to the unit matrix.
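
The unitarity property is easy to verify numerically. A minimal sketch, using the normalized DFT matrix as a standard example of a unitary matrix:

```python
import numpy as np

# A unitary matrix times its conjugate transpose gives the identity;
# this is the property that para-unitarity extends off the real-frequency axis.
n = 4
# normalized n-point DFT matrix, a standard unitary matrix
U = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
print(np.allclose(U @ U.conj().T, np.eye(n)))   # True
```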

For extending this to the entire frequency domain, assuming you have real circuits, you replace the complex frequency by its negative and transpose the matrix, which yields what is called the para-conjugate. Note that at real frequencies, replacing jΩ by minus jΩ corresponds indeed to taking the complex conjugate. If you have a complex system, which one can also consider, one can generalize the para-conjugacy and para-unitarity concepts.

In wave digital filters you have waves or wave quantities, so it’s related to the scattering matrix. Therefore these properties indeed carry over. The only point is that you use for instance the z parameter instead of the complex frequency, or you use a bilinear transform thereof. Do you know what the z parameter is for the signal processing people?

**Nebeker:**

Yes.

**Fettweis:**

So z-1 divided by z+1 replaces the complex frequency s in a classical circuit. That forms the basis for deriving wave digital filters. Therefore, it’s a very simple mathematical connection, and you can thus express para-unitarity in the classical z domain. You can also express it in terms of the equivalent complex frequency, which is called Y. This variable Y is indeed equal to z-1 divided by z+1. Replacing it by its negative amounts simply to replacing z by 1/z.
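
The identities Fettweis states for this variable can be checked directly. A small sketch, writing the variable the text calls Y as psi; everything follows from the definition alone:

```python
import cmath

# psi(z) = (z - 1) / (z + 1), the wave digital frequency variable.
psi = lambda z: (z - 1) / (z + 1)

# Replacing psi by its negative amounts to replacing z by 1/z:
z = cmath.exp(0.7j)                      # arbitrary point on the unit circle
print(abs(psi(1 / z) + psi(z)) < 1e-12)  # True: psi(1/z) = -psi(z)

# At real frequencies (z on the unit circle) psi is purely imaginary,
# so para-conjugation reduces to ordinary complex conjugation there:
print(abs(psi(z).real) < 1e-12)          # True
```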

**Nebeker:**

And what exactly is the connection between what you did and the subsequent development of wavelets?

**Fettweis:**

Orthogonal wavelets, and the possibility of using these concepts in wavelets, was present in my early paper on filter banks which were based on wave digital ideas.

**Nebeker:**

And was that paper itself influential?

**Fettweis:**

No, it was not influential.

**Nebeker:**

But the connection was there.

**Fettweis:**

Yes, it was there, but it was not influential, in that the wavelets came completely independently.

**Nebeker:**

I see.

**Fettweis:**

But it antedates wavelets. In a sense, all filter banks, and in that sense, even the old carrier telephone systems, which were based on filter bank ideas, largely predate wavelets. They were not designed for 100 percent reconstruction. One did not start with just one signal which one decomposed and then reconstructed. There were separate signals, but the individual conversations were then gathered together on one line, in different frequency positions, and separated again at the other end. So you do it just the other way around. Since a wavelet is essentially the impulse response of a filter, decomposing a signal into many wavelets is like passing it through such a stack of filters. Therefore wavelet decomposition is very closely related to the old technology that was common in carrier telephony.
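
The perfect-reconstruction property of a para-unitary filter bank can be shown with the simplest possible example. The sketch below uses the two-tap Haar filter pair, a standard textbook case rather than anything from Fettweis's own papers:

```python
import numpy as np

# Two-channel orthogonal (para-unitary) filter bank with Haar filters:
# analysis splits the signal into low- and high-band halves, and
# synthesis reconstructs it exactly ("100 percent reconstruction").
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass analysis filter
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass analysis filter

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
pairs = x.reshape(-1, 2)
low = pairs @ h0           # downsampled low band
high = pairs @ h1          # downsampled high band

# synthesis: each sample pair is rebuilt from one low and one high coefficient
y = (np.outer(low, h0) + np.outer(high, h1)).ravel()
print(np.allclose(y, x))   # True: perfect reconstruction
```

Because the analysis filters form an orthonormal pair, synthesis with the same filters restores the signal exactly, which is the losslessness-based recovery described above.
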

We were interested in music coding because it was an interesting application for wave digital filters and because music coding is not oriented towards voice production. It’s hearing oriented. Our ears are based on the basilar membrane, which is essentially a filter bank. So, it’s really a wavelet transformer. We all carry with us two wavelet transformers. All animals that have two ears, have two wavelet transformers. These make it possible to divide the spectrum up into many different channels and to process the different channels separately.

Thus, hearing is a strongly filter-based phenomenon. Music is not produced by our vocal tract. It’s produced differently, but we hear it and therefore the filtering aspect is important. If you want to have efficient coding, filtering makes sense, thus also wave digital filtering. For voice production, it’s not that clear, except that you can use it for modeling. Manfred Schroeder has been interested in wave digital filters from that point of view, for vocal tract modeling. Again, of course, this is based on partial differential equations or modeling by cascaded sections composed of discrete components.

As I said in the beginning, in answering your question about the Signal Processing Society, I haven’t followed the society as broadly as I have followed the CAS Society.

**Nebeker:**

Thank you very much for the interview.