Oral-History: Jay Lathrop
The Willis Adcock Oral History and Robert N. Noyce Oral History provide further information on integrated circuit development at Texas Instruments.
About Jay Lathrop
Dr. Lathrop was born in 1927 and grew up in Orono, Maine. He attended first the University of Maine and then the Massachusetts Institute of Technology, from which he received his BS, MS, and Ph.D. degrees in physics. He received his doctorate in 1952 and then joined the National Bureau of Standards/Harry Diamond Laboratories, where he worked on the microminiaturization of solid-state circuits for the U.S. Department of Defense. For this work he was awarded the Department of the Army Meritorious Civilian Service Award in 1959. In 1958 Lathrop joined Texas Instruments, where he continued his work on the miniaturization of integrated circuits. He joined the faculty of Clemson University in 1968, where he is a professor of electrical engineering. During the 1970s he oversaw students' research into the characteristics of solar cells and co-invented the solar chemical converter system of energy conversion. Presently Lathrop and the EE department at Clemson University are working on joint projects with the Semiconductor Research Corporation. He is a Fellow of the IEEE and a consultant on solar energy and reliability.
The interview covers Lathrop's education and career, focusing mainly on his work with Diamond Labs, Texas Instruments, and Clemson University. Lathrop concentrates on the continuity of his work, explaining how Defense Department uses for miniaturization ultimately led to work on civilian projects like business machines for Texas Instruments and solar cells for solar energy. He discusses his contributions to the microminiaturization field and explains the importance of integrated circuit development to the electrical engineering field. Lathrop praises Clemson University's working relationship with industry, and advocates teaching future electrical engineers to be innovators and not just problem-solvers. The interview concludes with Lathrop's own attempts to encourage innovation in his students.
About the Interview
JAY W. LATHROP: An Interview Conducted by David Morton, Center for the History of Electrical Engineering, 1 May 1996
Interview #265 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc.
Copyright Statement
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Staff Director of the IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA, or email firstname.lastname@example.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Jay W. Lathrop, an oral history conducted in 1996 by David Morton, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.
Interview: Jay W. Lathrop
Interviewer: David Morton
Date: 1 May 1996
Place: West Union, S.C.
Family, childhood, and education
Dr. Lathrop, why don't we start with a little biographical information? Tell me where and when you were born, and tell me about your education.
I was born on September 6, 1927, in Bangor, Maine. I moved around the country a little bit, but most of my formative years, from the second grade on, were spent in the state of Maine. When it came time for college, I wanted to go to MIT. Unfortunately, I couldn't get in to MIT because analytical geometry, which was required at the time by MIT, wasn't taught in the small town that I grew up in, Orono, Maine. And so I entered the University of Maine for two semesters and then transferred to MIT and stayed there for all three of my degrees.
Before you went to MIT, how were you sure you wanted to be an engineer? Did you have experiences in childhood or in high school that directed you that way?
Well, actually I didn't really aspire to be an engineer as a youngster. I was more inclined towards science. I loved mathematics and I loved problem solving, that sort of thing. One day I was up in the attic of the house that we rented there in Orono, and looking around, I found some old books. It turned out that the person who had lived there before us had gone to MIT and had left these old books. They were chemistry and algebra books and various things of that sort, and I was fascinated by them. I said, "Boy, that's where I want to go, and that's the sort of thing I would like to study."
My father hoped I would enter a biological field, because he was an entomologist. So when I went to MIT I enrolled in biophysics. In my first semester there I got A's, or H's as they were called in those days for honors, in everything except the biology course, in which I got an F. I decided then that I didn't want anything more to do with biophysics! So I dropped the biology part and stayed with physics. All three of my degrees from MIT are in physics.
National Bureau of Standards
When I got my PhD in 1952, I went to the National Bureau of Standards in Washington, D.C. This was because one of the fellows working in the lab was there on sabbatical from NBS. He convinced me that NBS would be a good place to go, a place I would enjoy. His name was Judd French. The last I heard he was director of the National Bureau of Standards and had been a laboratory director there for a number of years.
Anyway, I went to work there, in what was called the Tube Lab, which had been built during the Second World War to develop fuzes for bombs, rockets, and mortars. My thesis at MIT had involved microwave gas discharges, so I worked on developing gas trigger tubes for mortars at the National Bureau of Standards. We had a proximity mortar fuze in which the front part of the shell was insulated from the back part and a gas trigger tube connected across the gap. As the shell approached the earth, a static charge was induced across the gap. When it got high enough the trigger tube would break down, discharging a capacitor into a detonator and blowing up the shell, hopefully before it hit the ground. My job was to develop a sensitive and yet reliable gas trigger tube.
Diamond Ordnance Fuze Laboratories
After a year or so the weapons aspects of the National Bureau of Standards were split off to become the Army's Diamond Ordnance Fuze Laboratories, or DOFL, under the Department of Defense. About this time, it was recognized that fuzing electronics would need to be substantially miniaturized if the required degree of weapon sophistication was to be achieved. At this point miniaturized glass envelope tubes, both gas and vacuum, were used in all types of fuzing. These were too big, drew too much power, were subject to breakage, and they wore out. We began looking at transistors as an alternative. At that time transistors were just beginning to be commercially available and it was difficult to buy them with the right characteristics, so we decided to make our own. I was put in charge of establishing a fabrication facility in which we could develop transistors optimized for fuze applications. We started with raw germanium, purified it by zone refining, grew the crystals, sliced the crystals, formed the junctions by diffusion and alloying, attached leads and packaged the device. Because no commercial fabrication equipment was available, we had to build our own. It was a wonderful opportunity to learn about semiconductors, something I had not had the opportunity to learn at MIT because it hadn't been part of the regular curriculum. The result was a pretty fair state of the art germanium mesa transistor, but fabricated with a twist.
We used photoresist techniques to define the contacts. My colleague Jim Nall had heard that photoresist was used to etch printed circuit boards and since we were making only small quantities of devices he thought it would be worth a try rather than trying to make metal evaporation masks. After defining the alloyed emitter and base contacts using direct photoresist etching techniques, we again used photoresist etching to define the shape of the mesa. It was not possible in those days to obtain high-resolution photographic negatives, so in order to achieve patterns with the desired accuracy we exposed each individual transistor die by shining light through a reticle placed in the eyepiece of a metallurgical microscope. This had the advantage that we could align the pattern using the microscope in its normal mode, but with a red light source. Once the pattern was aligned, high-intensity light was shined through the eyepiece reticle to expose the resist. I had always thought this was the first time that resist had been used to define device geometry. However, my long-time friend, University of Minnesota Professor Ray Warner, recently pointed out to me that transistor co-inventor William Shockley had experimented with it at Bell Labs in 1954, at least a year before we worked with it at DOFL. Unless Shockley also beat us in naming the process, we were the first to call it photolithography because the word had a good ring to it, even though we realized it was not lithography at all, but etching. That was in 1955, more than 40 years ago, and it is still prominent in the lexicon of semiconductor technology...and still used incorrectly!
Now let me go back to MIT for just a minute. One of my classmates there was a fellow by the name of Bob Noyce. He and I were not exactly chums, but we took the same courses in physical electronics, which was as close to solid state as they came in those days, and we spoke with each other, went to the same seminars and parties at professors' houses, that sort of thing. So we were friendly, and of course he later became one of the co-inventors of the integrated circuit. So I was very familiar with him.
2D flip-flop circuit demonstration and press coverage
OK, back to the Diamond Ordnance Fuze Labs. Now that we had fabricated decent transistors, the next step was to incorporate them in a miniature circuit. The approach that we came up with at DOFL was called 2D, for two-dimensional. Jim Nall, my colleague in the Tube Lab, and I started with a ceramic substrate about the size of a postage stamp on which silk-screened resistors and conductors had been formed for a flip-flop circuit. We would then attach unencapsulated chip capacitors on the surface and insert our unencapsulated transistors in holes that had been cut through the substrate. Epoxy was used to cement the transistors in place, but the trick was to connect their contacts to the silk-screened wiring on the ceramic substrate without using wires. To do this, we used our friend photoresist for two purposes. One purpose was to provide an insulating film over the surface of the germanium transistor that could be selectively opened over the emitter and base contacts. The second purpose was to define the connecting lead geometry from an evaporated aluminum film subsequently deposited over the circuit. Although it is hard to describe in words, it is really conceptually quite simple and I have some pictures and documentation here of the finished product, a truly 2D flip-flop.
The first time that we presented what we had done was Friday, November 1st, 1957, at the IEEE Electron Devices Meeting in Washington. The paper had been received too late to be included in the regular program and it was delivered as a late paper. It described the approach that we had taken and I have a copy here that you can look at. The concept was picked up very quickly by the press, and an article, "Transistors made by Photography," appeared in Electronic Week on November 11, 1957. The Washington Post had an article, "Printed Transistor Developed Here," on November 23, 1957, complete with our pictures. Electronic News had a story on the 25th of November, 1957, "Transistors In Printeds Seen Miniaturization Step." What captured the imagination of the "popular" press was the photoresist aspect of the fabrication, the idea that it was possible to print transistors the way photographic pictures were printed — not exactly a true statement at that point, but that's what led to all the press coverage.
At that point we made a little display to demonstrate the device. It was a shift register that counted down from an oscillator with our 2D postage stamp-sized flip-flop circuit being the final stage. It was battery-powered and in a clear plastic box. The output was a miniaturized incandescent lamp that blinked on and off. The oscillator would oscillate and the shift register would divide the frequency down until it was at a frequency that would cause the lamp to slowly blink. There was a switch that allowed the 2D flip-flop to be connected or disconnected. When it was connected, the lamp blinked half as fast as when it was disconnected. It made a nice demonstration because with all the components and wiring visible there was no chance for trickery. You could put it on the table and explain what we had done; how we'd miniaturized this circuit down to the size of a postage stamp and the person we were talking to could flip the switch and prove that it worked. And people would say, "That's fantastic!"
And so Jim Nall and I presented a paper on the technique and demonstrated it with the little plastic box at the World's Fair in Brussels in 1958. Bill Shockley, co-inventor of the transistor, was in attendance and he was as impressed by our demonstration as anyone. In fact he invited Jim Nall and myself to become members of the team he was assembling at Shockley Labs. We decided we didn't particularly want to work for Shockley, for one reason or another, and so we declined, but it was nice of him to ask anyway and probably a mistake for us to decline. He had recruited an impressive group of engineers including Bob Noyce. After Bob received his PhD in physics from MIT he had joined Philco where he was engaged in developing surface barrier transistors. As a matter of fact, we had met several times since we left MIT because DOFL was exploring the use of surface barrier transistors for fuzes and he showed us around Philco's facilities.
Jim and I demonstrated our little flip flop around Europe that spring. We showed it at Siemens in Germany and places of that sort on the continent, and while everyone admired it as a curiosity they universally could see no commercial application for it. It was obviously useful if one were building mortar fuzes, but no one could see any reason to miniaturize commercial electronics. We showed it to G.W.A. Dummer at the Royal Radar Establishment in England. He was England's leading microelectronics expert and was very interested in our work, but he wanted to make the entire circuit from a single piece of crystal rather than an assembly of individual components as we had. Jim and I came away feeling that sort of thinking was science fiction.
Although I have discussed only the work that Jim Nall and I did at DOFL on miniaturization, there were actually five scientists involved in making the first 2D circuits. While Jim and I handled semiconductor aspects, fabricating the transistors and diodes and incorporating them in the circuit, Edith Olson prepared the ceramic substrates, Norm Doctor silk screened the resistors and capacitors, and Tom Prugh did the circuit design and layout. On October 12, 1959, the five of us were awarded the Army's highest civilian award for this pioneering miniaturization work, the Meritorious Civilian Service Award. In addition to medals, the award included a $25,000 stipend, which in 1997 dollars would be $135,000. I used my portion to buy a new car, a Nash Rambler station wagon, and still had some left over. The award ceremony took place in the Pentagon courtyard and was quite impressive. A band played and a color guard marched. It was indeed a proud moment for a 32-year-old physicist.
Texas Instruments; single chip circuit
In the summer of 1958 I left government service and the Army's Diamond Ordnance Fuze Laboratories to work for industry at Texas Instruments in Dallas. That was where I met Jack Kilby. Jack had arrived in May, a few months before I did. The same person that hired Jack hired me, Willis Adcock. (Willis would go on to become a vice president at TI and later a professor at the University of Texas.) TI didn't know exactly what they wanted to do in terms of miniaturization, but they knew they wanted to do something because of the large military market that was there. At that time, as you probably know, for summer vacation TI shut down for the first two weeks in July. However, if you hadn't been a full-time employee for a year, you didn't get a vacation. So Jack was left there for two weeks with no guidance and hardly anyone else in the plant. To keep him busy, Willis told him to think about the problem of miniaturization and see what he could come up with. So he thought about it awhile, and decided that if he could make all the components of a circuit out of the same semiconductor material that transistors were made out of, then TI would be able to make complete circuits as easily as they made transistors. The result would be a small single chip circuit, a concept along the lines Dummer had discussed with Jim Nall and me in England and that we had dismissed as blue sky. The difference was that Jack figured out how to do it!
First he breadboarded some simple circuits using components made entirely from germanium. Rectangular bars with contacts on the ends were used for resistors, the pn junctions of diodes for capacitors, and of course unencapsulated transistors and diodes for the active components. The only components he wasn't able to conceive of being made out of semiconductors were inductors, so he stuck to digital circuits where they were not required. He interconnected the components according to conventional circuit diagrams by thermocompression-bonded wires, and lo and behold, the circuits actually functioned. The next step was to make a circuit from a single piece of germanium instead of individual pieces, and this he demonstrated on the 12th of September 1958. At that time I happened to be working on some oxide removal techniques for silicon transistors, just a few doors away from his place, and Willis called me down to see this thing. I looked at it and was amazed like everybody else. Now it was my turn to say, "That's fantastic!" The monolithic chip was no longer science fiction...it had arrived. That was my first view of an integrated circuit. TI immediately recognized the potential of "Solid Circuits," as we called them, and shortly after the demonstration on the 12th of September, I was the first engineer to be assigned to his project and I worked with him for the next ten years while I was there.
There was a problem, however, in fabricating his initial concept, and it lay in the air isolation approach to separating the various components on the chip. My first responsibility was to develop the technology needed to cut intricate shapes out of germanium and silicon to achieve this air isolation. After shaping, contacts were then interconnected using bonded wires. And so, while Jack's integrated circuit started out as a single piece of silicon or germanium, it could end up as individual pieces interconnected by jumper wires here and there.
Integrated circuit patents
Jack worked on a patent all that fall and in the process, came to realize that "flying wires" were fundamentally wrong. This resulted in his adding in the description that instead of using gold wires for interconnections, a film of oxide could be deposited on the semiconductor and a material such as gold laid down on the oxide to make the necessary connections. The patent was filed on February 6, 1959.
Now, that one word "gold" was the problem that he ran into on the patent. If he had only said, "such as aluminum" instead of "such as gold," I believe he would have been given sole credit for inventing the integrated circuit. As it was, Bob Noyce, who by then had left Shockley Transistor Laboratories to become one of the founders of Fairchild Semiconductor, had conceived of a way to do the same thing using Fairchild's advanced planar technology. Whereas Kilby had demonstrated the concept earlier using air isolation, Noyce used the junction isolation of the planar process plus the insulating layer of in situ oxide on the surface, essentially the same technique used today. Although he was the second person to come up with the concept of the integrated circuit, and his application had been filed after Kilby's, Noyce was declared by the patent office to be the inventor because his application used vacuum deposited aluminum leads over native silicon oxide insulation. Aluminum sticks to oxide; gold doesn't. Later, when TI and Fairchild were in great conflict over this patent situation, my group and I were given the job of trying to get gold to stick to oxide in hopes of invalidating the Fairchild claim, but we never could. Maybe it can be done today with modern techniques, I don't know, but in those days, with the techniques that we had available, we could never get gold to adhere to oxide, whereas aluminum behaved beautifully. So it's a shame that those particular words were chosen to describe the concept of vacuum depositing interconnections over insulating oxide. Litigation between Fairchild and TI went on for over 11 years before the last appeal to the Supreme Court and Noyce was officially declared the inventor. As far as the scientific or business community was concerned, however, Noyce and Kilby are considered co-inventors, Kilby for the concept of integration and Noyce for its practical implementation. 
TI and Fairchild had early on agreed to cross-license the patents, and scientists recognized the importance of both inventors' contributions. The patent litigation was a fascinating chapter in the integrated circuit story, but the final resolution was of no particular consequence.
Integrated circuit production and marketing
Well, during the next ten years we worked hard on making integrated circuits and like everyone else, used Noyce's planar process in silicon. One of the most difficult things in those early days was convincing potential customers that ICs were good for anything. I can remember making presentation after presentation and participating in innumerable panel discussions, trying to defend the integrated circuit approach against conventional assembly. Most people felt that ICs would be too expensive and unreliable because nobody could make them very well. While these arguments were correct at the time, few were willing to take a longer-range view. It took perhaps a decade before they were accepted as an inexpensive way to make electronic circuits and by far the most reliable way. The whole program, of course, was driven in the early days by miniaturization and only later on, by commercial applications. Every new technology is always faced with the dichotomy that the price is too high to support a large volume, but the price can't be reduced until there is a large volume—the old chicken and egg proposition. A market "bridge" to high volume production is needed. For integrated circuits that bridge was military miniaturization. In the long run, however, miniaturization by itself became almost inconsequential.
What about these presentations and publications that you described as defending the integrated circuit? Ultimately, how important were they in terms of acceptance in the trade? It sounds like in the early days, as you said, it was necessary to promote the technology. When did that change, and why do you think it suddenly became obvious that integrated circuits were the way to go?
I'm not sure there was a sudden conversion on the road to Damascus, but rather a gradual awareness. I have here some articles that might lead us in that direction. (By the way, here's a picture of Jack and myself in the early days, and here's a picture of Jack and myself at the thirtieth anniversary of the invention on September 12, 1988.) Here is an article that appeared in Electronics in 1960, called "Semiconductor Networks for Micro-Electronics," and it describes the manufacturing approaches at that time. Custom designed ICs cost two, three, five hundred dollars or more apiece. But as we produced more we gradually learned how to make them better and the price dropped. Here's another article in the Proceedings of the IEEE of December 1964 on semiconductor network technology. In fact, the whole issue is devoted to microminiaturization techniques. Very quickly it became something that the technical people—engineers in the industry—were aware of and began looking into. The conferences and publications of the IEEE played a big role in publicizing the technology. We were able to provide samples to interested companies. After a few demonstration projects for the military, particularly the Air Force, they began to award production contracts, fabrication facilities were built up, equipment suppliers became interested, better manufacturing techniques resulted and catalogs of off-the-shelf components began to appear. It was an evolutionary process.
I get the sense that customers weren't all that concerned with miniaturization per se.
Military customers were, but others weren't really interested in it. They liked the power reduction. But basically it was the lack of having to assemble circuits by labor-intensive techniques that sold the concept for commercial applications. Gates, shift registers, adders, and half-adders could be purchased ready-made off the shelf.
Would it have been feasible or even reasonable for TI or other manufacturers trying to get people interested in using and buying integrated circuits, to stop worrying about miniaturization completely and start concentrating on the integration, that is, making the circuits on the chip?
That occurred with time, but in the early days when ICs cost several hundred dollars apiece it wasn't possible to justify that kind of cost on anything other than miniaturization. Miniaturization provided the "bridge" market to high volume commercial applications. The only reason to pay such a premium over hand assembly was if the objective could not be achieved in any other fashion: mortar fuzing or the Apollo program, that sort of thing. In cases like that it had to be miniaturized. Without the leverage from miniaturization, the integrated circuit ball would never have started rolling.
I guess I'm thinking in terms of sort of non-military applications, where size and power aren't so critical.
Costs had to come down in order for the industry to take off. If the cost is low, sure, everybody wants it. But the problem in the early days was that we couldn't make it. I mean, we were having yields of .0001 percent. It was ridiculous. We'd have to start a hundred wafers to get one integrated circuit because we had not yet developed the techniques to make them. That's why the cost was so high. We could sell all we could make, even at exceptionally high prices; it was just a question of making enough to satisfy our demonstration contracts at that point. We had a contract for an Air Force computer, our first major demonstration project. After that we had a Minuteman Missile contract with North American Aviation. Demonstration contracts such as these enabled TI to develop the necessary technology and production facilities. At that point you couldn't buy equipment to make ICs. We made our own diffusion furnaces, deposition equipment, mask making facilities, photoresist exposure stations, etc. Everything had to be made from scratch. It was difficult and, as the technology was still being developed, the equipment didn't work well. But as soon as things got a little better, they got a lot better, and yields became respectable.
In those early years, were TI people sort of sharing information? I mean, how did you guys find out about what the other guys were doing and vice versa? You've mentioned at least talking informally to some of these people, but were there regular exchanges?
You mean with competitors?
Yes, was it really fierce competition?
Oh, yes. There was terrific, fierce competition. TI, like everyone else in the industry, had a license with Bell Telephone Laboratories regarding transistor technology, and that was very helpful. One of the big breakthroughs, as far as we were concerned, in the very early days was the ability to reliably etch oxide. But as far as the exchange of information between competitors, that was a no-no, and we went to great lengths to try to avoid it. We worried at great length that employees, who were privy to trade secrets, were going to leave and join a competitor.
Did that happen a lot?
Not as much in those days as it does now. I suspect that now, particularly as a result of downsizing, people move around a great deal more. In those days, in the 1960s, there was a tremendous amount of loyalty to the company you worked for. People who left were looked on as traitors and made to feel disloyal. The technology was new and exciting, we were doing things that had never been done before and as a result new organizational entities were constantly being formed and product lines expanded, so there was ample opportunity for everyone within the company. Since it was early in the game there weren't an awful lot of spin-offs either. TI had never had a layoff until after I left in 1968, and after that, things went downhill as far as loyalty was concerned.
TI circuit production; LSI, VLSI, and ULSI
OK, one more thing, before we terminate the TI portion of the interview. As the technology developed we became able to manufacture appreciable quantities of integrated circuits and the yields went up to the point where we were getting ten or twenty good integrated circuits out of say one hundred on a slice. The next logical step was to increase the level of integration so as to make a more complex chip. This first attempt at higher-level logic was termed "Large Scale Integration" or LSI. Very Large Scale Integration (VLSI) followed LSI, and Ultra Large Scale Integration (ULSI) followed that, and who knows what it is called today as we approach a billion transistors on a chip. Instead of just making gates, for example, we wanted to make flip-flops that typically require two interconnected gates. Well, at that stage in the technology in the 1960s, if you tried to make two interconnected gates instead of just one, the yields went way down because of random defects. And if you attempted even more complex circuitry, yields quickly dropped back to the impossible levels we had during the start up phase. One of the concepts to increase the level of circuit complexity was the so-called discretionary wiring approach, in which a slice that had been fabricated through the gate level was probed to identify the good gates and only these were interconnected through an additional layer of metallization. A computer program generated an interconnection mask based on the probe results that could shape the final level of metallization to form the more complex structure of a half-adder, for example. Then the entire slice, which in addition to the half-adder would contain additional unused gates, some good and some bad, would be packaged. Because the location of the good circuits would be in different places for each slice, a unique interconnection mask would be required for each, yet they would all function the same electrically.
I was given responsibility for this technology and that may have hastened my leaving, because it was an impossibly difficult thing to do economically. It could be built, and we made a number of higher-level circuits, but it was a dinosaur from the beginning. We generated methods of automatically probing and testing slices, developed algorithms and programs that used this data to generate interconnection patterns, and made a computer-driven cathode ray tube system to produce the second-level metallization masks, all things that actually had applications beyond just the discretionary wiring project. But within a couple of years, the MOS device made this all obsolete. We're going to be talking about innovation later, but the discretionary wiring approach, while unique, was not innovation; it was straightforward problem-solving. It was not an economical way to fabricate complex integrated circuits. With the advent of much smaller and lower-powered MOS devices it was possible to fabricate the complex circuit as an entity with satisfactory yields. This came about for two reasons: first, the size of a complex MOS circuit was much smaller than the comparable bipolar circuit, so that reasonable yields could be maintained even with the same number of random defects per slice. And secondly, as technology progressed, producers learned how to reduce the number of random defects per slice, permitting ever larger and more complex circuits to be produced.
And with that, in 1968 after an exciting ten years I left TI. After spells in government and industry, I opted for a final career in academia. I went to Clemson University, where I was in a completely different environment for the next twenty years. When I arrived, the Electrical Engineering Department had no facilities for fabricating integrated circuits. In fact, when I got there vacuum tube circuits were still being taught -- there wasn't even any transistor course work, let alone integrated circuits. Since I had forgotten all the vacuum tube circuitry I ever knew, I made a complete change in the curriculum, put in a solid-state program, and built up a small facility for doing some experimentation—not building integrated circuits, but we got into solar-cell work that I will describe later. We got into a lot of reliability work, with regard to both integrated circuits and solar cells. The Semiconductor Research Corporation (SRC) of Research Triangle, NC, sponsored our IC reliability work and the Jet Propulsion Lab (JPL) of Pasadena, CA sponsored our solar cell reliability work. Clemson University now has a class 100 clean room, a state-of-the-art electron microscope lab, chemical vapor and vacuum deposition equipment, and so on. In order to qualify for the SRC work it was necessary to upgrade the electron microscope facility to include the latest scanning electron microscope equipment. So my interests sort of stayed the same, but it wasn't a commercial venture at all, it was something else.
I'm interested in your transition to academia. Was it a sort of happy byproduct that Clemson hired you and you were able to help them make the transition in the curriculum to solid-state, or were they actively seeking someone out to help them do that?
My father had grown up in Orangeburg, SC and he attended Clemson University, then called Clemson College, for his undergraduate degree, graduating in the class of 1913. He always spoke very highly of Clemson. He had been head of the department of entomology at the University of Maine at Orono, ME, and when he retired he and my mother moved to Asheville, NC. In 1968 he and my mother were no longer in the best of health and, being an only son, I felt I needed to be closer to them. There were no device manufacturers in the southeast at that time and Clemson was the closest large university to Asheville. So I sought out Clemson and said, "Hey, look, here's my background and here's what I can do for you." And they said, "Gee, that sounds good, come and join us." It was that sort of a thing. No, I don't think they were actively looking for a professor with a solid-state background. At first some of the faculty, who wanted to stay with vacuum tubes, had a problem with solid state. They felt that vacuum tubes would continue to be widely used and that it was necessary to provide students with a complete understanding of their fundamentals. Within a year, however, everyone was on board and transistor circuits had completely replaced vacuum tube circuits in the curriculum.
I mentioned the Clemson solar cell work.
Interestingly enough, I got involved in that through Jack Kilby. He left Texas Instruments shortly after I did in 1968. Incidentally, that was the year that Bob Noyce and Gordon Moore left Fairchild to start Intel. Jack wanted to set up shop as an independent inventor. He opened an office in Dallas, in a shopping center not far from his house, and began to invent things. He asked me to come down during the summer and work with him on some of these inventions and I did this for a number of years, but I'm not sure how much I really contributed to his work. We worked on a number of different ideas, did some very interesting work, and received a number of patents, but nothing of the same magnitude as the integrated circuit. His consulting activities with industry probably were considerably more lucrative than his inventions.
One of the ideas he came up with was a solar cell made out of spheres of silicon. These would be spheres the size of BBs—very small spheres that you could make somehow by dropping through a shot tower. At least that was the concept. Molten silicon falling from a shot tower should coalesce in the shape of small spheres just the way lead shot was made in Revolutionary War days. After a subsequent diffusion, each one of these spheres would become a miniature solar cell that could be integrated in a mosaic and connected in parallel. He convinced TI that they should set up a group to do this and, although no longer a TI employee, he directed the project. I came down and spent a summer with them. We were able to come up with an electrolytic system and a system to interconnect the little spheres in a glass matrix. One of my contributions was using hydrogen bromide as the electrolyte, since it would dissociate easily at low voltages. Each sphere was a pn junction diode, and by interconnecting them in parallel all we had was the voltage of a single diode to break down this electrolyte. Water requires 1.23 volts to dissociate, but the voltage generated by a single junction exposed to sunlight would dissociate HBr. TI received a multi-million dollar contract from the Department of Energy to develop this system.
Solar cell reliability; 1970s energy crisis
Was this work a result of the energy crunch in the early 1970s?
Yes, and at that time there was plenty of money available for solar cell development. TI's approach was only one of the ways to fabricate solar cells. There were crystalline cells, polycrystalline cells, thin film cells, and then there were these little spherical cells. TI was actually able to demonstrate a working system using the concept. Eventually a pilot plant was even established in Lubbock, TX, but it never was economically feasible and the project has now been terminated. As opposed to the discretionary wiring approach to LSI, which was problem solving rather than innovation, making solar cells from spheres was truly innovative, but unfortunately it went the same way in the end. Just because a solution is innovative doesn't make it successful. Nor is a straightforward solution always doomed. Factors other than technical ones, like economics, will determine the ultimate success or failure of a project.
When did people realize that a spherical solar cell matrix wasn't going to be economically feasible: at the middle, beginning, or end?
Well, that's a very good point. Various people recognized it at different times. On any project, if you're a pessimist, you're convinced from the start that it won't work, while if you're an optimist, you know for sure that it will work if you just try hard enough. And people can move back and forth from one camp to the other over time. I usually tend to be pretty pessimistic and often play the role of devil's advocate. I like to feel I am only being pragmatic. Jack, on the other hand, is an eternal optimist. Without his faith in the future of the IC, it never would have progressed at TI as it did. In the days of our darkest despair, when the yields approached zero, his optimism served as the light at the end of the tunnel for us. Through sheer optimism he kept the team functioning to the point where volume production became possible. Had management been pessimistic, the project probably would have been terminated early on. Regarding the solar cell system, I became quite pessimistic about it. My main problem was visualizing a house with a roof covered with hydrogen bromide. Jack, on the other hand, was convinced of the project's ultimate success. Because of his reputation with the company and his track record on ICs, management was hesitant to cut it off. So it probably lasted somewhat longer than it should have. Actually, we need to bring Dilbert in here to answer the question of why some projects last and others don't.
Would you speculate that Kilby really had some sort of insight that other people didn't, or was he just optimistic: you know, you win some, you lose some, he really didn't know.
I'm sure it was both. Jack, of course, had a vested personal interest in the concept, after all he was the inventor. If it succeeded, it would be his reputation that was enhanced and if it failed, his reputation would be hurt. But at the same time he has a unique ability to see the big picture. He is not one to be held hostage by little details. Now, I personally get bogged down in little details. I always seem to be so caught up in the details of a project that I find it hard to be able to step back and see the big picture. I think that even in the early days of the IC, Jack could visualize the possibility of applying the technology to something other than military systems. He realized the potential for reliability and cost improvement that would result in commercial ventures. He could certainly see that the reliability of ICs would ultimately far surpass anything that could be assembled by hand, long before others could. He could foresee that the batch fabrication process would ultimately result in low-cost devices long before anyone else. He is a visionary as well as an inventor and, being nearly seven feet tall, he has a commanding presence. That's why I asked you early on if you had interviewed him, because he's certainly someone that's worth interviewing.
Did you bring your work on solar cells back to Clemson with you?
Not originally, but after spending that summer at TI working on the spherical cells I became fascinated with the potential for solar energy. We had a few small programs at Clemson with regard to the reliability of integrated circuits. That was during the energy crisis of the 1970s and, as was mentioned earlier, there was a big push to develop solar cells. I raised the question, how long will solar cells last? Apparently no one had considered the possibility of solar cells either wearing out or catastrophically failing. We approached the Jet Propulsion Laboratory, which was the lead organization for solar cell development, and they said, "Hey, we haven't thought about it, either, why don't you work up a proposal to look into it?"
So we initiated the Clemson solar cell reliability program that amounted to over a million dollars worth of research spread over the next decade. It gave us a wonderful opportunity to interact with the industry. Clemson became nationally and internationally recognized for its expertise relating to solar cell reliability. We obtained sample cells from all the manufacturers and subjected them to very sophisticated and rigorous stress testing. We worked closely with the producers, visiting their factories and examining their fabrication procedures in order to improve their cell reliability. We developed laboratory methods to accelerate failure modes, as well as maintaining outdoor test racks on the roof of our engineering building. We developed some of the most accurate test methods in the industry to measure solar cell characteristics in order to tell if their power output changed by one or two percent—not an easy thing to do. In order to do this, we had to fabricate stable light sources with particular spectral characteristics; we had to monitor and control cell temperature during illumination with the laboratory equivalent of noonday sunlight. Really, it was a very, very complicated situation, but it was a wonderful vehicle for graduate students to become involved in the many different aspects of a real world technical problem. They were able to write theses and publish papers on the subject, attend scientific meetings where they would present papers, and visit manufacturers to view semiconductor production methods first hand. Reliability was an investigational area that enabled a university like Clemson, which had no fabrication facilities of its own at the time, to become deeply involved with semiconductor devices. We have been discussing innovation mainly in regard to inventions, but innovation can apply to activities as well.
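The accelerated failure-mode testing described here is conventionally modeled with the Arrhenius relation, under which thermally activated failure mechanisms speed up exponentially with temperature. A minimal sketch follows; the 0.7 eV activation energy and the two temperatures are illustrative assumptions, not Clemson's actual test conditions:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def arrhenius_acceleration(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor between use and stress temperatures under the
    Arrhenius model: AF = exp((Ea/k) * (1/T_use - 1/T_stress)),
    with temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical example: a 0.7 eV mechanism, cells at 45 C in the field,
# stressed at 85 C in the lab.
af = arrhenius_acceleration(0.7, 45.0, 85.0)
print(f"acceleration factor: {af:.1f}")
```

With these assumed numbers each hour in the 85 C chamber stands in for roughly a day and a half in the field, which is what makes lifetime questions answerable inside a research program's time frame.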
Reliability research was an innovative way for students to become involved in solid-state device fabrication without actually fabricating devices. As a result of the JPL solar program, I made many presentations to the photovoltaic community, including a number overseas: Japan, England, China, India, and Spain. Government-sponsored solar cell research is pretty minimal now that oil prices are low, but before the Clemson program ended we were able to make a number of significant contributions with regard to solar cell reliability.
So who was primarily interested in solar cell reliability? Was that the military or space?
No, no. Our program was not part of the space program. Space has some unique reliability aspects which require different test procedures than for terrestrial cells. Although we did experiment with a few space cells, primarily we evaluated terrestrial cells. These were cells that a home owner would place on a house roof, or that a utility might use to supplement a generating plant, or that might supply power to a village in a third world country.
It's an interesting technology partly because it seems to have ups and downs that follow price fluctuations of the energy supply. And even now, you see some commercialization efforts that seem to correspond to some new interest.
It's a green technology, one that most people feel comfortable with. But it has some very basic technical disadvantages in addition to the universal economic considerations associated with any source of electricity. One is, of course, that you only get electricity when the sun shines; the sun doesn't shine at night or in cloudy weather. Consequently, like all electricity, it either has to be used immediately or some method of storing it, like a battery, has to be provided. Another disadvantage is that the electricity generated is DC and for most applications it has to be inverted to AC. If it were easy to run everything off DC, you'd be halfway there, and then if an efficient storage medium were available, you'd be the whole way there. For example, water pumping in third-world countries is a fine example of how to use solar energy. Water pumps can be run easily using DC motors and a simple tank provides low cost storage. But to replace the grid with photovoltaic panels on a house roof...that is a much more challenging problem. I'm afraid I'm pretty much of a pessimist in that area. It would be nice to try, but I would want to be in a position where money wasn't all that important before I committed to a stand-alone household system. A photovoltaic system connected to the grid so that surplus electricity can be sold back to the utility and needed electricity purchased is more nearly practical.
Where do you think that technology is ultimately going? Since it's obviously had a very long development that's still ongoing—I hear more and more about exactly the kind of markets you're talking about, the third-world country's small installations rather than what they were talking about in the 1970s here in the United States, individuals putting the things on their houses. But do you think that will return if there's another sharp increase in oil prices?
I think you will see solar energy used off-grid, say vacation homes in the mountains or somewhere where you're five or more miles from the grid. That is a fine application and one that will grow. I believe there will also be a market for new construction roof systems connected to the utility, depending on buy-back legislation. It should be possible to gain some economic advantage by integrating the PV panel with the roof covering. The savings from not requiring shingles should help to pay for some of the expensive electricity generated. You certainly see PV applications now in the third world and that market will continue to grow as more remote parts of the world demand electrical appliances. You see PV in small applications like watches and calculators and soon maybe even for keeping cell phones and GPS charged in remote areas, things of that sort. All these applications amount to many megawatts when added together and serve to provide the needed bridge market to achieve high volume production and lower price. And we mustn't forget another set of remote applications—space—becoming more important with the proliferation of communication satellites. So I think PV is going to grow steadily, but probably unspectacularly for the foreseeable future. Solar cells are looking for an innovative solution to the PV cell construction the way the electric car is looking for an innovative solution to the battery problem, meanwhile both keep plugging along.
What was your sense of the practicality of this at the time? Were there insuperable practical or technical problems? Or was there resistance from the utilities—we talked about the transition to semiconductors at Clemson. Was there anything comparable with the utilities? Were they reluctant to sort of jump in on this, or were they?
Many utilities wanted to get in on it. A number of them had serious programs, usually partially sponsored by the government. SMUD, the Sacramento Municipal Utility District in Sacramento, California, had a big program. Southern California Edison I believe had another. The Southern Company in Alabama had a cooperative program with a thin film PV manufacturer to construct a small plant. They also had a solar thermal plant in Georgia to use the sun's energy to generate electricity in a steam cycle. There didn't seem to be any reluctance on the part of utilities to get involved in PV, particularly if they could get some government money to defray some of the cost. I don't believe they viewed it as actual competition. I think they saw it as a technology they needed to keep abreast of. It's not at all clear, however, how the current climate of deregulation is going to affect utility involvement.
Did you ever have any contact with them? Were they listening to your talks and so forth on reliability? Or were they sort of waiting for manufacturers to offer these things? Were they getting in at the innovation end at all?
Yes. A number of utilities attended the meetings at the Jet Propulsion Laboratory. As I mentioned, the Jet Propulsion Laboratory in Pasadena was the main contractor with DOE for terrestrial photovoltaics. Meetings were held there every few months, and the utilities would always be well represented. The trade group EPRI, the Electric Power Research Institute, also took part.
Role of government and military funding in innovation
Why don't we shift gears a little bit? You're free to add anything else, but I thought we might talk a little bit about the innovation process. One of the things that has come up a number of times is the outside sources of support in one way or another, funding particularly, but in terms of any aspect of this—either your photovoltaic stuff or the integrated circuit work we talked about earlier. The DOE money, or NSF money, wherever it came from, how important was that in stimulating innovation? Would these things have happened, in maybe some other roundabout kind of way if that money hadn't been there, or was it a crucial factor? How do you feel about that?
I feel that government money is essential for development, but largely irrelevant when it comes to innovation. The invention of the integrated circuit illustrates what I mean. The IC was invented at TI without the help of external funding. No one said, "I'll give you money if you will invent the integrated circuit." That's not the way it works. But once it was invented, plenty of money was needed to develop it and the necessary production equipment and manufacturing infrastructure. Money did enter the picture in that the government did present a lucrative market for electronic miniaturization—at the time it was the only market—and it was that market that TI was interested in entering.
Consequently, Willis Adcock assigned his new engineering hire, Jack Kilby, to look into ways that circuits could be made smaller. It was an unstructured assignment given as much to keep Jack busy during the vacation break as it was to actually achieve anything useful. And that was why it succeeded. Had he been assigned to work on a contract having stated objectives and defined goals he would probably have spent the summer diligently trying to produce higher resolution silk-screened ceramics in order to meet a contract milestone for the September report.
Would you consider military sales part of a free market, or is that sort of artificial stimulus in a sense? I guess government money in one form seems to have been essential if only in the sense that early integrated circuit projects were oriented towards miniaturization, which was based on this military market. My question is, would the same thing have happened, is that in a sense a sort of stimulus?
Government money was indeed an indirect stimulus to innovation, but there was nothing artificial about it. It was the height of the Cold War, the US was heavily engaged in Vietnam, and we were in a race to the moon. Also, NASA and the military desperately needed miniaturization. The fact that the military required miniaturization led to the integrated circuit, but it was a real requirement. I don't believe some artificial requirement would have been a stimulus because there would have been no follow-up market. It was a billion-dollar follow-up market that provided the driving force for industry. In the early 1950s, the consumer really didn't care much about miniaturization. TI introduced the world's first miniaturized consumer electronic product—the Regency transistorized portable radio—just before Christmas in 1954, and it was an instant success. It was so small compared with other radios of the time that no one could conceive of anything needing to be smaller. The only market for what we might call microminiaturized electronics was the military, and it provided the stimulus. This was undoubtedly the reason why Bell Labs didn't invent the integrated circuit: they weren't interested in making things small, only reliable, which they were doing through the hermetic encapsulation of transistors. They were attempting to solve a problem in a straightforward fashion. It had not occurred to them that microminiaturization would ultimately have a far greater impact on reliability than hermetic encapsulation.
I'm interested in this issue of whether or not that kind of focused money can be expected to have spin-offs, as a rule, or whether this is a unique sort of thing.
The really significant spin-offs come about as a result of innovation, but they don't usually appear until the innovation has been developed to the point where practicality has been demonstrated. If ten engineers are put on a project and told, "All right, we've got to shrink this circuit's size," chances are they will indeed make it smaller. They will produce a more efficient layout, use smaller components, and wire the interconnections tighter, but that's not innovation, that's problem solving, like Bell Labs' approach to reliability improvement. Really significant innovation, like the integrated circuit, is unique.
Innovation can't be predicted or planned for. There is no place for it on a PERT chart. When you put ten engineers on a project, you want results. You want to see those ten engineers working, working, working. Innovation comes about by and large when people aren't working, when people have time—like Jack did during that July vacation period—to think about things without a lot of pressure. There needs to be time allowed for the subconscious mind to work on the problem. That's one of the dangers of downsizing, for example. As corporations get leaner and meaner people work harder and more efficiently. Every minute is filled with some productive work and there's little time left to innovate.
Comparison of industrial and academic research
Do you think that industry or the university is better for innovation? Which place is more likely to be innovative?
In theory, universities should be better at innovating just because people there have a chance to think about the big picture. They're not as rushed; they can put things into context; they're not trying to develop a product within a certain time. On the other hand, universities often aren't concerned with “real world” problems, nor do they have adequate facilities for studying these problems, unless they have been fortunate enough to win government or industry support. In my opinion, there was no way the integrated circuit could have been developed at a university in 1958, because neither government nor industry had presented them with the problem and because they did not have access to the necessary equipment. I believe the best hope for fostering innovation will occur when universities are brought together with government and industry as partners, but without the requirement for them to invent on schedule.
When you were at Clemson, what kind of relationships did you have with industry? Were you doing things for them or with them? Was there communication?
Working on a government or industry contract at a university is often difficult. Sometimes the sponsor is expecting some specific result to be achieved within a given time. This forces the university into the problem-solving mode, which it is really not very good at for a variety of reasons. At the other extreme is the sponsor who decides to essentially write off the contract to graduate student training. Under these somewhat condescending conditions the university is not considered a partner and, because vital information is often withheld, finds itself working in the dark. The best working relationship that I had at Clemson was with the Semiconductor Research Corporation, the SRC. I don't know whether you're familiar with them.
Semiconductor Research Corporation
What is that?
The SRC is a non-profit corporation headquartered in Research Triangle Park, North Carolina. It was established by semiconductor equipment and device manufacturers and funded by them with some government support. Its objective is to support semiconductor research at universities. Companies assign knowledgeable employees to serve as mentors for each university contract. Our contract at Clemson dealt with the reliability of ICs and we had a mentor who was working on reliability problems at one of the SRC member companies. Other universities worked on packaging, or layout, or processing, or some other aspect of integrated circuits, and each had a mentor. Some mentors would be from one company, some from another company, and so on. Joint industry-university conferences were held to discuss the research work. So that worked well, not 100%, but fairly well. There was a lot of concern among companies that work benefiting one company would not be shared with the others, or that a company would not cooperate fully with a university because they were afraid the information might be passed on to another company.
On the contract we had with the SRC to study IC reliability we were able to look at the total picture as well as specific failure mechanisms. Integrated circuits, in case you hadn't noticed, have become very reliable. Very seldom do computers, for example, break down because the integrated circuits in them go bad. The number of transistors on a single IC chip is now approaching a billion and yet chips function day after day, year after year, without a failure. It's just fantastic. In the old days, determining reliability was straightforward; sample numbers of ICs were operated under extreme conditions and the number of failed units with time noted. This approach is no longer valid. Integrated circuits have become so reliable and also so complex and expensive, that there is no practical way to perform sample reliability testing. It isn't possible, for example, to put a million Pentium chips on test to see if ten fail after five years. The only way to evaluate a complex chip's reliability is through computer simulation. Clemson developed one of the very first programs to do this through our work with the SRC.
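The testing impasse described here can be made concrete with a back-of-the-envelope series-system model, a worst-case simplification in which any single transistor failure kills the chip. The target numbers (a billion transistors, 99% survival over five years) are hypothetical, chosen only to show the scale:

```python
import math

def chip_reliability(n_transistors: float, fit_per_transistor: float, hours: float) -> float:
    """Probability a chip survives `hours` of operation, modeled as a
    series system of independent exponential failures.
    1 FIT = 1 failure per 10^9 device-hours."""
    lam = fit_per_transistor * 1e-9  # failures per hour per transistor
    return math.exp(-n_transistors * lam * hours)

# For a 1-billion-transistor chip to have a 99% chance of surviving
# 5 years of continuous operation, each transistor's failure rate
# must be astonishingly small:
n = 1e9
hours = 5 * 8760                                  # ~43,800 hours
target = 0.99
max_fit = -math.log(target) / (n * 1e-9 * hours)  # max FITs per transistor
print(f"max per-transistor rate: {max_fit:.2e} FIT")
```

The bound comes out around 2e-7 FIT per transistor. Demonstrating a rate that low by sample testing would require astronomically many device-hours, which is why reliability evaluation moved to computer simulation of failure mechanisms rather than brute-force life testing.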
But it turned out that we weren't the only ones that had considered reliability simulation. One of our SRC sponsors had also thought about it and developed what they thought was a clever way to do it. In fact, they thought it was so clever that they kept it a secret and never told us, even though we were working with them on the problem! It turned out that our method was quite often better than theirs, but had we known what they were doing, we could have improved both systems. Perhaps this helped that company competitively at the expense of the other members of the SRC, but it hurt our feelings not to be taken on board as a full member of the team. We made some terrific contributions in this particular area and shared it freely with industry as was expected, and I believe all the SRC members benefited from it. It should be noted, however, that a cooperative research venture like the SRC is unique in that each member company wants to keep its own research results close to the vest, but at the same time maximize its benefit from the jointly funded research.
Do companies come to universities for the expertise that's there, or because it's a different place? Because it's got a different atmosphere and it's got students? Do they come for a particular person they know that they think will be able to help them, or do they come to universities because they're universities? Might the companies think about this problem in terms of bringing people there instead of going to the universities? Or do they really want to go and have this done at universities for some reason?
The short answer is, all of the above. As you say, they find a faculty member that they like and who can help them out. They can consult with him and run problems by him. Then the faculty member could assign a graduate student to the problem. Usually this would be something that isn't of a crisis nature, yet the company would like to know more about it, but can't justify using its own limited resources. There is also the benefit of recruiting and evaluating prospective employees before graduation. In this sense the university functions as a baseball farm team. So companies want to have a good relationship with a number of universities, to have students work summers with them in their laboratories. They are able to pick and choose the best people and pre-train them. We've had very good placement results for all of our graduate students in this sense. I don't think that industry in general is looking to universities for any kind of a quick fix. It's usually a longer-range program, because universities are not good at quick turnaround, quick solutions.
Global integrated circuit industry
One of the big issues in the integrated circuit industry is, of course, the national issue and our competitive situation with the Japanese and some other countries. Do you have any thoughts about that as a general issue? Were there large factors at work in creating this difference—this perceived decline in the American semiconductor industry—or was it the specifics? Were there more specific reasons? Do you see any generalities about the whole thing?
Well, I'm not as capable of talking about this as people connected with the industry today. We were way ahead of all foreigners when I was involved, of course, but the Japanese came on fast, and they concentrated on the RAM, the random-access memory. But as far as I can tell, they're still quite far behind with regard to things like PC chips and ASIC chips — application-specific integrated circuits. We are very good in this country with regard to introducing a variety of innovative designs for different purposes. In Japan, they concentrated on commodity designs where there is high demand for relatively few designs, and they are very good at mass producing them inexpensively. The SRC that we discussed earlier, which supplies funds to universities for research on ICs, originally started out with all U.S. companies and then gradually acquired foreign companies, who began as associate members, but maybe they're full members now. There doesn't seem to be any difficulty in having German or Japanese concerns sitting in on the information exchange meetings with U.S. universities. I remember that when we first started with the SRC, there was concern that we should not discuss any of our results with foreign outfits. If they sent for a reprint, we shouldn't send it out and so on. I believe technical people learned long ago that attempting to keep things secret stifles everyone's progress more than having a free exchange.
The Japanese are praised for their manufacturing capabilities in their industries. Do you think there's anything about the structure of U.S. industry that either overemphasizes the research and development end or underemphasizes the manufacturing technology or something like that? Can you think of a plausible explanation for why the Japanese excel at manufacturing particular chips, but, as you say, didn't innovate much in other areas?
I don't know. I can't really answer that in any meaningful way. I have a sneaking suspicion that it might have something to do with our strong belief in the individual rather than team effort, but I just am not knowledgeable in that area. I am very pleased, however, that we are able to originate innovative designs and get them into chips quickly. Other countries have followed in the footsteps of the Japanese, first the South Koreans and more recently, the Chinese. The Chinese lag behind the South Koreans who lag behind the Japanese. However, in this global market everyone is running hard and at the same time looking over his shoulder.
When I was at MIT, one of my fellow graduate students in the microwave gas discharge lab was a Chinese girl. I used to give her rides to school across the Harvard Bridge. She and I received our PhDs about the same time, 1952 roughly, and she wanted to go back to China. The American students tried to talk her out of it because of the political turmoil there at the time. She did, however, and she became president of Fudan University, one of the premier universities in China that was written up in the recent Spectrum issue on Chinese integrated circuit technology. Professor Xide Xie was described recently by Popular Science as one of the five most influential scientists in China. I spoke with her this spring when she was in the United States. Our rule of thumb has been that China was at least five years behind. That is probably a good estimate at the moment, but it may not be for long with facilities like Fudan's springing up.
You have been involved in various ways in the research end of IC R&D over the years. Were there ever any particular incidents that illustrated problems in translating something from a laboratory to a manufacturer?
Well, let me perhaps digress slightly before the tape runs out to say something that occurred to me regarding innovation. At Clemson I tried to teach innovation, or at least to encourage it. I started a course for senior-year students in which a problem was presented to them and they were to come up with a solution. The class was divided into a number of groups, each consisting of five to ten students. Problems were conceptually outlined and were rather blue sky. None at the time had readily apparent solutions; in fact, many of the ideas didn't come to fruition for another ten years or so: things like paying for gasoline at the pump with a credit card, and a portable radio that would display the call letters of the station being listened to. They were really innovative problems, and what I was looking for were innovative solutions.
Each group would elect a group leader and all groups would work on the same problem. At the end of the semester they discussed their detailed solutions in presentations to the entire class and a panel of professors. Each presentation lasted 50 minutes. After all presentations had been made, the panel of professors critiqued each group's solution before the entire class and awarded “points” to each group. The group leaders would then take the points that had been awarded to their group and pass them out to the group members in the form of the semester grades for the course, A=4, B=3, etc. The more points a group got the higher the grades could be in the group. Group leaders were then graded by me. The overall number of points that the judges could award was limited to three times the number of students so that the average grade for the course remained a B from semester to semester.
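The point-budget arithmetic described above can be sketched briefly. This is only an illustrative sketch, not anything from the course itself; the function names and the 30-student class size are hypothetical:

```python
# Illustrative sketch (hypothetical names) of the course's grading arithmetic:
# the judges' total award is capped at 3 points per student, so on the
# 4.0 scale (A=4, B=3, ...) the class average is a B by construction.

def point_budget(num_students: int) -> int:
    """Total points the judging panel may award across all groups."""
    return 3 * num_students

def average_grade(points_awarded: int, num_students: int) -> float:
    """Class-average grade on the 4.0 scale."""
    return points_awarded / num_students

budget = point_budget(30)                # a hypothetical 30-student class
print(budget)                            # the panel has 90 points to distribute
print(average_grade(budget, 30))         # averages out to 3.0, i.e. a B
```

Within a group, the leader could then trade points among members (one A=4 balanced by one C=2, and so on) while the group total, and hence the class average, stayed fixed.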
One of the judging criteria was innovation. Others were economics and feasibility, as well as several other things dealing with the presentation. This pretty much simulated the real-world situation of industrial R&D. Interestingly enough, some of the solutions were quite innovative, while others were strictly problem solving. During the critique the professors tried to show how each solution could have been improved, but seldom were they able to come up with anything more innovative than the students, illustrating how difficult it is to teach innovation. The professors were usually able to offer quite a few suggestions regarding practicality and economics, however. The course served to introduce graduating electrical engineering seniors to the role that careful planning, innovation, economic considerations, public speaking, and the ability to work in teams play in the industrial environment. I presented a description of the course at the Frontiers in Education Conference in Terre Haute, Indiana in October 1987. It was received with interest, but I have not heard of any other similar EE courses that have been established. My experience with the course convinced me that innovation can be encouraged, but it can't be taught. Nevertheless, just like any true inventor I remain firmly committed, perhaps irrationally, to the idea of unstructured courses of that sort.
You've done an excellent job of attempting to put my 40 years of involvement with solid state in some sort of perspective. The work at NBS/DOFL on miniaturization pioneered the use of direct photolithography to define device geometry. It was used there to fabricate and interconnect devices in what today would be called hybrid circuit configurations. This technology was subsequently used at TI to construct the first actual integrated circuits. Solar cell and IC reliability studies at Clemson enabled the university to make contributions to the technology database and at the same time provided essential graduate student training. Innovation serves to provide the spark that ignites a technological explosion. While it is evident to me that innovation can neither be taught nor predicted, it is possible to create an atmosphere where the probability for innovation is increased. Innovation is like lightning in that respect. It should also be pointed out that there are lots of small innovations that occur in the course of any R&D project, but I would refer to these as "cleverness" which, while essential for a project's success, doesn't alter civilization like the IC did.
Finally, I would like to say that innovations could have both good and bad consequences. One discovery can lead to another with unpredictable results. The PC came about as a result of the IC, which came about as a result of the transistor. I can think of a number of unfortunate applications that have resulted from these inventions. Yet once the genie is out of the bottle, it is not possible to control it. It would not have made sense, or even been possible, back in the 50s to have avoided the IC, even if someone had wanted to. At that time an explosive mixture of technology was there just waiting for the spark of innovation. In that sense, it was inevitable. If Kilby and Noyce hadn't done it, someone else would have. And today it makes no more sense to debate the halting of some contemporary innovation, like cloning research, for example. Technology will advance as humankind's knowledge increases and there is little that can be done to stop it.
List of microminiaturization articles that were transmitted with the interview.
Nov. 1, 1957, Summary of "Photolithographic Fabrication Techniques for Transistors which are an Integral Part of a Printed Circuit," J.R. Nall and J.W. Lathrop, IRE Electron Devices Meeting, Washington, DC
Nov. 11, 1957, "Transistors Made by Photography," Electronic Week
Nov. 18, 1957, "Army Scientists Find 'Missing Link' to Reduce Size of Electronic Brains," Department of Defense News Release
Nov. 23, 1957, "Printed Transistor Developed Here," Washington Post
November 1957, "Transistors in Printed Seen Miniaturization Step," Electronic News
Dec. 1, 1957, "Photolithographic Transistor," Engineering Review article, Electronic Design
February 1958, "Army Develops Printed Transistors," Control Engineering
May 1959, "Recent Advances in the Application of Photolithographic Techniques to Semiconductor Devices," T.M. Liimatainen, IEEE paper 2950
May 1960, "Semiconductor Networks for Microelectronics," J.W. Lathrop, Electronics
February 1964, "Microelectronics," J.W. Lathrop, MIT Club of Boston speech
December 1964, "Semiconductor Network Technology—1964," J.W. Lathrop, Proc. IEEE, 52 (12): 1430-1433
August 1966, "Discretionary Wiring Approach to Large Scale Integration," Proc. of WESCON Record, Los Angeles, CA
November 1966, Preprint of "The Impact of Large Scale Integration on Packaging and Interconnection of Digital Electronic Systems," J.W. Lathrop, IEEE Microelectronics Comes of Age Symposium, Boston, MA
Nov. 1967, "Discretionary Wiring System as the Interface Between Design Automation and Semiconductor Array Manufacture," J.W. Lathrop, R.S. Clark, J.E. Hull, and R.M. Jennings, Proc. IEEE, 55 (11): 1988-1997
July 1976, "Invention of the Integrated Circuit," J.S. Kilby, IEEE Transactions on Electron Devices