About Gary Boone
Gary Boone was born in Canton, Ohio and graduated from Rose Polytechnic Institute in Indiana. After a stint at Collins Research in Iowa, where he got wind of the growing importance of MOS technology, he joined Texas Instruments. At Texas Instruments, he designed custom chips for business and industrial customers and, wishing to streamline this process, directed a team that developed early microcontrollers, notably the TMS-100. In 1973, he was awarded U.S. Patent 3,757,306 for a single-chip microprocessor architecture. He had departed Texas Instruments in 1972 to develop microcontrollers at Litronix, a business that, despite technological success, fell afoul of trade policies that gave an advantage to Asian goods. After this entrepreneurial venture, he joined the Electronics Division of Ford Motor Company, where he assisted the transition of automobile engines toward greater digital control. In 1982, he founded MicroMethods, a consultancy focused on patent issues. In the interview, he discusses the details of his career, the teams that developed the TMS-100 and TMX-1795, the challenges of inserting electronics into engines, and patent law and its relation to technological innovation.
Boone died on December 12, 2013.
About the Interview
GARY BOONE: An Interview Conducted by David Morton, Center for the History of Electrical Engineering, June 22, 1996
Interview #273 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc. and Rutgers, The State University of New Jersey
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Gary Boone, an oral history conducted in 1996 by David Morton, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.
INTERVIEW: GARY BOONE (#273)
INTERVIEWER: DAVID MORTON
PLACE: COLORADO SPRINGS, COLORADO
DATE: JUNE 22, 1996
Education and Collins Research
Why don't we start with where you were born. Can you tell me a little about your childhood, how you developed an interest in technical things, and maybe talk about your education a little bit.
I was born in Canton, Ohio. I grew up with an interest in technical and mathematical things. I went to college in Indiana; the school was Rose Polytechnic Institute, now known as Rose-Hulman Institute of Technology. I received a Bachelor's in Electrical Engineering in 1967. I worked for two years in Cedar Rapids, Iowa for Collins Research, doing test equipment and communication and computer projects.
I'm familiar with the Collins that made communications receivers and so forth. Is that the same company or part of that company?
It's the same company. It's also famous for avionics receivers. This is not quite as well known, but they are also very good at communications processors. That is, computers devoted to communicating.
So that was '67. They were, I take it, already well into the semiconductor age at that point. They weren't still using vacuum tubes?
(Laughing) Your age betrays you. Yes, we were well past vacuum tubes. The technology that was state-of-the-art is now called TTL.
MOS and Texas Instruments
There were hints that the technology called MOS was about to become very important. Collins had a branch in Newport Beach, California that was exploring the MOS-type technologies. I did not work there, but being aware of that change, or that trend toward MOS technologies, I ended up at Texas Instruments in late 1969. I spent a very productive couple of years, 1969 to 1972, at Texas Instruments. I had no idea at the time that I would end up in legal settings, giving testimony about the work that we did in those years. But, it turns out it was pretty important work, dealing with microprocessors and microcontrollers.
Do you want to talk about that some? How did that project get started? Did you have a specific application in mind?
Well, since you're the captive audience and you have to sit there, I think I'll give you the more than 5 minute version.
Very human situations cause these things. Inventions or discoveries occur because of a need. Sometimes that gets lost in the historical recaps or the little sound bites that the general public receives.
Let me try to describe the situation just before we decided to do what is now called microprocessor or microcontroller solutions. The business of the MOS Department at Texas Instruments was largely a custom business, where desktop calculator companies and the like would come to Texas Instruments with a set of specifications. Texas Instruments would turn those specifications into a chip set, typically four, five, six chips to implement the specifications for companies like Canon, or Olivetti, or Olympia. Those three I picked because I worked on those projects.
If I could interrupt you, how many chip sets would be sold, typically for one of these specific contracts? How big a business was that?
At the time I was not responsible for or paying much attention to the business aspects. They were modest, as you would expect for a business market, not a consumer market. I really don't have numbers on that. I'm sure we could find them. But one important thing to keep in mind is that the customers were quite pleased with this. The previous generation using technology called TTL might use a hundred chips or two hundred chips. So, to them it was amazing that we could provide this service of compressing all of that electronic wherewithal onto a mere four or five or six chips. MOS had this leverage built in. It was higher density, and we could do the same function in perhaps one-tenth the number of previous generation chips. That was a good business, but it was still an industrial market type business.
Genesis of TMS Microcontroller Chip
Now, switch to a human perspective. Here I was with a few other people, flying around the country, or to Italy, or to Germany, and trying to understand this new customer's requirements. Having done this a few times, you get bored. Furthermore, having customers that are pleased with previous projects means that you've got a whole flock of new customers trying to get you to do to their requirements what you did to somebody else's last year. So we had a significant demand for more of these projects.
The Texas rule that applied was "one riot, one ranger." That is, one chip [required] one engineer. So TI, with maybe twenty [MOS design] engineers, can deploy three or maybe four of these project teams at any one time. It takes maybe six months to do one [project]. So the capacity of this business is the number of engineers divided by the number of chips, every six months. Plus, speaking as one of the engineers who got put on those teams, they all looked pretty much the same. The individual requirements differed in detail, but in principle and in overall function, they were almost identical. So, what goes through your mind is, "I'm tired of doing this. I'm working long hours. My family is not happy. I've got to find a better way to do this."
You [TI MOS Marketing manager Daniel Baudouin and I] end up thinking about a matrix of customer requirements one way, and functional blocks or chunks of circuitry the other way. So, you can identify the commonality, and you mentally consider and simulate, "Okay, now if I had this many bytes of data storage, and I had this many bytes of program storage, and I had this many bits of keyboard scan interface, then that would cover all of the specifications I know about, maybe."
So that's the genesis of particularly the TMS 100 microcontroller chip: it came out of boredom, high demand and a vision of commonalities that were being inefficiently served by deploying huge teams with many chips.
There is one technical aspect: the design technology at the time was very inefficient in terms of how it used the area of silicon. You have to choose different architectures so that you emphasize memory and regular structures, and you have to emphasize pitch-matching so that you have a bit-slice concept where everything corresponding to each of four bits of a four bit adder is laid out to be the same dimension physically. Then it fits together and matches in pitch. Then you can achieve another factor of three or four in terms of the density of silicon. This multiplies the previously mentioned ten to one improvement in the number of chips versus the previous TTL technology. So you end up with something like forty or fifty times more efficient use of silicon if you accept the constraint of architectures that are primarily memory and pitch-match oriented.
This may be jumping ahead a bit, but did it take a bit of a selling job to get customers who might previously have asked for some kind of custom design to accept what you guys were touting as sort of general purpose chip?
There was considerable resistance from customers who were used to being in charge, as in "It's their project. They are paying for it. They own it." They were NOT used to being owners of a programmation, a variation of TI's product. Yes, that was a problem. Some of the customers did not like it.
There was also an internal popularity problem, in that some of the engineering teams did not like the risk, the uncertainty of the proposed newer, denser architecture compared to what was a routine, relatively low risk method that they were used to. So, there was some friction between competing teams within the MOS department at TI.
How did that work out?
Well, I don't know. I guess I've had it on my un-implemented "To do" list for a number of years that I should call some of those people and see what they think now. I think we won them over, but I don't know.
Microprocessor vs. Microcontroller
We were discussing the difference earlier between a microprocessor and a microcontroller, which was what you were working on. There seems to be a lot of confusion about that. Could you delineate the difference?
Well, terms of art mean what they mean. They don't mean what reporters think they mean. They don't mean what people unfamiliar with the microelectronics industry choose to assign as a meaning. They mean what the participants in microelectronics meant. Period.
[The term] "microprocessor", following this ownership doctrine, means a processor on a single chip. A processor has an instruction set and it operates on data, according to a program. The data, the program, and the input and the output are arranged externally to that single chip. Examples of microprocessors are Intel's 8008 processor and an experimental design at TI called the TMX-1795; those were a couple of the early ones. More contemporaneously, [Intel's] Pentium processor is a microprocessor; and, Motorola, IBM and Apple have a competing product known as Power PC. That is a microprocessor.
Okay, now, a "microcontroller" is a slightly different concept. It attempts to provide a self-contained processing system on a single chip. In addition to a processor, it also includes: memory for data, memory for program, an ability to deal with inputs like keyboard inputs, and an ability to provide outputs like numeric or alpha-numeric display outputs.
Normally there's a trade-off here, because by including all those things, the amount of processing power may be less than it would be on a corresponding microprocessor of the same price. So, the question is, do you want a blazing fast processor, and you'll provide external things like RAMs and ROMs to store, respectively, data and program? Okay, that's called a PC. That's what you do if you build a PC, or a Macintosh or a Sun workstation.
On the other hand, if you want to build a controller for your sprinkler system, or you're Ford Motor Company and you want to control the ignition timing for your engines, then you'd probably buy a microcontroller. You'd try to get everything over with on one chip. There might be a few minor exceptions, but the philosophy is dramatically different.
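The contrast Boone draws between the two philosophies can be restated as a small sketch. The part lists below are illustrative only, not drawn from any specific datasheet:

```python
# Illustrative sketch of Boone's distinction: a microprocessor system keeps
# memory and I/O external, while a microcontroller pulls them on-chip.
# These part lists are invented for illustration, not taken from a datasheet.

microprocessor_system = {
    "on_chip":  ["processor"],
    "external": ["RAM (data)", "ROM (program)",
                 "keyboard input", "display output"],
}

microcontroller = {
    "on_chip":  ["processor", "RAM (data)", "ROM (program)",
                 "keyboard input", "display output"],
    "external": [],   # "get everything over with on one chip"
}
```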
Applications and Growth of Market
So the applications were, in today's terminology, sort of embedded. You mentioned the calculator business. Were you guys also thinking in terms of some of the other applications, such as industrial applications for microcontrollers? Was this product well suited to that market? Or did that market even exist?
The answer to all of the above is yes. We did think about it, you can read about it in our very first patent applications. [For example, see U.S. Patents 3,757,306 and 4,074,351.] I specifically remember [we anticipated using microcontrollers in] taxi meters, digital volt meters, and counters. The concept of a microcontroller was there from the start. What are now called embedded applications were the focus. We could see the power of using programmed microcontroller solutions in place of dedicated electronics. We would look at some customer's rack of electronics, and instead of seeing the 150 chips or whatever, all wired together with various little knobs and lamps and displays, we would see a microcontroller and a standard display and a standard keyboard input.
In fact, that vision came true in spades. There are over 2 billion microcontrollers produced every year now. How many people are there on the planet? Everybody in any developed country is using microcontrollers.
That's interesting. You haven't been on the marketing side of this, but maybe you know this anyway. How has that market developed in the last twenty years, or is it thirty years? Has it been exponential in the last few years, or has it been growing steadily? It seems that there are a lot more consumer applications now than there used to be. Was the market much smaller for a long time?
It was, at least subjectively, a little slow to start. You had the resistance of customers who knew how to do it the old way, and maybe didn't want to bother with learning a different way. And it also involved a skill shift, from a hardware orientation to a software orientation. We could write a little program to turn a generic microcontroller into a sprinkler system. Instead of wiring up a bunch of gates and valves directly in hardware, the program says turn valve 7 on, and the valve 7 driver says okay and turns it on.
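The hardware-to-software shift Boone describes might be sketched as follows; the valve numbering and the driver function are hypothetical, invented here for illustration:

```python
# Hypothetical sketch: instead of wiring gates to valves directly in
# hardware, a generic microcontroller runs a small program that commands
# a valve driver.

valve_state = {}

def set_valve(valve_id, on):
    """Stand-in for the valve driver: the program says 'turn valve 7 on',
    and the driver records the command and actuates the hardware."""
    valve_state[valve_id] = on

# The sprinkler "system" is now just a program over generic hardware:
schedule = [(7, True), (3, True), (7, False)]
for valve_id, on in schedule:
    set_valve(valve_id, on)

print(valve_state)  # {7: False, 3: True}
```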
Appeal and Advantages of Microcontrollers
This may be more of a reflection than a question, but in the early history of computing, the attraction of computers was that they were general purpose things that could be reprogrammed easily to do different tasks. Whereas after it is installed in something, you may or may not ever change the program of your sprinkler system. Is it still part of the attraction of these things that they can be easily reprogrammed, or is their main attraction that they're cost saving? Are people out there using these things in applications where the task changes significantly? I'm thinking, for example, of a car, where they might have these microcontrollers, but I seriously doubt if you would ever change the program. So what's the attraction of a microcontroller versus a hardwired system?
Okay. There are several aspects that I'll try to untangle here. First of all, the notion of a microcontroller [includes] a permanent memory for storing a program.
Once it's programmed, generally it stays put. It just runs that program. But then, that's for, say the sprinkler system. The same generic microcontroller could be bought by a different company, say a company that makes bathroom scales, and now that microcontroller is weighing you. So its output is mostly a display, and its input is the sensor that operates as a strain gauge to measure your weight. Viewed from the manufacturing point-of-view, it's the same chip, with the same costs, from the same factory, and of the same design. Yet it applies to both customers. The only difference is [TI or its] customer installs a program to configure it as a sprinkler system, or an ignition system, or a bathroom scale.
Now, getting to your question about what's the attraction: The attraction, from a business and manufacturing point-of-view, is the economy of serving multiple application needs from a common factory, a common design, and a common technology. It is the ability to combine those in ways that are transparent to the factory. The factory doesn't really care what [chips] it's making today, because they all look alike at the factory level.
There is also a second aspect that is known as a learning curve. In manufacturing experience, you tend to reduce the costs roughly 20 or 30 percent every time you double the volume. So it makes a great deal of sense to try to aggregate as much as you can into one manufacturing process, because you can then take that whole volume and that whole factory down a learning curve that improves 20 percent per octave, as long as you keep increasing the volume.
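The 20-percent-per-octave rule Boone cites can be written out directly. Only the 20% decline rate comes from his description; the dollar and volume figures below are assumed for illustration:

```python
import math

# Learning-curve rule of thumb: unit cost falls roughly 20% each time
# cumulative volume doubles ("20 percent per octave").

def unit_cost(initial_cost, volume, initial_volume, decline=0.20):
    """Cost after volume has doubled log2(volume/initial_volume) times."""
    doublings = math.log2(volume / initial_volume)
    return initial_cost * (1 - decline) ** doublings

# Assumed figures: $10/chip at 1M cumulative units. After volume grows to
# 8M units (three doublings), cost is 10 * 0.8**3:
print(round(unit_cost(10.0, 8e6, 1e6), 2))  # 5.12
```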
Let's move back a little bit. Can you tell me about your work on the TMX-1795 and the TMS-100? I take it from what you've said that you were more or less the designer? Who were the other people on the team? For instance, who did the logic and what were the other stages? Maybe you could tell me a little about how the team worked together, and who contributed what.
Okay. Well let's talk about that, one item at a time. The TMS-100 happens to be on my mind. I can talk about that one more quickly. The team that implemented the TMS-100 microcontroller, which initially served the calculator requirements for that new design, included about six people. At the top of my list for other contributors are Mike Cochran and Jerry Vandierendonck. Mike joined the team in March. It had been a project that was making decisions and figuring out how to do things since December. Working parts were functioning properly on July 4 of 1971. So the project ran about seven months and Mike joined just before the mid-point. By that time I had the basic elements outlined: how much ROM and how much RAM and what kind of ALU and so forth. Also, how to do keyboard scanning and display scanning was outlined.
What Mike and Jerry did in a series of vicious iterations of the design was test those premises of what was included, because this was a big chip. It was pushing the limits of what could be built on one chip. Again, it was trying to consolidate what is normally or conventionally three or four chips. And it was using an architecture that I proposed but nobody had built before. So there were these vicious iterations, where we would take a design or a set of assumptions about what a design consists of, and try it.
I think Mike's contribution was his ability to hold all of that, hardware and software, logic and circuit and timing, all of these aspects in his head at the same time. He could envision what was being efficiently handled, like the floating point routines he had to figure out for floating point arithmetic.
So we met the initial customer requirements with a tentative design, and then [reviewed], "Well, we're using up a lot of silicon to do this little tiny function. Is there some better way to do that?" And so we would completely iterate the design. I think there were about three or four of those vicious iterations. But then the design that resulted fit on one chip and functioned properly. After it started being delivered in, I forget, August maybe, it paid back the development costs eighteen times over in the four months that were left in that calendar year.
Jerry was a very diligent and a very careful designer. Nominally he was responsible for the ALU design of that TMS-100 processor. But he did much more than that. He served what I call the integrating role.
Most design projects [that fail] don't die in the hard part. They die in the interface between things that nobody is watching. Jerry was watching all of those cracks.
What's an example of that?
An example of what?
The interface. I'm not sure I follow what you mean there.
Okay, you've got a team of engineers to whom you delegate portions of the design. So, if I recall, Tony Bell was designing a RAM to store the data. Roger Fisher was designing a ROM to store the program. Jerry was designing an ALU to do BCD arithmetic, binary coded decimal arithmetic, where 7 plus 5 becomes 12. That is, two digit [registers, one holding a 1, and the other holding a 2]. Mike was nominally writing the software. Each of those guys would do their job precisely. But somebody has to be in charge of integrating all of that — making sure that the program generates instructions that are decoded properly, and that the decoded control signals operate the ALU and the ROM and the RAM correctly in synchronization so that nothing breaks, nothing fails. That integrating role was largely Jerry's.
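The BCD addition Boone describes (7 plus 5 producing the digits 1 and 2) hinges on a decimal-adjust step. This is a generic sketch of that technique, not the actual TMS-100 ALU logic:

```python
# Generic BCD digit addition: add in binary, then add 6 to skip over the
# six invalid 4-bit codes whenever the sum exceeds 9, yielding a carry
# and a valid decimal digit. (Just the idea, not the TMS-100 circuit.)

def bcd_add_digit(a, b, carry_in=0):
    s = a + b + carry_in
    if s > 9:                          # not a valid decimal digit
        s += 6                         # decimal adjust
    return (s >> 4) & 1, s & 0xF       # (carry out, low digit)

print(bcd_add_digit(7, 5))  # (1, 2) -- the two digit registers holding "12"
```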
I see. Okay. For a second there, I thought you were suggesting that he was integrating the personal interfaces, personalities. Making sure people were working together or something like that. Were there any problems like that?
I guess that's probably true as well.
I guess that brings up something else. I think I misunderstood your role in all of this. Were you at the head of this project, doing the architecture, sort of watching over the whole thing? Or were you actually in there doing part of the design stuff?
I was head architect and project leader until about April of '71, and designated designer for portions of what I would call the first iteration of the design. The baton passed from me to Mike Cochran, in terms of the architectural lead, in about April. The baton passed from me to a circuit engineering manager named Dick Gossen, at that same time. There was a handoff of the implementation responsibilities about halfway through the project.
Design Tools and Simulations
In terms of making this thing work, were there any tools available to you guys at that time? Circuit simulation, rule checking, was any of that kind of stuff available, or was this all done on paper?
Well, I really want to answer that twice. Because the first answer is accurate but it's not complete. The first answer, and you could get this from any number of sources, is that there were almost no CAD tools or design aids or design automation. Those things basically did not exist. We were doing a lot of work manually, in our heads, in manual calculations. We were using computer simulations, analogous to SPICE and logic simulation, that were provided by TI's design software department.
That's the first answer, but it's nowhere near complete. There were some spectacular things about this project. To my knowledge TMS-100 was the first major [microprocessor] chip project that did not have a breadboard. Because of the density of MOS circuitry, because there are so many transistors on one chip, the conventional practice was to do a breadboard. That way, you could put a scope probe down and watch any signal you chose, on the breadboard. Then, theoretically, you could translate the breadboard circuitry into MOS circuitry and it should just work.
But there were problems with breadboards. The technology used in the breadboard did not match some important aspects of the way MOS transistors actually behave. So, I made the decision, "We're not going to have a breadboard. It's more trouble than it's worth. We're going to do simulations."
The design software department of Texas Instruments provided, not just off-the-shelf simulation programs, but also programmers. They assisted us in a critical way. It would not have worked without them. We relied on our own simulations and our own software. Today it would be called register transfer model simulating. We modeled the [instruction] processing and program and data and input and output resources on this chip. That basically permitted us to test the design at a high level, before our silicon existed. That was a leap, an innovation in design methodology. It's very common now, of course. Nobody uses breadboards now.
So, that reminds me of two other people who were involved in the project. One is Dan Wu, and the other is Sudhakar Joshi, [and Charles Brixey].
TI, Intel and TMX-1795
What was the relationship between TI and Intel? Did you guys have any idea what the other group was doing in microprocessors? I know there are a lot of stories in this industry of visits to other places and sort of getting ideas, a sort of semi-openness in the industry. Was that your experience?
No, my recollection is that it was pretty competitive and pretty private. There were a couple of episodes that I know about where somebody overheard something in a coffee house. For example, a situation that I know about that I characterize as competitive pertains to the TI project called TMX-1795 and the corresponding Intel project that was originally called 1201, and commercially known as 8008. Here's what happened. A company called CTC, Computer Terminal Corporation, now known as Datapoint, apparently made contracts with both TI and Intel. I was aware of the TI part of that. That is, CTC and the principal architect there, whose name is Vic Poor, gave me a requirements document. Initially, I tried to do it on three chips using conventional design methodology. He told me that was unacceptable, saying "Intel can do it on one." That is the first time I had heard about Intel on that project.
So we got sent home. CTC was in San Antonio, and TI was in Houston. So we got sent home to Houston to rethink whether we were going to give up or try to do it on one chip. We received what you might reasonably characterize as hints about Intel doing a better job than we were, or Intel promising a better result than we promised. In any event, we implemented an original design, TMX-1795 design, [and later, slightly revised design, called TMX-1795A], to meet Mr. Poor's requirements, including his one-chip requirement. Any assertion that we improperly received information belonging to anybody else is incorrect. We did not.
Maybe we can come back to this, but we ought to run through at least some of these later positions you held. You left Texas Instruments in 1972, after these projects were completed, and went to Litronix.
Litronix started out life as an LED company. If I remember correctly, they had tremendous growth in the early years of their existence, going from essentially nothing to about $14 million worth of LEDs when I got there. Then in the next few years we went from there to $44 million, and then crashed back to $15 million.
Where was this company?
It was located on Homestead Road in Cupertino California. Litronix eventually got bought out by the German company Siemens.
You mention in your resume doing VLSI work there. Were they making that transition from LEDs to integrated circuits at that time?
Yes, they were doing what the industry calls "integrating". That is, the display company wanted to produce everything: not just the display that goes in calculators (the little red lights), but also the processing chip (the MOS part of the calculator), the keyboard, and an inductor in the power supply. So we did that. We built very high quality calculators. We were for a couple of years the highest unit-volume calculator company in the world. They were of excellent quality and were offered with an unconditional warranty. Everything was fine, except Hong Kong kicked our butt.
So you went there and, if I'm reading this right, did the architecture for a new microprocessor for this calculator. Interesting that their attitude was that they wanted to make their own. They weren't going to use one made by somebody else. Was there an economic justification for that? Did they think that would give them some sort of advantage or did they want a custom product?
Well, there are different business models, and the model that Litronix chose was the TI model of a highly integrated company in the sense that they made everything. They didn't have to buy very much other than sand to make calculators, either at TI or Litronix. You had basically silicon in, calculators out. Litronix was already proficient in the gallium arsenide and gallium arsenide phosphide technologies that are essential to making LED displays. They were very good at that. They were number one in the world at that. They basically decided to become an integrated calculator company following the business model that was popular at the time, to try to do everything yourself.
Did they try to make a gallium arsenide microprocessor?
No, no. That's only a more recent outrageous idea. They were good at making gallium arsenide display devices. Electricity in, photons out.
By that time, you'd sort of done this before. Was there anything radically new about this? Or was it getting to be a standardized process by this time?
I'm startled with "standardized." No, it was not standardized. It was second generation, more sophisticated architecture and design. Yet another factor of two or something improving the density in silicon. Litronix recruited me to put together a team and develop this capability. So I brought with me a couple of people from TI, Jerry Vandierendonck and Dan Wu. I hired another systems and logic guy from a different culture, a competing architecture contingent from Rockwell. His name is Ray Lubow.
This new team, organized at Litronix in 1972, then produced another microcontroller. You said microprocessor but I think you meant microcontroller. Again, aimed at calculator type things, calculators being the biggest identified microcontroller market at that time. As a result of the production efficiency and the lowered costs and so forth, we were able to sell calculators profitably as low as $19.95, which was even more remarkable than the TI breakthrough that startled people a year earlier, breaking $100. So there were innovations. But the philosophy was not aggressive, in terms of patents, and most of the inventions were not patented. That's kind of a shame.
Business Difficulties of Litronix
You mentioned the downfall of this as being connected to overseas manufacturing. Were there other factors involved? Was it difficult for this company to make that transition to integration? Did that go smoothly?
Well again, "smoothly" is not an accurate description. It was anything but smooth. But it was successful in ways that defied predictions that it would not be successful. For example, there's a transition most companies go through at about $25 million a year. A lot of companies don't survive that transition. Basically, the business is too big to be run as a grocery store, and you have to start the transition of founders giving up power to professional manager types. Traditionally that occurs somewhere around $25 million. We went right through there. Skated right through $25 million and went up to almost twice that.
Another example is consumer marketing. Conventional wisdom is that component companies will fail if they attempt consumer marketing. We did not fail. We hired brilliant people in consumer marketing [including Russ Stewart and Dick Veatch]. We had an advertising campaign that emphasized quality and we delivered on it. We had wonderful ads. We had a wonderful reception in both the department store channel, like Macy's, and the discount channel, like K-Mart. Consumer marketing was not the problem. Japan was not the problem.
The problem was differential tax rates and differential profits of off-shore, lower quality companies. It went like this: Some XYZ Semiconductor company would produce and deliver relatively low quality, relatively low feature calculator chips for bargain prices, that were assembled [into calculators] in Hong Kong. Since these calculators were shipped FOB Hong Kong, consumers' opportunity to remedy problems was about zero. They retailed about $9.95. We could compete down to a retail price of about $12.95. So there was maybe a two dollar differential for a high quality, unconditionally guaranteed product versus one that somehow manages to call itself a calculator, but doesn't offer much function and has a battery life that might be one hour. There are serious problems with it, but it is $9.95.
At that time there was also a legal change, something called the Magnuson-Moss Warranty Act, which established a [plain English] consumer warranty, which kicked in at, you guessed it, $9.95. That is, anything above ten dollars is subject to the Warranty Act. Basically the Act is a good thing. What it did is put an end to a portion of consumer fraud that pertains to warranties.
The other thing was the differential tax rate. The Hong Kong rate was 18%, I think. The U. S. tax rate was 45%. So, if you had the same manufacturing costs, and you had the same after-tax profit, that translated to a $2 difference in retail price. The effect of the 18% tax rate on the Hong Kong calculator company compared to any domestic calculator company, was about $2 retail, enough [for the Hong Kong company to escape the effects of the warranty Act]. It was incredibly vicious and disappointing to have done the very best we could do, and get creamed by such flaky products.
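As a rough sketch of the arithmetic being described (the specific per-unit figures and markup here are illustrative assumptions, not numbers from the interview): to net the same after-tax profit, a company taxed at 45% must earn noticeably more pre-tax than one taxed at 18%, and that gap flows through to the shelf price.

```python
def pretax_needed(after_tax_profit, tax_rate):
    """Pre-tax profit required to net a given after-tax profit."""
    return after_tax_profit / (1.0 - tax_rate)

# Assumed figures, for illustration only:
target = 1.50                              # after-tax profit per unit, wholesale
us_pretax = pretax_needed(target, 0.45)    # roughly $2.73 pre-tax needed
hk_pretax = pretax_needed(target, 0.18)    # roughly $1.83 pre-tax needed
wholesale_gap = us_pretax - hk_pretax      # roughly $0.90 per unit
retail_gap = wholesale_gap * 2.0           # assuming a 2x retail markup
print(round(retail_gap, 2))
```

With these assumed inputs the gap lands in the neighborhood of the two dollars the interview cites, which is the point: a tax-rate difference alone can move a product across a retail price threshold.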
Ford Electronics Division
When was that happening? Was that at the end of the '70s? Did you leave right after that?
I think it was '76, around in there. I hung around a little while before I went to the relatively high ground of Dearborn, Michigan.
Why don't we go ahead and talk about that? I'm reading that in '77 you went to Ford Motor Company in the Electronics Division. At that time, what did that mean? Were you working on electronics for automobiles or electronics for the production of automobiles?
Ford's Electronics Division had the responsibility to produce modules for internal sale to the automotive car line divisions, including entertainment things like radios, options like climate control, and also, at the time I was getting there, ignition and fuel injection control, which was known under the acronym EEC for Electronic Engine Control. I think it's now called ECM or Engine Control Module.
Ford actually manufactured its own chips?
No, and that's why I went there. In this case, two business models were followed religiously: a highly integrated solution at GM's Delco division, and a religious dependence on suppliers at Ford. Ford needed to develop good suppliers. I personally believe the correct solution was the latter. So I went to Ford in the capacity of the technical liaison between Ford's engineering groups and outside suppliers like Intel and Motorola.
We haven't talked much about this, but was there a big difference in going from a company like TI (I don't know much about Litronix in terms of their corporate structure) to Ford, which obviously is a big, established company with all of the characteristics you associate with that. Was this a big transition for you personally? Texas Instruments in its early years, for example, was known for this sort of entrepreneurial spirit. Was that lost going to Ford? Or did that not really matter to you?
Well, it was sort of a deliberate retreat. Remember, I had just been scorched. I felt disillusioned with the entrepreneurial, start-up dream because I had just been through it in a five year marathon at Litronix. I never felt TI was particularly entrepreneurial. They were just another big company to me. But Litronix was definitely entrepreneurial, and I was going for the brass ring and was somewhat startled when it didn't work. It was to some extent a disillusioned engineering manager who showed up in Dearborn offering to help Ford integrate better with its suppliers, and get better results than their competition, the highly integrated Delco operation at GM.
One of the interesting things about your career at Ford is getting in on the early period of putting digital electronics into automobiles. That's something that historians of technology haven't focused much on. What, from a management perspective, were motivations for that? And were there people who felt like "an automobile is an automobile, and an electronic device is an electronic device"? Was there any resistance to that, or was it embraced?
It was not embraced. The Detroit engineering community is 95% mechanical engineering. They did not like the idea of electronics being involved in the real part of the car. It's okay to have an electronic radio, or even electronic control of your air-conditioning. But putting a small computer in charge of spark and fuel? "That's terrible! We don't need that! Let's just build a fancier carburetor." That was the reaction.
The compelling force that launched electronics into the powertrain, into the core of the automotive engineering design, was the government. There were emissions standards, and those were government mandated. And, in the period when I was there, there was a second major government requirement kicking in, called the CAFE Law, or Corporate Average Fuel Economy Law. This Law mandated a progression to higher and higher miles per gallon, taken as a weighted average by sales.
There's a third inherent requirement, not government mandated, but coming from marketing and sales demand, and that is high performance.
So you have this three way contest for how the powertrain shall perform, trying to minimize emissions, maximize fuel economy, and at least retain if not improve performance. It was horrible. The mechanical solutions were having greater and greater difficulty meeting the mandates. Consequently, these EECs or ECMs were reluctantly adopted.
There is a type of ROM program called a "strategy" that maps the sensor inputs (like accelerator position, and current velocity, and temperature) into spark timing and fuel injection timing. Various new strategies are tested each year, [applying for legal ability to sell cars having certified strategies].
Slow adoption is a function of certification. It cost about a quarter of a million dollars for certification of a strategy. You place your bets on alternative strategies, which are different optimal compromises of emissions, economy and performance. Each strategy has to be tested to demonstrate that it meets emissions mandates for 50,000 miles (or 1000 hours, or six weeks), on the test track. The best compromises, among certified strategies, get built for following model years. Over a period of five or six years, Detroit finally integrated computerized ignition and fuel timing.
Was this a fundamentally different kind of project than the ones you'd worked on before? Was there an enormous amount to learn to do this? Or did calculator microcontroller techniques translate pretty well into this?
Technically it was not that big of a leap. Microcontroller approaches weren't straight-forward, but they could be applied and then enhanced. For instance, I think there's a Ford invention known as PPP, which is Piston Position Predictor. It was important to know exactly what's going on in this mechanical, chemical, combustion environment. There were some additional blocks needed to match the real-time processing requirement, where the spark in the next cylinder needs to be timed, and that might be a function of piston position.
Another example is a Knock Sensor, so you can advance the timing, which tends to improve both emissions and performance to a degree until you get the knock. You had to consider the possibility of adding the cost of a Knock Sensor to keep that strategy out of trouble. In other words, it's potentially a better strategy, but you can't have engines knocking. So, it would include a Knock Sensor. So improvements were needed for the real-time processing.
Performance Requirements & Challenges
The other dimension that was severely different, compared to consumer things like pocket calculators or wristwatches, was reliability and temperature. You drive in arctic to tropical temperature and humidity ranges, and all of this stuff is supposed to keep working? That was a shock to some of Ford's suppliers.
Ford already knew all of this. They had been through all kinds of problems with early transistorized ignitions, for example. They knew that disaster lurks if you're not careful. To "high tech" suppliers like Intel and Motorola, that was horribly startling. Automotive specifications are startling.
Did the need to achieve that kind of performance get translated into the actual design of the device? Or was it just a matter of packaging external components that made the difference?
Let me answer a little bit facetiously, and then I'll come back and be serious. One time early in my career, I was at Collins Radio. I was the hot-shot electronics guy who was applying this latest technology [to design a box of equipment], and assigned to the team with me was a mechanical engineer. I made a statement, something like, "Tell you what. You take care of all the mechanical parts and I'll just take care of this one little piece of electronics here." He glared at me and he said, "All parts are mechanical."
Everything physical, everything that has three dimensions, is a part. Consequently, things like packaging are not separate from design; they're part of design. You have to treat them with as much respect as you treat bits and bytes and megahertz. The corrosion resistance, or the resistance to vibration, or the ability to temperature cycle and not generate cracks, these are design things. Ford had more experience with those aspects than some of its suppliers did. Some of its suppliers needed that lecture I got: All parts are mechanical.
Just following up on that, were there also circuit designs made for, say, thermal stability? Was that true or was it not really necessary?
Yes, getting back to the more serious answer to your question, I think where you were headed is electronics design. The microprocessor, for example, was that different? To some extent, the answer is yes.
You always depend, in any chip design, on what's called "characterization data". As a design engineer, you want to know what a process produces. Never mind what you'd like it to produce, what does it produce? Then you basically rely on that production reality data to do your calculations and figure out whether it's going to run at 10 megahertz or 20 megahertz without difficulty, or 200 megahertz.
You then factor in the application reality. So that, for example, in an automotive application, you've got horrendous noise. Electrically it looks like noise. But sparks generate transients that could potentially cause a malfunction of the chip that's causing the spark. So you get into this fight between who's in charge, the chip causing the spark or the spark causing that chip to fail? [You end up with constraints from both ends: from chip manufacturing reality such as min and max chip voltage supply range; and from application reality such as min and max passenger compartment temperature range. All of those constraints define a parametric box that allows the design to function properly — while meeting all those constraints, those edges of the parametric box.] Then the thing runs perfectly and nobody has problems. We did that.
There's another dimension too. We integrated some architectural improvements so that a microprocessor became a microcontroller when it incorporated I/O. But that was not real-time I/O. For example, in the TMS-100 chip, the program was much faster than a human. So there was not a real-time aspect to it.
Now, fast-forward to Detroit, and you get 6000 rpm times 4 cylinders per revolution. So, how much time do we have to calculate settings for the next spark? And to what accuracy do we need to know the clock time that a given sensor went off?
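Taking the figures in that question at face value (this back-of-envelope sketch is mine, not the interviewee's), the real-time budget per spark event works out to a couple of milliseconds:

```python
def spark_budget_ms(rpm, sparks_per_rev):
    """Milliseconds available between successive spark events,
    given engine speed and spark events per revolution."""
    sparks_per_sec = (rpm / 60.0) * sparks_per_rev
    return 1000.0 / sparks_per_sec

# At the quoted 6000 rpm and 4 events per revolution:
print(spark_budget_ms(6000, 4))  # 2.5 ms to compute the next spark's settings
```

That millisecond-scale budget, tied to a physical crankshaft rather than a human keypress, is exactly the gap between a calculator-era microcontroller and an engine controller.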
Say an event happens and a clock circuit time stamps it so you know the event happened at that time. Then, it may or may not get processed in the order that it was received. But at least you know what time the event was stamped, so you can calculate how long ago it happened. You end up with a new appendage on a microcontroller, in charge of real-time events.
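A minimal sketch of that timestamp-and-queue idea in Python (the class and its interface are hypothetical, invented here to illustrate the mechanism): a capture unit stamps each event with the time it occurred, so later processing can recover how long ago it happened even if events are handled out of arrival order.

```python
import heapq
import itertools

class EventCapture:
    """Stamps events with their capture time; processing may lag or reorder."""

    def __init__(self):
        self._events = []
        self._seq = itertools.count()  # tie-breaker for equal timestamps

    def capture(self, now_us, payload):
        """Record an event with the clock time (microseconds) it occurred."""
        heapq.heappush(self._events, (now_us, next(self._seq), payload))

    def elapsed_us(self, now_us):
        """Age of the oldest unprocessed event, or None if the queue is empty."""
        if not self._events:
            return None
        stamp, _, _ = self._events[0]
        return now_us - stamp

    def pop_oldest(self):
        """Process the oldest event, returning (timestamp, payload)."""
        stamp, _, payload = heapq.heappop(self._events)
        return stamp, payload
```

The point of the sketch is the separation of concerns the passage describes: the stamping happens in hardware at event time, while the arithmetic on "how long ago" happens whenever the processor gets around to it.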
Patents & Innovations
Why don't we shift gears a little bit and talk about your latest venture, MicroMethods, which you started in '82. This sort of leads us into a more general discussion of innovation. Now you're working more with outside companies and looking at their problems related to particular innovations in the form of patents. Are there any general observations you might make about what the relationship is in this industry between a patent and an innovation? The simple, obvious relationship between a patent and an innovation is that a patent is a description of a new technology, that a patent protects an inventor from having somebody else copy that technology, and that a patent accurately describes what that technology is and how to make it. Do those things hold true in this industry? Do patents function the way they're intended to function?
Okay. I think it's important to start at the source, which in this case is the U.S. Constitution. I'm looking for the quote. Article 1, Section 8, Clause 8 says that Congress shall have the power "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." So there is Constitutional basis for all intellectual property statutes and regulations, that sentence that I just read.
I guess your question boils down to, "How does that work?" or, "Does it work?". I think I can come at that several ways. Let me start with just personal experience. We've already talked about some projects at Texas Instruments that landed me effectively in court as a somewhat uncooperative witness in litigation between TI and companies that TI was trying to take money from. I also attended a CAFC proceeding to determine inventorship of single chip microcomputers and microcontrollers.
As a function of actually working on those things, I was obliged to learn about patents. I could not understand what the lawyers were talking about. We earlier talked about the terms of art in engineering, such as microprocessor and microcontroller. The terms of art in patents are by far worse. It's very difficult to understand. But a funny thing happened on the way to understanding that. I took a course and got registered to practice as a patent agent before the U. S. Patent Office. I do not offer that service. But, in effect, I took a course and passed the exam that allows you to do that. So, nominally, I should understand some patent terms of art. The funny thing that happened to me in that experience was that my respect (for patents, for patenting, and even for patent lawyers) went up.
I was not doing this willingly. I was doing this defensively to try to understand what was happening. Yet, now, here I am an advocate of the system. It really does work. It has some strange terminology associated with it, and it has some unintended consequences associated with it. But, in general, it is a good system and, in my opinion, it is critical to the continued prosperity of the United States. We don't have any other kind of property that is protectable. Intellectual property is our property and if we don't have a system in place to protect that intellectual property, then countries with larger populations and, I would say, more aggressive competitive environments (such as Japan, Korea and China) will take prosperity away from us. We must have these intellectual property statutes, or we will lose big time.
Let's see. I'm trying to think of a specific example that would illustrate this. Say there's a large company established in some market and there's a small company that has some better ideas and is trying to become successful competing against this large company. The critically important thing in terms of the American style of free enterprise is the capacity of little companies to avoid being squashed, and thus to become successful. Let's say the little company takes their bright idea and obtains a patent that somehow is required if you're going to use the invention that they've come up with. The way that works is, legally the patent is a right to exclude. The little company has the right to exclude the big company from using their invention for a limited period, which is now twenty years from the date that they applied for their patent. To answer your question, "How does it work?", that buys them time to exploit their invention for a limited time. Presumably in that limited time they can come up with yet more bright ideas and patent those, so that their company growth can continue on another limited time from those other inventions.
Of course, big companies are entitled to patent any inventions that they come up with as well. So, you end up in two kinds of negotiations. In one kind you have big companies with big portfolios. Say one guy has a thousand patents, and another guy has fifteen hundred patents. They probably meet every couple of years and negotiate, in effect, the value of those two portfolios. You figure out which patents in company A apply to products made by company B and assign some royalty to that. Then you do the reverse and figure out which patents of company B apply to which products in company A. By multiplying out the volumes of the products, you figure out the net, whether company A owes company B so-many dollars, or the reverse.
So you have essentially peers negotiating relatively minor license fees in the sense that they will not materially affect the survival, or possibly even the income, of either company. But it is a respectful, positive thing that they go ahead and conduct these negotiations and effect payment in some reasonable manner. This happens all the time. The bulk of my business is offering MicroMethods (R) services with regard to analyzing patents and processors for big companies like this.
The second kind of negotiation is more of a survival situation, where a little company has an invention that either is already being used by a big company, or the big company wants to use it. The law says that the little company has the right to exclude the big company if it chooses. That's the way the law works, and in practice it usually turns out that the little company is willing to settle for some chunk of money or some other asset of value in trade, in order to grant the big company a license to use the invention of the little company. This can be life or death for that little company.
If you're competing against a billion dollar company and you're two guys in a garage, you better not just have a good idea, you better have a patented good idea, or that big company is going to squash you. That kind of situation is unfortunately not well understood. Some little companies don't understand how critically important it is that they not just invent things, but go on ahead through the relatively tedious bureaucratic process of patenting the good idea. It's not an easy thing to do, but in my opinion it is also not an optional thing to do if you're a little guy.
Inventing around Patents
Have you seen many examples of companies inventing around patents? Do you see that as a major source of minor innovations? That patents act as sort of obstacles to be overcome and therefore spurs to certain kinds of inventive activity?
Yes, the situation often arises where a company wishes to produce a product, and at least at the executive levels they don't really care whether they use method A or method B, or whether they use invention C or invention D. But, at a practical level, those things, those patented inventions may have an entry price that's severe. Quite often, engineering projects get an additional constraint. Remember the parametric box that we talked about that had constraints from the reality of [chip manufacturing tolerances], and then other constraints from the reality of temperatures and environment and so on from the application. Well, this is just a third reality, which is the need for licenses if you wander into patented territory.
The practical reality for complex products is there will be a number of licenses necessary. I don't have a good, credible source on this, but somebody estimated that to make a modern memory chip you need licenses to about a hundred patents. You're just not legally "entitled" to make a DRAM, for instance.
So, these are not trivial questions, these "design around" situations. Quite often that's what a design guy like me will be brought in for, to think about a way to do the required function that avoids using a certain known invention. If you don't use it, and you're both technically and legally careful to establish that you don't use it, then you can potentially avoid paying the license fees that would otherwise be demanded.
Or, since a patent is a right to exclude, you have to face the more severe question of whether a big guy can exclude you. The owner of the patent that we're talking about can either demand a license fee, or he can stop you. There are examples of using the exclusionary nature of patents. I don't remember the name of the company, but there is a patent that goes to the use of microprocessors to control stage lighting. I'm not sure exactly what the patent covers, but one aspect that's interesting about the company that has this patent is that they do not allow anybody to use their invention. So, in effect if you want that kind of stage lighting, there's only one source.
In terms of spurring additional inventions, designing around a patent forces you to explore alternative methods or circuits to accomplish a similar function. I guess that forcing you to explore could be considered a spur, but I think there's a broader lesson which goes to the possibility of arrogance in terms of the attitude of a design group. One of the requirements to obtain a patent is that you are obliged to show how to practice the invention. I think that's what the word "patent" means, to lay open the method by which you achieve this result. The deal is the government grants you the right to exclude, for a limited time, in return for [you] showing the public how you did it. Well, that's the theory. Unfortunately, this theory does not get used very much.
Most engineering design groups do not feel that there is much to learn by reading patents. I feel that's unfortunate, because there is a huge amount to learn from the accumulated five million issued patents, just picking up the U. S. Patents alone. I think it comes with decades of experience to realize that it's not enough to just be a bright design guy. You should see what other people are doing. In particular, the lesson is two-fold: get the patents that are relevant and read those, and also do reverse engineering. That is, take your competitors' products apart, to the extent that you can understand how they work. If you do those two things, you will come up with better products. Unfortunately, I don't think very many engineering groups operate that way.
Have you seen any cases where patents worked in the opposite way, contributing to a potentially successful innovation's failure? I'm imagining a circumstance where a really good invention lies fallow because a particular company isn't able to exploit it and isn't willing to license it. Are there analogous cases you can think of?
No, I can't think of anything like that. The classic instance is where the company considering obtaining a patent decides it doesn't have a big enough budget or decides it can't deploy the engineering time in this "wasteful" manner. Then it ends up biting the dust because a competitor starts producing a clone or a cheaper version of what could have been a patented invention. That's what's really sad. I've been involved in a couple of cases where they could have had, in effect, a bartering chip, the patent as a tool to stay alive.
Can you mention any names?
I don't think so.
Okay, fair enough. What do you think the most influential patents in this industry have been? I mean there's obviously this business, this on-going business of the microprocessor/microcontroller. What have the really important patents in the industry been?
Well, that's an interesting question, I should really know that. I think what happens is we're so specialized that we only know about one area and furthermore we only know about what's currently in dispute in that area. It shouldn't be too hard for the diligent historian to figure out the answer though. Quite often there are press releases and articles announcing major licensing events. There are also annual reports. As a stockholder of minor note, I get the TI annual reports and they break out the revenue from patents. Most of it I think came from memory patents of some kind.
Do you get the sense that they have a large number of patents that generate in aggregate those numbers or do you think there is some key patent in there?
Well, it's sort of in-between those two. It's aggravating how accurate this is. If you have a thousand patents, there will be about fifty that become important. Several companies have portfolios in excess of a thousand patents. In the first type of negotiation that we talked about, those companies end up across a table, addressing each other's stack of about fifty or so patents. The situation of a single patent being spectacularly productive is unusual.
Why is that?
Well I think it's a combination of things. First of all, the patent is issued with the presumption of validity, but if there's a substantial amount of money involved, the guy who's accused of infringing that patent will be vigorous in attacking the validity of the patent. It's possible he may cause a court to decide it's not valid. So, if you go to court to enforce a patent, you are risking that patent: If the courts hold that patent is invalid, you can never enforce that patent again.
Is it true that the courts have traditionally not held patents to be valid?
Well, there are trends. We're going into legend here that's beyond my actual experience. The key date was 1982, when they formed the CAFC, the Court of Appeals for the Federal Circuit. Before that date, my understanding is that it went something like 70-30 against validity. Since that date, it's been 70-30 for validity. I'm sure that's a simplification, and there are probably going to be future trends that we don't understand yet. Recent Patent Office rule changes and CAFC rulings are changing patent news and views. We don't really understand what unintended side effects may be inbound. Another decade down the pipe, it may be a different situation.
One of the things that you probably have intimate knowledge of from studying the patents is the phenomenon of simultaneous invention, where two or more inventors come up with almost exactly the same thing at exactly the same time without direct knowledge of each other's work. Has that been prevalent in this industry? If so or if not, why do you think that is?
I think if you look below the surface, you'll find that that situation is a little bit more complex. The "separate, independent inventions" part is true. The "exactly the same" is probably not true. The effect is usually sorted out. If there is the "same invention," let's say, patented by two different people, then there is a procedure known as an Interference within the Patent Office, to determine who is the proper inventor. That's a black and white thing. There are specific rules that are applied to find out who is entitled to the patent. In the U. S., the basic rule is the one who is entitled to the patent is the one who invented it first, provided he reduced it to practice with due diligence. That is, without abandoning it and letting it lie fallow.
Now, in terms of, "Does this happen a lot?" and, "Why does this happen?", it goes back, in my opinion, to the needs. I described a scenario where these desktop calculator companies were demanding, in effect, more than we had the engineering capacity to provide. That, together with boredom re-doing similar things, provoked thoughts that led to important inventions. I doubt if that was a local matter in Texas. Other companies with similar MOS business probably experienced the same thing. You see, market forces tend to cause experiments in the same direction. Whether they end up with the same solution is a different question.
We've been focusing on patents a lot. I think I'll just open up the floor. Do you have any more general comments about this business of innovation? I know you outlined some stuff that we haven't addressed directly, so maybe this is the time to do that.
Well, just to get my outline on the record here, I was thinking about the processes that relate to innovation, and I came up with five: engineering, inventing, entrepreneuring, integrating and researching. We talked about most of those to some degree. Orthogonal to that list of activities, there is another way to categorize, and that's hardware/software. One thing that I think is important is how patents apply to software.
There have been recent decisions clarifying that software is patentable. Basically the courts have said, quit bothering us with the details of whether you use iron or silicon or bits. The question is not what it's constituted of; the question is whether it's new and whether it's not obvious. If it's new and not obvious, it's probably patentable.
Software is not different, despite what you may hear from programmers. Software is not different from any other kind of engineering. You can definitely make those kinds of distinctions. This code is new, this code is not. Or, this code may be new, but it's obvious. The same test that has traditionally, and I think quite equitably, distinguished patentable inventions from all other inventions applies equally well to software.
There are some other regulatory requirements. You can't patent an algorithm, but if you attach that algorithm to a valve, and [the algorithm] decides when the valve opens, then that becomes an apparatus, which falls into the statutory categories of what's patentable.
Anyhow, I'm boring you with anecdotal war stories, but the concern is not so much legal, because that is clear. But to get back to the attitude of engineering and software design groups, I think that there are some really sad examples of companies whose business is nominally software not obtaining patents when they should and when they probably could. There seems to be a curious attitude among software people that patenting is bad. I don't know if software people are Socialists or what, but most software people I know think it should be free. I'm sorry, but that's wrong. This is America! This is free enterprise! It is only through the patent statutes that one can declare "What I discovered, I'm entitled to exclusively." It's really sad to see a whole category of companies that will become more important than hardware development companies, adopting an attitude that's so negative about such a fundamental right. End of lecture.
One other area that deserves comment is the Internet, the World Wide Web and telecommunications. There is total chaos due to the recent Telecom Act and competing technologies like fiber optic and cable modem. My take on all that is that this is a wonderful time to be in engineering and inventing and entrepreneuring and all of these things. The world is being turned upside down. Literally nobody knows what it's going to look like, even as far as ten years out.
Care to make any predictions?
The main prediction is we're going to have a lot of fun. If you're a participant, you'll view it as fun, as exciting, as positive. If you're on the sidelines, it's going to scare the hell out of you. The main thing to hope for is that you get to participate in sorting out the chaos.
Very good. Thank you very much.