Archives:History of IEEE Since 1984 - TECHNICAL TOPICS


International Congresses

The first International Exposition of Electricity (Exposition internationale d'électricité) was held in 1881 at the Palais de l'Industrie on the Champs-Élysées in Paris, France. Exhibitors came from France, Germany, Italy, the Netherlands, the United Kingdom, and the United States. They displayed the newest electrical technologies such as arc and incandescent lamps, light fixtures, meters, generators, boilers, steam engines, and related equipment. (See Appendix 2: Select Electrical Congresses, Exhibitions, and Conferences.) As part of the exhibition, the first International Congress of Electricians featured numerous scientific and technical papers, including proposed definitions of the standard practical units of the volt, ohm, and ampere. This congress was a decisive step in the building of the modern International System of Units (SI).

Fig. 1-3. Image, Lighthouse exhibit at the International Exhibition of Electricity (Exposition Internationale d'électricité), Paris, 1881.
Fig. 1-4. Medal, The International Exhibition of Electrical Apparatus, Paris, 1881.

On 8 March 1881, Alexander Ramsey, U.S. Secretary of War, ordered Captain David P. Heap, Corps of Engineers, to sail for Europe in July for “the purpose of acquiring information respecting late inventions and improvements in electricity” by attending the International Exhibition of Electricity to be held at Paris from early August to mid-November 1881. On his return, he was to submit a detailed report for the use of the War Department.[1] The U.S. State Department appointed him honorary commissioner from the United States, and military delegate to the Congress of Electricians.

The first International Electrical Congress was held in conjunction with the electrical exhibition. The 1881 congress was one in a series of international meetings held between 1881 and 1904 in the then-new field of applied electricity. Initiated by the French government, the first meeting brought together official national representatives, leading scientists, engineers, industrialists, entrepreneurs, and others; subsequent meetings drew similar participants, and companies sent displays of their newest products. The primary aim was to develop reliable standards, in relation to both electrical units and electrical apparatus.


Earlier Discoveries

Before there were electrical engineers, and before anyone coined the word “electron,” there were observations of certain physical phenomena that needed explaining. In 1785, Charles Coulomb showed that the force of attraction between two charged spheres was inversely proportional to the square of the distance between them, lending precision to the field of electrical studies. In 1799, Alessandro Volta developed the first electrical battery. This battery, known as the Voltaic cell, consisted of two plates of different metals immersed in a chemical solution; stacked as alternating copper and zinc discs, with each pair of metals separated by flannel soaked in weak acid, it became the Voltaic pile. Volta's development of the first continuous and reproducible source of electrical current was an important step in the study of electromagnetism and in the development of electrical equipment.[2] The Voltaic pile stimulated so much scientific inquiry that, by 1831, when Michael Faraday built the first dynamo, the basis of electricity had been established.

INSERT Fig. 1-7. Five Pioneers in the Development of the Electric Dynamo and Motor. (Jehl, v. 2, p. 587.)

Volta’s battery made electricity easily available in the laboratory. In 1820, Hans Ørsted used one to pass current through a wire near a compass, causing the needle to swing to a position at right angles to the wire, the first indication that electric current and magnetism were related. Soon after, the mathematician André-Marie Ampère observed that a current-carrying coil of wire acts as a magnet, and developed equations predicting the force between two current-carrying conductors. This established the field of charges in motion, or electrodynamics. In 1827, the schoolteacher Georg Simon Ohm wrote a simple equation relating potential, current, and circuit resistance, now known as Ohm's law.
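In modern notation (Ohm himself expressed it differently), the relation links the three quantities directly:

```latex
V = I R
\qquad\Longleftrightarrow\qquad
I = \frac{V}{R}
```

where $V$ is the potential in volts, $I$ the current in amperes, and $R$ the resistance in ohms, the very units whose standard definitions the 1881 congress would later take up.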

One more discovery would be needed to set the stage for a new age of electrical invention. In 1821, Michael Faraday demonstrated that a magnet placed near a current-carrying wire caused the wire to move. That concept formed the basis of the electric motor. A decade later, he passed a current through one of two coils wrapped around an iron ring. When he varied the current in the first coil, it induced a current in the second. This experiment established the principles of the transformer and the rotating generator.

The theoretical and experimental basis for “engineering the electron” arrived with the work of James Clerk Maxwell. Between 1860 and 1871, at his family home of Glenlair in Scotland and at King’s College London, where he was Professor of Natural Philosophy, Maxwell conceived and developed his unified theory of electricity, magnetism, and light. A cornerstone of classical physics, the theory of electromagnetism is summarized in four key equations that now bear his name. Maxwell’s equations underpin all modern information and communication technologies. Maxwell built on the earlier work of many giants, including Ampère, Gauss, and Faraday, but with his experiments, theories, and publications he revolutionized the fields of electrical and optical physics and laid the groundwork for electrical, radio, and photonic engineering. The unification of the theories of electricity, magnetism, and light, which comes directly from Maxwell’s equations, clearly sets his work apart from similar achievements of the time.[3]

In 1865, Maxwell mathematically unified the laws of Ampère and Faraday and expressed in a set of equations the principles of the electromagnetic field. He showed the electric field and the magnetic field to be indissolubly bound together. Maxwell also predicted that acceleration or oscillation of an electric charge would produce an electromagnetic field radiating outward from the source at a constant velocity. This velocity was calculated to be about 300,000 km/s, the velocity of light. From this coincidence, Maxwell reasoned that light, too, was an electromagnetic phenomenon.
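In modern notation, the velocity Maxwell computed follows from the electric and magnetic constants of free space:

```latex
c = \frac{1}{\sqrt{\mu_0 \,\varepsilon_0}}
\approx 3.00 \times 10^{8}\ \mathrm{m/s}
\approx 300{,}000\ \mathrm{km/s}
```

That two constants measured in purely electrical and magnetic experiments combine to give the measured speed of light is the “coincidence” from which Maxwell drew his conclusion.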

Maxwell's prediction of propagating electromagnetic fields was experimentally proven in 1887, when Heinrich Hertz, a young German physicist, discovered electric waves at a distance from an oscillating charge produced across a spark gap. By placing a loop of wire with a gap near the oscillator, he obtained a spark across the second gap whenever the first gap sparked. He showed that waves of energy produced by the spark behaved as light waves do; today we call these radio waves. Faraday established the foundation for electrical engineering and the electrical industry with his discovery of induction; Maxwell, in predicting the propagation of electromagnetic fields through space, prepared the theoretical base for radio and its multitude of uses. With an understanding of the fundamental laws of electricity established, the times were ripe for the inventor, the entrepreneur, and the engineer.

The Electric Telegraph and Telephone: Instantaneous Communication

At the time of Faraday's discoveries in the 1820s, information traveled at the speed of a ship or of a messenger on foot or horseback. Not much had changed since 490 B.C.E., when the runner Pheidippides carried the news of the battle of Marathon to Athens. Semaphore relay systems could overcome distances, but only if the weather cooperated. Messages by sea traveled so slowly that British and American forces met in the Battle of New Orleans two weeks after the Treaty of Ghent ending the War of 1812 had been signed. By the middle of the nineteenth century, however, advances in communication and transportation technology created a more interconnected world in which steam-powered ships and railways accelerated the pace of travel, and the telegraph and underwater cables quickened the pace of communication.

William Cooke, an Englishman studying anatomy in Germany, turned his thoughts to an electric telegraph after he saw some electrical experiments at Heidelberg. In London, he sought the help of Michael Faraday, who introduced him to Charles Wheatstone of King’s College, London, who was already working on a telegraph. In 1838, they transmitted signals over 1.6 km (1 mi). Their system used five wires to position magnetic needles to indicate letters. Because of discussions with the American Joseph Henry in England in 1837, Wheatstone later adapted electromagnets to a form of telegraph that indicated directly the letters of the alphabet.

In September 1837, while visiting New York University (NYU), Alfred Vail saw Professor Samuel F. B. Morse, the artist-inventor, demonstrate his "Electro-Magnetic Telegraph," an apparatus that could send coded messages through a copper wire using electrical impulses from an electromagnet. A skilled mechanic, Vail saw the machine's potential for communications and offered to assist Morse in his experiments for a share of the inventor's profits. Morse was delighted when Alfred persuaded his father, Judge Stephen Vail, a well-to-do industrialist, to advance $2,000 to underwrite the cost of making the machine practical. The judge also provided them room in his Speedwell Iron Works at Morristown, New Jersey, for their experiments. Thus, while Morse continued to teach in New York, Alfred went to work eliminating errors from the telegraph equipment.

Alfred had a written agreement with Morse to produce a working, practical telegraph, at Vail's expense, by 1 January 1838. Vail and his assistant, William Baker, worked feverishly behind closed doors to meet the deadline. Morse kept track of their progress, mostly by mail. New Year’s Day came without a completed, working model. Morse, now visiting, became insistent. Alfred knew that if success did not come soon his father would cut off his financial support and the experiments would end. Five days later, the telegraph was ready for demonstration. Alfred and Morse invited the judge to the workshop.

Judge Vail wrote down a sentence on a slip of paper and handed it to his son. Alfred sat at the telegraph machine, manipulated metal knobs, and transmitted the judge's message by a numerical code for words. As the electrical impulses surged through the two miles of wire looped through the building, Morse, at the other end, scribbled down what he received. The message read, "A patient waiter is no loser." (IEEE commemorated the demonstration of practical telegraphy with IEEE Milestone 13, dedicated on 7 May 1988.[4])

Fig. 1-8. Photo, Morse’s telegraph.

INSERT Fig. 1-9. Photograph of Practical Telegraphy IEEE Milestone No. 13 Ceremony.

In 1839, Morse was encouraged by a discussion with Henry, who backed Morse’s project enthusiastically. Morse adopted a code of dots and dashes for each letter and his assistant, Alfred Vail, devised a receiving sounder. Morse sought and received a U.S. federal government grant of $30,000 in 1843 for a 60 km (38 mi) line from Baltimore, Maryland, to Washington, D.C., along the Baltimore and Ohio railroad that would prove the efficacy of his system. On 24 May 1844, this line was opened with the famous message, "What hath God wrought?" Telegraph lines were rapidly built, and by 1861, they spanned the North American continent, many parts of the developed world, and Europe’s empires.

Fig. 1-10. James T. Lloyd, Lloyd's railroad, telegraph & express map of the United States and Canadas from official information, 1867.

In the mid-nineteenth century, inventors and entrepreneurs looked to span continents, rivers, and oceans, but geography, distance, and the limits of existing theory and technology were obstacles to the expansion of telegraph communication networks. In 1851, Morse laid a cable under New York harbor, and another was laid across the English Channel from Dover, England, to Calais, France. In the 1850s, a number of attempts were made to lay a cable between Ireland and Newfoundland. In 1858, the American Cyrus Field and his English associates succeeded briefly: their cable carried a congratulatory message from Queen Victoria to U.S. President James Buchanan, but it failed after a few weeks because the mechanical design and insulation of the 3,200-km (2,000-mi) line were faulty. In 1864, two investors put up the capital, and the Great Eastern, five times larger than any vessel afloat at the time, was offered to Field to lay another cable. She could carry the entire new cable, which weighed 7,000 tons; its 2,600 miles could be lowered in a continuous line from Valentia, Ireland, to Heart’s Content, Newfoundland. In June 1865, the Great Eastern arrived at Valentia and began laying the cable. Within 600 miles of Newfoundland, the cable snapped and sank. After several unsuccessful attempts to recover it, the Great Eastern returned to Ireland.

Fig. 1-11. Laying the Transatlantic Cable, 1866.
Fig. 1-11A. Barker, Chart of the Submarine Atlantic Telegraph, Philadelphia: W.J. Barker & R.K. Kuhns, 1858.
Fig. 1-11C. Alexander Gross and Geographia Ltd., The Daily Telegraph Map of the World on Mercator's Projection, London, c. 1918. (LOC)
Fig. 1-12. Map. Major telegraph lines, 1891.

William Thomson (later Lord Kelvin) had developed an effective theory of cable transmission, including the weakening and the delay of the signal to be expected. It took seven years of study of cable structure, cable-laying procedures, insulating materials, receiving instruments, and other details before a fully operational transoceanic cable was laid in 1866. A new company, the Anglo-American Telegraph Co., raised £600,000 to make another attempt at laying the cable. On Friday, 13 July 1866, the Great Eastern steamed westward from Valentia with 2,730 nautical miles of telegraph cable in her hold.[5] Fourteen days later, 1,852 miles of this cable had been laid across the bottom of the ocean, and the ship dropped anchor in Trinity Bay, Newfoundland.[6] The successful landing of the cable established a permanent electrical communications link that altered personal, commercial, and political relations between people across the Atlantic Ocean. The expedition also succeeded in raising from the seabed the broken end of the cable lost in 1865, and connected it to its terminus in Newfoundland. By 1870, approximately 150,000 km (90,000 mi) of underwater cable had been laid, linking all inhabited continents and most major islands. Later, additional cables were laid from Valentia, and new stations opened at Ballinskelligs (1874) and Waterville (1884), making County Kerry a focal point for Atlantic communications. Five more cables between Heart's Content and Valentia were completed between 1866 and 1894. This station continued in operation until 1965.
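Thomson modeled the cable as a distributed series resistance and shunt capacitance; his "law of squares" states that signal retardation grows with the square of the cable's length:

```latex
\tau \;\propto\; R'\,C'\,l^{2}
```

where $R'$ and $C'$ are the resistance and capacitance per unit length and $l$ is the length. Doubling a cable thus quadruples the delay, which is why a 2,000-mile ocean cable behaved so differently from a short landline and why conductor size, insulation, and sensitive receiving instruments mattered so much.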

INSERT Fig. 1-13 and 1-14. Photos, IEEE Milestone No. 5 & No. 32 dedication ceremonies.

After the U.S. Civil War, the telegraph business expanded in conjunction with industrial and commercial growth in the United States. An economic problem soon became apparent: simultaneous messages required individual lines, so more and more wires were needed over each route to handle the increased number of messages. This multiplicity of lines tied up capital, and the networks of wires cluttered the space above city streets. Great rewards awaited an inventor who could make one line carry several messages simultaneously. In 1872, Western Union paid handsomely for the duplex patent of Joseph Stearns, which doubled the capacity of a telegraph line. Telegraph networks continued to expand, and by 1876 there were 400,000 km (250,000 mi) of lines on 178,000 km (110,000 mi) of routes in the United States.

In 1872, Alexander Graham Bell read a newspaper account of the Stearns duplex telegraph and saw an opportunity for quick fortune in a multiplex telegraph. By extending the work of Hermann von Helmholtz, who had used vibrating reeds to generate frequencies, Bell hoped to make each frequency carry an individual telegraph message. The son of a professor of elocution at the University of Edinburgh, Bell had arrived in Boston the year before to teach at the School of the Deaf.

Elisha Gray, superintendent of the Western Electric Manufacturing Company in Chicago, also saw the possibilities for a multiplex telegraph, especially one using different frequencies. In 1874, Gray discovered that an induction coil could produce musical tones from currents created by a rubbing contact on a wire, so he resigned his post to become a full-time inventor. He used magnets connected to a diaphragm and located near an induction coil as receivers and transmitters. Gray demonstrated some of his equipment to Joseph Henry in Washington in June 1874. Henry, in a letter to John Tyndall at the Royal Institution in London, stated Gray was "one of the most ingenious practical electricians in the country."

In March 1875, Bell met Henry and impressed him with his understanding of acoustical science. Henry told Bell that Gray “was by no means a scientific man,” and he encouraged Bell to complete his work. When Bell protested that he did not have sufficient electrical knowledge, Henry exclaimed, “Get it.” Nevertheless, in 1875, Gray’s multiplex equipment was more advanced than Bell’s. Gray, the practical man, saw the rewards of the multiplex telegraph and dismissed the telephone as a mere scientific curiosity; Bell, the man with the scientific background, saw the telephone’s potential as a device that could transmit speech.

INSERT Fig. 1-15. Photo, Bell telephone exhibited at Centennial Exhibition, 1876.

On 14 February 1876, Gray’s lawyers filed a caveat in the U.S. Patent Office, which as a defensive measure included a description of a telephone transmitter, but Bell’s lawyer had filed a patent application on the telephone a few hours earlier. Less than one month later, on 10 March 1876, at his laboratory in Boston, Bell transmitted the human voice with the famous sentences "Mr. Watson, come here. I want you," addressed to his assistant down the hall. Bell developed instruments for exhibition at the Centennial Exhibition held that summer in Philadelphia, which attracted great interest and fostered commercial development. In early 1877, commercial installations were made, and the first Bell switchboard was placed in operation in New Haven, Connecticut, in January 1878, with twenty-two subscriber lines. Indeed, it seemed some people were willing to pay for speech communication, albeit primarily for business purposes. In time, the telephone became a common household technology for the upper and middle classes.

Fig. 1-16. Women Using Early Home Telephone, 1904.

The potential of the telegraph and telephone attracted another young inventor to the field of electrical communication, and he was to have a profound impact. Thomas Alva Edison was a self-taught, highly successful inventor who had established a laboratory at Menlo Park, New Jersey. He had produced a carbon microphone for the telephone system, and he was always interested in projects with commercial prospects. Edison was attracted to the possibilities of an incandescent lamp. George Barker, Professor of Physics at the University of Pennsylvania, was an acquaintance fascinated by the work of Joseph Swan on incandescent lamps, and he passed this enthusiasm to Edison. In October 1878, Edison embarked on what was to be a hectic year of incandescent lamp research. Edison's first lamps used a platinum filament and operated in series at ten volts each. He soon realized that parallel connections, with each lamp separately controlled, would be more practical. Parallel connection, however, raised the total current to the sum of the individual lamp currents, which made larger conductors necessary.

INSERT Fig. 1-17. Edison’s Menlo Park Laboratory, 22 Feb. 1880. (Jehl, v. 2, p. 459.)

INSERT Fig. 1-18. The steamship S.S. Columbia built in 1880 for Henry Villard and equipped with an Edison isolated electric light plant. (Jehl, v. 2, p. 556.)

INSERT Fig. 1-19. Original Edison dynamo used on S.S. Columbia, and later a part of the restored Menlo Park collection at Henry Ford’s museum in Dearborn, Michigan. (Jehl, v. 2, p. 559.)

The Menlo Park laboratory staff increased as more men came in to speed the work on electric light research. Among them was Francis Upton, a recent M.S. in physics from Princeton University, who was added at the urging of Grosvenor Lowrey, Edison's business and financial adviser. Upton focused on mathematical problems and analyses of equipment. By the fall of 1878, Edison and Upton realized that without a high electrical resistance in the lamp filament, the current in a large parallel system would require conductors so large that the investment in copper would be prohibitive. They considered the 16-candlepower gas light their competition, and they found that an equivalent output required about 100 watts. Using a 100-volt distribution voltage (later 220 V in a three-wire system), each lamp would need a resistance of about one hundred ohms. Although they had a platinum filament of sufficient resistance, Upton’s calculations indicated that the cost of platinum would make the lamp uncompetitive. Edison turned his attention to the search for a high-resistance filament, and after trials of many carbonized materials, he had success on 21 October 1879 with a filament made from carbonized cotton thread. In November 1879, a longer-lived filament was made from carbonized bristol board, and a little later Edison produced even longer-lived lamps with filaments of carbonized split bamboo.
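The hundred-ohm figure is simple circuit arithmetic: for a lamp dissipating about 100 W at 100 V,

```latex
R = \frac{V^{2}}{P} = \frac{(100\ \mathrm{V})^{2}}{100\ \mathrm{W}} = 100\ \Omega,
\qquad
I = \frac{P}{V} = 1\ \mathrm{A}.
```

The higher the filament resistance at a given power, the lower the current each lamp draws, and the less copper the distribution conductors require, which is exactly the economic constraint Edison and Upton were working against.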

INSERT Fig. 1-20. Thomas Edison and Menlo Park Laboratory Staff, 1879.

INSERT Fig. 1-21. New Year’s Eve demonstration at Menlo Park, 31 Dec. 1879.

INSERT Fig. 1-22. Collage of electric light inventors – Brush, Swan, Maxim, etc.

INSERT Fig. 1-23. English Contemporaries of Edison. (Jehl, v. 2, p. 538.)

INSERT Fig. 1-24. Great European Contemporaries of Edison. (Jehl, v. 2, p. 539.)

INSERT Fig. 1-25. American Contemporaries of Edison. (Jehl, v. 2, p. 461.)

INSERT Fig. 1-26. Scientific Contemporaries of Edison. (Jehl, v. 2, p. 463.)

During the fall of 1879, Edison held many public demonstrations of his light bulb and electric light system at his laboratory in Menlo Park, New Jersey, for friends, investors, reporters, and others eager to see the newest efforts of the Wizard of Menlo Park. On 20 December 1879, Edison welcomed reporters and members of the New York City Board of Aldermen, who had the power to grant franchises to construct an electric light system in the city. The Edison Electric Light Company funded Edison’s lamp research, and it placed enormous pressure on Edison to bring the light bulb to market. Edison, ever the showman and master of public relations, arranged for the press to announce the invention on 21 December. Later, an impressive public demonstration with five hundred lamps was arranged at Menlo Park, and the Pennsylvania Railroad ran special trains from New York City for the event.

In the fall of 1881, Edison displayed lamps and his "Jumbo" generator of 50 kW at the International Electrical Exhibition in Paris, to a very positive reception. Although the lengthy field structure of the Edison machines was criticized as poor magnetic design by Edward Weston, who also manufactured generators at that time, Edison and Upton had grasped the necessity for high-power efficiency, obtainable in a generator having low internal resistance.

Generator design was not well understood at the time. This was particularly true of the magnetic circuit. The long magnets of Edison's "Jumbo" were an extreme example of the attempt to produce a maximum field. In 1873, Henry Rowland had proposed a law of the magnetic circuit analogous to Ohm's law. However, it was not until 1886 that the British engineers, John and Edward Hopkinson, presented a paper before the Royal Society that developed generator design from the known properties of iron. The full development of theories of magnetic hysteresis had to wait for Charles Proteus Steinmetz in 1892. Again, inventive engineering was outpacing scientific understanding, but for the impact of an invention to be completely realized, more discovery would be needed.
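Rowland's analogy, later formalized as what is now called Hopkinson's law, maps term for term onto Ohm's law:

```latex
\mathcal{F} = \Phi\,\mathcal{R},
\qquad
\mathcal{R} = \frac{l}{\mu A}
```

with magnetomotive force $\mathcal{F}$ (ampere-turns) in place of electromotive force, magnetic flux $\Phi$ in place of current, and reluctance $\mathcal{R}$ (path length $l$, permeability $\mu$, cross-section $A$) in place of resistance. It was this kind of quantitative treatment of the magnetic circuit that the Hopkinsons brought to generator design in 1886.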

The first central power station was that of the California Electric Light Company in San Francisco in 1879. It was designed to supply twenty-two arc lamps. The first major English central station was opened by Edison on Holborn Viaduct in London in January 1882, supplying 1,000 lamps. The gas lighting monopoly, however, lobbied successfully for certain restrictive provisions in the British Electric Lighting Act, which delayed development of electrical illumination in the country.

Fig. 1-27. Cartoon. “Light thrown on a dark subject (Which is Bad for the Gas Companies),” Puck, 23 Oct. 1878.
Fig. 1-27A. “Edison versus the Sun,” American Gas Light Journal, 2 Dec. 1878.
Fig. 1-28. Cartoon – The Electric Fizzle re: gas versus electric illumination.

In the late 1800s, manufactured gas and kerosene dominated lighting, but Edison’s invention of a practical incandescent bulb created a powerful incentive for electrical distribution.

Edison and Upton were also working on plans for a central power station to supply their American lamp customers. They estimated a capital investment of $150,000 for boilers, engines, generators, and distribution conductors. The buried conductors were the largest item at $57,000, of which the copper itself was estimated to require $27,000. This made clear to Edison that the copper cost for his distribution system would be a major capital expense. To reduce the copper cost, Edison contrived first (in 1880) a main-and-feeder system and then (in 1882) a three-wire system. Annual operating expenses were placed at $46,000, and the annual income from 10,000 installed lamps was expected to be $136,000, leaving a surplus of $90,000 for return on patent rights and interest on capital.

To use his system to best advantage, Edison sought a densely populated area, and he selected the Wall Street section of New York. He placed in service the famous Pearl Street station in New York on 4 September 1882, for which another "Jumbo" of larger size was designed. Six of these machines were driven by reciprocating steam engines supplied with steam from four boilers. They produced an output of about 700 kW to supply a potential load of 7,200 lamps at 110 V. The line conductors were of heavy copper, in conduits installed underground.
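The Pearl Street figures are mutually consistent at roughly 100 W per lamp:

```latex
7{,}200\ \text{lamps} \times 100\ \mathrm{W} = 720\ \mathrm{kW} \approx 700\ \mathrm{kW}
```

and at 110 V a fully loaded system would carry on the order of 6,500 A in total, which explains the heavy copper conductors and Edison's preoccupation with copper costs.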

INSERT Fig. 1-29. Lamps of Edison’s Principal Rivals, including Sawyer-Mann, Farmer, Maxim, and Swan. (Jehl, v. 2, p. 476.)

INSERT Fig. 1-30.The Edison dynamo, large, displayed at the International Electrical Exhibition, Paris, 1881. (David Porter Heap)

Fig. 1-31. Harper’s Weekly illustration, installing underground conduits.

INSERT Fig. 1-32. Pearl Street central station, 1882. The building at 257 Pearl St. housed the generators and all the accessory equipment, while the offices and storehouses were at 255. (Jehl, v. 3, p. 1040.)

The first hydroelectric power plant (at first an isolated plant rather than a true central station) followed quickly, going into service on 30 September 1882 on the Fox River at Appleton, Wisconsin, with a 3-m (10-ft) head and two Edison generators of 12.5 kW each. Its direct-current generator could light two hundred and fifty sixteen-candlepower lamps, each equivalent to 50 watts. The generator operated at 110 volts and was driven through gears and belts by a water wheel operating under a ten-foot fall of water. As the dynamo gained speed, the carbonized bamboo filaments slowly changed from a dull glow to incandescence. The “miracle of the age” had been performed, and the newspapers reported that the illumination was “as bright as day.” The Vulcan Street Plant was the precursor of many projects to follow, in which civil, mechanical, and electrical engineers cooperated to provide hydroelectric power for the United States.[7]
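The Appleton numbers likewise agree: 250 lamps at 50 W each is

```latex
250 \times 50\ \mathrm{W} = 12.5\ \mathrm{kW},
```

exactly the rating of one of the two Edison generators installed there.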

Fig. 1-33. Photo, Edison central station constructed in Appleton, Wisconsin.

INSERT Fig. 1-34. Photo, IEEE Milestone dedication ceremony on 15 Sept. 1977.

Power generating stations proliferated in locations close to the potential consumers of electricity. Over a few decades they became consolidated into a handful of large companies (see below). The telegraph industry also grew rapidly. By 1880, the Western Union Company, with its network of telegraph lines, controlled the largest segment of the electrical industry. The rise of big electrical companies set the stage for the formation of a profession and high-profile executives who would lead the formation of what eventually became IEEE.


Central Stations: Roots of the Electric Utility Industry

As an entrepreneur in the incandescent electric light field, Edison contended with three main factors: cost, quality, and reliability. In order to replace other forms of artificial illumination, an incandescent system had to provide a superior product at a competitive price. Edison, Swan, Maxim, and other manufacturers of electric lamps intended to sell more than a simple light bulb. Central electrical generating stations, the pioneers of today's utility industry, were developed in conjunction with the electric lamp. The number of local illuminating companies operating commercial central stations grew rapidly, because low-voltage direct current could serve only small areas. Because of the limitation on transmission distance, many stations were needed to supply cities such as New York or London. By 1900, there were more than 2,800 small DC stations in the United States. However, by 1885 inventors were already exploring a new option, alternating current (AC).

The wide reach of electricity enabled elevators, thus stimulating the building of skyscrapers. Electric lights, installed in many city streets by the 1890s, made them safer. Lighting also allowed businesses to operate longer hours and gave rise to new forms of entertainment, such as movies, amusement park rides and—eventually—radio and television. At the same time, electrification of society also exacerbated social and economic inequities between the cities and the countryside, where entrepreneurs saw no profit in extending electric power to widely dispersed households.

Although Edison patented alternating current equipment, and he developed a high-resistance lamp filament to reduce line losses, he remained committed to direct current, and the companies he had founded to manufacture components continued to flourish. Indeed, he laid the foundation for the electrical manufacturing industry. The so-called Wizard of Menlo Park and his associated companies had much invested, and, all things considered, contact with a direct current line was less likely to be fatal than contact with an alternating current transmission line. In early 1888, the Edison companies went on the offensive, arguing that DC was more reliable, more compatible with existing equipment, and, most importantly, safer. In addition, DC could be metered, while AC customers paid by the lamp, which gave them no incentive to turn lamps off even when they were not needed. In 1888, however, Oliver B. Shallenberger of Westinghouse developed a magnetic disk meter for AC power.

In 1888, Nikola Tesla received patents on a polyphase AC motor and presented a paper at an AIEE meeting. Westinghouse purchased the patents and hired the inventor. It took a few more years to solve some of the problems of the induction motor and to develop the two- and three-phase systems of distribution for AC power. Benjamin Lamme makes clear in his memoir that it took a great deal of work by his own and Westinghouse's other teams of engineers to turn Tesla's paper polyphase dynamo into commercial products. Tesla was certainly useful as a consultant, but he never commercialized any of his patents himself.

The promise of a solution to the motor problem removed a second major argument for DC advanced by Edison's interests, and the furor over the safety of AC began to subside as well. In the summer of 1891, Westinghouse constructed a 19-km (12-mi) transmission line from Willamette Falls to Portland, Oregon, operating at 4,000 V. In the same year, a 3,000-V, 5-km (3-mi) line supplied a motor at Telluride, Colorado.

Fig. 1-42. Tesla’s AIEE application.
Fig. 1-43. Tesla's patent #US381968.

Alternating current power stations, with their coal smoke and particulates, could be located near industrial plants to supply their loads, and transmission lines could supply residential and office lighting and elevator loads elsewhere in town. Industrial use of electricity developed rapidly after Tesla, aided by C. F. Scott and B. G. Lamme at Westinghouse, commercialized the AC induction motor in 1892 and Elihu Thomson developed the repulsion-induction motor for GE in 1897.

In 1895, the Niagara Falls project showed that it was practical to generate large quantities of AC power and to transmit it to loads elsewhere. While some of the electricity was transmitted twenty miles to Buffalo, most of the electricity generated by this power station was used in close proximity to the falls at electrochemical and aluminum processing plants.

Fig. 1-44. Niagara Falls Power Company crew.
Fig. 1-45. Map of industrial customers near the Niagara central station.
Fig. 1-46. Willy Pogany’s mural, “The Birth of Power,” Niagara Falls Power Company, Source: Dean Adams 1927. V. 1, frontispiece.
Fig. 1-46A. Description of “The Birth of Power,” from Adams, Niagara Power, 1927.

Westinghouse won the contract based partly on its extensive AC experience. After much debate, the specifications called for three alternators, 5,000 hp each, two-phase, 2,200 V, and 25 Hz. The choice of frequency had been based on the characteristics of the prime movers driving the generators, but AC motors required lower frequencies. In 1889-1890, Lewis B. Stillwell proposed 3,600 cycles per minute, or 60 Hz, which was acceptable for both lamp and motor loads. When the Niagara alternators were being designed, a compromise was reached at 25 Hz, which suited motor applications and electrolytic refining but produced perceptible flicker in incandescent lamps. In later years it produced a very objectionable flicker in fluorescent lamps, and areas supplied with 25 Hz have since largely been converted to 60 Hz operation.

On the European continent, progress also continued apace. In 1891, soon after Willamette, the installation of a three-phase AC line at 30,000 V, carrying 100 kW over 160 km (100 mi) between Lauffen and Frankfurt in Germany, created much interest and further advanced the AC cause. Oerlikon of Switzerland and Allgemeine Elektricitäts-Gesellschaft (AEG) showed that their oil-immersed transformers could safely operate at 40,000 V.
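The case for high-voltage transmission rested on a simple relationship: for a given power, raising the voltage lowers the current, and line loss falls with the square of the current. A minimal sketch of that arithmetic follows; the line resistance is an illustrative assumption, not a figure from the Lauffen-Frankfurt line.

```python
def line_loss_fraction(power_w, voltage_v, line_resistance_ohm):
    """Fraction of transmitted power lost as I^2*R heating in the line
    (unity power factor assumed for simplicity)."""
    current = power_w / voltage_v                 # I = P / V
    loss_w = current ** 2 * line_resistance_ohm  # P_loss = I^2 * R
    return loss_w / power_w

# 100 kW delivered over a line with an assumed 10-ohm loop resistance
low = line_loss_fraction(100_000, 2_000, 10)    # distribution-level voltage
high = line_loss_fraction(100_000, 30_000, 10)  # Lauffen-Frankfurt-class voltage

# Raising the voltage 15x cuts the loss fraction by 15^2 = 225x.
print(low, high, low / high)
```

Under these assumed numbers, the 2,000-V line dissipates 25% of the power it carries, while the 30,000-V line loses about 0.1%, which is why the Lauffen-Frankfurt demonstration was so persuasive.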

Even after Niagara Falls cemented the American lead in technology, many advances continued to come from Europe. Walther Nernst of Göttingen developed the Nernst lamp, a light source of heated refractory oxides, in 1899. Siemens and Halske patented a tantalum filament lamp in 1902, and Carl Welsbach produced the osmium filament lamp. Steinmetz saw these developments as competitive threats to the lucrative carbon lamp business of General Electric (GE) at Schenectady, and he proposed a laboratory, divorced from production problems, devoted to the scientific fundamentals underlying company products. The General Electric Research Laboratory was established in 1901, and that fall Willis R. Whitney, a Leipzig-trained assistant professor of chemistry at M.I.T., arrived as its first director. Edison had already pioneered what might be considered the first research and development laboratory at Menlo Park, New Jersey (and later in West Orange, New Jersey), but the fact that the now large and growing electrical industry saw the need to do research at the corporate level shows the significance of the new industry.

The reciprocating steam engine, usually directly connected to the generator, reached 5,000 hp by 1900. The 10,000-hp engines built for Ferranti's Deptford Station required the casting of the heaviest steel ingots in the history of British steelmaking, suggesting that much greater engine sizes might not be possible. The fundamental challenge was that the reciprocating engine and the electric generator were a mechanical mismatch: the engine supplied an intermittent torque, whereas the generator and its load called for a continuously applied torque. The mismatch was smoothed by storing rotational energy in large flywheels, but these took up space and were necessarily heavy and costly. The constant torque of the steam turbine was an obvious answer to the need. Turbine development had been carried on for many years, and in 1883 de Laval in Sweden designed a practical machine operating on the impulse principle, the wheel being struck by high-velocity steam from a nozzle. Rotational speeds as high as 26,000 rpm were achieved, but power output was limited by the gearing needed to reduce the speed to one suitable for generators. By 1889, however, de Laval had a turbine that drove a 100-kW generator.

Above 40,000 volts, the line insulators posed a limit, being pin-mounted porcelain of considerable size and fragility. In 1907, H. W. Buck of the Niagara Falls Power Company and Edward M. Hewlett of General Electric solved the problem with suspension insulators composed of a string of porcelain or glass plates whose length could be increased as the voltage was raised. C. F. Scott and Ralph Mershon (AIEE President, 1912-1913) of Westinghouse investigated another significant problem. Scott had noted luminosity accompanied by a hiss or crackling sound on experimental lines at night when operating above 20,000 volts. The phenomenon was the cause of considerable energy loss, enough to justify further study, which these men undertook on the Telluride line in 1894. They concluded that this "corona loss" was due to ionization of the air at the high field intensities created around small-diameter wires. The power loss appeared to increase rapidly above 50,000 volts, posing a potential limit to high-voltage AC transmission. Scott's 1899 report of the Telluride work interested Harris J. Ryan, who showed that the conductor diameter and spacing might be increased to reduce the field intensity and hence the corona loss. After this work, conductor diameters increased and line voltages reached 220 kV by 1923.

The invention of the rotary converter—a machine with a DC commutator on one end and AC slip rings on the other—helped to moderate the AC/DC battle, and allowed for an orderly transition of the industry from the era of electric light to the era of electric light and power. Westinghouse used this machine in its universal electrical supply system, based on polyphase AC, which it displayed at the Chicago Exposition of 1893. The general use of AC equipment was also aided in 1896 by a patent exchange agreement between General Electric and Westinghouse by which rational technical exchange became possible.

Some utility executives foresaw the need to bring order to the supply of electricity by consolidating small stations and their service areas into larger ones. In Chicago in 1892, Samuel Insull had left the dual position of vice president of the new General Electric Company and manager of its Schenectady works to become president of the Chicago Edison Company. Believing in the economy of large units, Insull's first step in Chicago was to enlarge the system by building the Harrison Street Station in 1894, with 2,400 kW of generator capacity. By 1904, the plant had been expanded to 16,500 kW, using 3,500-kW units. With so much generating capacity available, a large market had to be found, so Chicago Edison acquired twenty other companies in the Chicago area, becoming the Commonwealth Edison Company in 1907.

Insull and his chief engineer, Louis Ferguson (AIEE President, 1908-1909), saw that the space requirements and weight of the reciprocating steam engines at Harrison Street limited the maximum rating of that station. Traveling in Europe, Ferguson saw many advances in steam turbines. Insull worked out a risk-sharing arrangement with GE to build a 5,000-kW steam turbine for a new station at Fisk Street, which went into service in October 1903. The unit, vertical in design, was one-tenth the size and one-third the cost of the reciprocating engine-generator initially scheduled for the plant. Although not as efficient as expected, its lower cost and subsequent improvements made later units satisfactory. In 1909, the original 5,000-kW units at Fisk Street were supplanted by 12,000-kW sets that required virtually no increase in floor space.

Insull, on an 1894 European trip, met Arthur Wright, then the manager of the Brighton municipal station. Wright had been influenced by Dr. John Hopkinson, a world-renowned electrical authority, who had communicated to him ideas concerning the optimum loading of a system and the scheduling of customer tariffs to cover not only the cost of energy delivered but also the cost of the capital needed to maintain the system. Wright had developed a metering system that measured not only each customer's use of energy but also the extent to which the customer used his installed capacity, or maximum demand on the system. From this, Insull learned that rates had to recognize both energy costs and capital costs for each customer. Load diversity and load factor were additional management principles introduced by Insull before 1914.

Wireless Telegraphy and the Birth of Radio

While advances were being made in electric power, telegraphy, and telephony, scientists and engineers wondered whether Hertzian waves could be used to transmit electromagnetic signals through the air without recourse to wires. In the 1890s, Tesla worked on the problem, as did Reginald Fessenden in Canada and Aleksandr Popov in Russia. The most commercially successful was Guglielmo Marconi, an Italian with an Irish mother, who worked on wireless telegraphy first in Italy, then the U.K., then North America. Because of the discontinuity of the electromagnetic waves produced by an electrostatic spark, Marconi transmitted the dots and dashes of Morse code. Receivers were likewise limited: originally coherers, in which metal filings were moved by electromagnetic forces, and later natural semiconducting crystals of carborundum, silicon, and galena. After sending and receiving messages over increasing distances in the 1890s, in 1901 Marconi apparently succeeded in sending the letter “S” across the Atlantic Ocean. Despite these achievements, it would take more science and engineering to make radio a revolutionary means of communication, with all that implied for the world’s nations and societies.

INSERT Fig. 1-47. Marconi and his ship Electra (in MOH exhibit)

In 1897, J. J. Thomson announced a startling discovery. He had been trying to explain the nature of the cathode ray—a beam that occurs when an electric current is driven through a partially evacuated glass tube, or Crookes tube. He found that the subatomic particles that make up the beam, now called electrons, were more than 1,000 times lighter than the hydrogen atom. Thomson was awarded the Nobel Prize in 1906 “in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases.”

The identification of the electron cleared up the puzzle of the Edison effect, which Thomas Edison had observed in 1880 while working on light bulbs with carbonized bamboo filaments. After glowing for a few hours, carbon from the filament would move through the vacuum and collect on the inside wall of the bulb. Edison found that the carbon had a charge, suggesting that electricity was moving through the vacuum. Because the electron had not yet been discovered, Edison could not explain the phenomenon. It later became the basis of the electron tube (often called the vacuum tube), which would soon have a tremendous effect on the world. Engineering based on these tubes came to be called electronics.

In 1904, John A. Fleming of England used the Edison effect to build a rectifier to detect radio waves. This two-element tube became known as the diode; it produced an audible signal from a radio wave. The diode was stable but could not amplify the electromagnetic energy that a connected antenna received. In October 1906, Lee de Forest presented an AIEE paper on what he called an “Audion,” a three-element tube, or triode. When evacuated, the Audion enabled amplification of a modulated, continuous radio wave received through an antenna. Circuits using triodes became the key components of all radio, radar, television, and computer systems before the commercialization of the transistor in the 1950s.

In 1939, William Shockley at AT&T’s Bell Labs revived the idea of the semiconductor amplifier as a way to replace vacuum tubes. Under Shockley’s management, John Bardeen and Walter Brattain demonstrated in 1947 the first such device: the point-contact transistor, with two metal points in contact with a sliver of germanium. In 1948, Shockley invented the more robust junction transistor, first built in 1951. The three shared the 1956 Nobel Prize in Physics for their inventions.

INSERT Fig. 1-48. Early radio inventors and entrepreneurs: Marconi, Fessenden, Armstrong, de Forest, and Sarnoff.


Engineering Education

The U. S. Congress authorized President George Washington to establish a school for engineers at West Point, New York, in 1794, which had a curriculum based on civil engineering. Rensselaer Institute at Troy, New York (now Rensselaer Polytechnic Institute), gave its first degrees in civil engineering in 1835 to a class of four. By 1862, it featured a four-year curriculum with parallel sequences of humanities, mathematics, physical science, and technical subjects. At that time, there were about a dozen other engineering schools, including some at big universities such as Harvard and Yale. The Morrill Land Grant Act of 1862, signed into law by President Lincoln, gave grants of federal land to the states, which helped them establish colleges of "agriculture and the mechanic arts, for the liberal and practical education of the industrial classes." Within ten years, seventy engineering schools had been established. C. F. Scott (Ohio State, 1885) and B. J. Arnold (Nebraska, 1897) were the first graduates of land-grant schools to reach the presidency of the AIEE in 1902-1903 and 1903-1904, respectively.

After the U.S. Civil War, as steam became the prime mover, mechanical engineering diverged from civil engineering, with curricula at Yale, M.I.T., Worcester Polytechnic Institute, and Stevens Institute of Technology. Opinions differed on the relative importance of the practical and theoretical aspects of engineering education. Some schools such as M.I.T., Stevens, and Cornell emphasized mathematics and science, while others, including WPI, Rose Polytechnic, and Georgia Institute of Technology stressed shop work.

Electrical engineering usually began as optional courses in physics departments. The first such course was organized at the Massachusetts Institute of Technology (MIT) in 1882 by Charles R. Cross, head of the Department of Physics and one of AIEE’s founders. The next year, Professor William A. Anthony (AIEE President, 1890-1891) introduced an electrical course in the Department of Physics at Cornell University in Ithaca, New York. By 1890, there were many such courses in physics departments. The University of Missouri started the first department of electrical engineering in 1886. Dugald C. Jackson (AIEE President, 1910-1911), then of the Edison General Electric Company in Chicago, organized a department in 1891 at the University of Wisconsin, winning assurances that it would be on an equal footing with other academic departments, and became that university's first professor of electrical engineering. Jackson wrote in 1934 that "our modes of thought, our units of measurement, even our processes of education sprang from the science of physics (fortified by mathematics) and from physicists."

The AIEE, as an organization of both industry men and experimenters, was keenly interested in how the next generation would be educated. It heard its first papers on education in 1892. Professor R. B. Owens of the University of Nebraska thought that the electrical curriculum should include "a good course in modern geometry, differential and integral calculus… But to attempt to analyze the action of alternating current apparatus without the use of differential equations is no very easy task... and when reading Maxwell, it becomes convenient to have quaternions or spherical harmonics, they can be studied in connection with such readings."

In 1903, the AIEE held a joint summer meeting with the newly formed Society for the Promotion of Engineering Education. The papers given by industrialists suggested that engineering students should learn "engineering fundamentals," but no mention was made of what those fundamentals were. There was disagreement on the value of design courses. One speaker urged that the senior year curriculum should include design courses that covered the materials of electrical engineering as well as study and calculation of dynamo-electric machinery and transformers, with minimum work at the drawing board. Steinmetz, in his AIEE presidential address in 1902, expressed the opposite view: “The considerations on which designs are based in the engineering departments of the manufacturing companies, and especially the very great extent to which judgement enters into the work of the designing engineer, makes the successful teaching of designing impossible to the college.” He urged more analysis of existing designs. Despite Steinmetz's recommendation, the teaching of design courses continued into the 1930s.


Research Laboratories at GE and Bell

The GE Research Laboratory was already working on improvements to the incandescent lamp. Tantalum superseded carbon in lamp filaments in 1906. William D. Coolidge tackled the problems posed by the next step: the use of the metal tungsten. Tungsten was suited for lamp filaments because of its high melting point, but it resisted all efforts to shape it. Coolidge appears to have been more an engineer than a scientist, a trait very useful in solving the problem of making tungsten ductile. His ductile tungsten filaments, introduced in 1910, brought a further dramatic improvement in lamp output and efficiency: because the filaments operated at a higher temperature, they produced more light. At the annual meeting of the AIEE in Boston in 1912, the GE Research Laboratory put its science team on show, with successive papers by Whitney, Coolidge, and Langmuir. The year 1913 brought an engineering first: a joint paper, "Temperature and Electrical Insulation," by Steinmetz of General Electric and B. G. Lamme of Westinghouse, the top engineers of the two competing companies, which dealt with the allowable temperature rise in electrical machines.

The concept of team research that had been put forward by Edison at Menlo Park, and then expanded by him at West Orange, was now well established at Schenectady and supported by many accomplishments. This new management approach was valuable because it showed how the scientific advantages and freedom of the university could be combined with industrial needs and directions.

The engineering problems of the Bell telephone system in the United States were legion, but the most fundamental was the transmission of signals. Bell researcher William H. Doherty wrote in 1975, “The energy generated by a telephone transmitter was infinitesimal compared with what could be generated by pounding a telegraph key…But more than this, the voice spectrum extended into the hundreds, even thousands, of cycles per second. As the early practitioners, known as electricians, struggled to coax telegraph wires to carry the voice over larger distances, they were increasingly frustrated and baffled."

In 1900, George Campbell of Bell and Michael Pupin, professor of mathematical physics at Columbia University, independently developed the theory of the inductively loaded telephone line; Pupin was awarded priority on the patent rights. Oliver Heaviside had first pointed out that, after line resistance, line capacitance most limited long-distance telephone transmission. Acting between the wires, this capacitance shunted the higher voice frequencies, causing distortion as well as attenuation of the signal. Heaviside left his results in mathematical form, but in 1887 there were few electrical engineers who understood mathematics well enough to translate them into practical form; nor was there need to do so at that time.

Pupin drew his understanding of the physical problem from a mathematical analogy to the vibration of a plucked string loaded with spaced weights. He thereby determined both the amount of inductance needed to compensate for the capacitance and the proper spacing for the inductors. When, in 1899, the telephone system needed inductive loading to extend its long-distance lines, Pupin’s patents were purchased by the Bell system.
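Heaviside's analysis gives a concrete target for loading: a line transmits without distortion when its per-unit-length constants satisfy R/L = G/C, so a line short of series inductance can be moved toward that condition by inserting coils at regular intervals. A back-of-the-envelope sketch follows; the line constants and coil spacing are illustrative assumptions, not measured values for any historical line.

```python
def loading_coil_inductance(R, G, C, L_line, spacing_km):
    """Series inductance (henries) each loading coil must add to bring a
    line with per-km constants R (ohm), G (siemens), C (farad), and
    existing inductance L_line (henry) to the Heaviside distortionless
    condition R/L = G/C, with coils placed every spacing_km kilometers."""
    L_target = R * C / G              # per-km inductance satisfying R/L = G/C
    deficit_per_km = L_target - L_line
    return deficit_per_km * spacing_km

# Illustrative per-km constants: R = 10 ohm, G = 1e-5 S, C = 50 nF,
# existing L = 1 mH, with a coil every 2 km.
coil_L = loading_coil_inductance(10, 1e-5, 50e-9, 1e-3, 2.0)
print(coil_L)  # inductance each coil must supply, in henries
```

With these assumed constants each coil must supply roughly 0.1 H, which conveys the scale of the problem: practical loading only approximated the Heaviside condition, since fully satisfying it would have required impractically large coils on many lines.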

Using loading coils properly spaced in the line, the transmission distance reached from New York to Denver by 1911. This was the practical economic limit without a "repeater" or some device for regenerating the weak received signal.

Campbell's objective was always to extend the distance limits on telephone communication, but to do that he found it necessary to explore the mathematical fundamentals. His ability to do so increased the Bell Company's appreciation of in-house research, which had been only sporadically promoted.

Theodore N. Vail, one of the AIEE founders, had served as AT&T president in 1885-1887 but left the company after financial disagreements; he returned to AT&T in 1911. Vail's return signaled more support for basic research. That support was badly needed, as the system was about to start building a transcontinental line intended to begin operation at the opening of the Panama-Pacific Exposition, to be held in San Francisco in 1915.

AT&T undertook a concerted study of the repeater problem. Refinements to de Forest’s Audion made possible a transcontinental line with three spaced repeater amplifiers, which Vail used in July 1914, on schedule. The next January, President Woodrow Wilson and Alexander Graham Bell both spoke over the transcontinental line from Washington to San Francisco for the opening ceremonies of the Panama-Pacific Exposition in the latter city.

The Bell system initially had problems with the switching of customers' lines, solved with the so-called multiple board, which gave an array of operators access to all the lines of the exchange. Men were initially used as operators, since men dominated all industries outside the home, especially technical ones. However, AT&T found that a woman’s voice was reassuring to users, and so women began to dominate this occupation.

When automatic switching equipment was invented in 1889 by Almon B. Strowger, an undertaker from Kansas City, the Bell Company reacted as a monopoly sometimes does—it discounted the innovation. It took thirty years for Bell to drop its opposition. The automatic switching equipment, manufactured by the Automatic Electric Company of Chicago, operated by means of pulses transmitted by the dial of the calling telephone. Rotary relay mechanisms moved a selector to the correct tier of contacts, thus choosing the subscriber line desired by the calling party. The first installation was at LaPorte, Indiana, in 1892, with an improved design installed in 1900 at New Bedford, Massachusetts, for a 10,000-line exchange. This was a completely automated telephone exchange.

Bell's support of basic research in the field of communications was furthered in January 1925, by the organization of the Bell Telephone Laboratories from the Western Electric engineering department in New York, then numbering more than 3,600 employees. Frank B. Jewett became the first president of the Laboratories, which became a source of major contributions to the communications field.

In 1916, Bell engineers used a large array of small triodes in parallel to transmit voice by radio from Washington, D.C., to Paris and Honolulu, Hawai’i. There was a clear need for more powerful vacuum tubes to open long-distance radiotelephone communication. Although Alexanderson's high-frequency alternators had been used in point-to-point radiotelegraph service for many years, the future of radio lay with the vacuum tube.

An unexpected research bonus came from Karl Jansky of Bell Labs in 1928, when he began a study of the static noises plaguing AT&T’s new transatlantic radiotelephone service. He found that most of the noise was caused by local and distant thunderstorms, but that an underlying steady hiss remained. By 1933, he had tracked the source of the hiss to the center of the Milky Way. His results were soon confirmed by Grote Reber, a radio amateur with a backyard antenna, and after the war the new science of radio astronomy developed rapidly, with giant space-scanning dish antennas. This gave vast new dimensions to the astronomer's work, adding radio frequencies to the limited visual spectrum for obtaining knowledge from space.

Edwin Armstrong

In 1912, Edwin Armstrong was an undergraduate student at Columbia University, studying under Pupin. He built an amplifier circuit that fed the output signal back to the input, greatly boosting amplification. Armstrong patented the invention, the regenerative circuit, in 1914. It vastly increased the sensitivity of radio receivers, and was used widely in radio equipment until the 1930s.

At about the same time, Fessenden, Alexander Meissner in Germany, and H. J. Round in England all built circuits that produced similar results, as did de Forest himself a year or so later. De Forest started a patent action that was later taken over by AT&T. The ensuing long legal battle exhausted Armstrong's finances, and in 1934 the Supreme Court decided against him. The Board of Directors of the Institute of Radio Engineers (IRE) took notice and publicly reaffirmed its 1918 award to Armstrong of its Medal of Honor for his "achievements in relation to regeneration and the generation of oscillations by vacuum tubes." Because of the patent litigation, many companies had used the regenerative circuit without paying Armstrong royalties.

Meanwhile, Armstrong made a further discovery about this circuit: just when maximum amplification was obtained, the signal changed suddenly to a hissing or a whistling. He realized this meant that the circuit was generating its own oscillations. The triode, he realized, could thus be used as a frequency generator—an oscillator.

Armstrong’s second invention would not have been possible without his first. Fessenden had, in 1901, introduced the heterodyne principle: if two tones of frequencies A and B are combined, one may hear a tone with frequency A minus B. Armstrong used this principle in devising what came to be called the superheterodyne receiver. This device converted the high-frequency received signal to one of intermediate frequency by combining it with a signal from an oscillator in the receiver, then amplifying that intermediate-frequency signal before subjecting it to the detection and amplification usual in receivers. The circuit greatly improved the ability to tune radio channels, simplified that tuning for consumers, and lowered the cost of electronic components to process signals. Armstrong developed the circuit while attached to the U.S. Signal Corps in Paris in 1918. RCA marketed the superheterodyne beginning in 1924, and soon licensed the invention to other manufacturers. It became, and remains today in software-defined radio receivers, the standard type of radio receiver.
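The heterodyne arithmetic behind the superheterodyne can be verified numerically: multiplying a received carrier by a local-oscillator tone produces components at the sum and difference frequencies, and the receiver keeps the difference as its intermediate frequency. A minimal sketch follows; the frequencies are chosen for illustration (scaled down by a factor of 1,000 from the kilohertz values of a real receiver), not taken from Armstrong's designs.

```python
import math

def mix(f_rf, f_lo, n=4000, fs=20000.0):
    """Multiply an RF tone by a local-oscillator tone, then measure the
    amplitude at the difference (intermediate) frequency by correlating
    the product with a tone at that frequency."""
    f_if = abs(f_rf - f_lo)  # intermediate frequency = |f_rf - f_lo|
    corr = 0.0
    for k in range(n):
        t = k / fs
        product = math.cos(2 * math.pi * f_rf * t) * math.cos(2 * math.pi * f_lo * t)
        corr += product * math.cos(2 * math.pi * f_if * t)
    # cos(a)cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b), so correlating with the
    # difference tone recovers an amplitude of about 0.5.
    return 2 * corr / n

# A 1,500 Hz "carrier" mixed with a 1,045 Hz local oscillator yields a
# 455 Hz difference tone (455 kHz was a classic IF in real receivers).
amp_at_if = mix(f_rf=1500.0, f_lo=1045.0)
print(amp_at_if)
```

Because the intermediate frequency is fixed, the bulk of the receiver's amplification and filtering can be optimized for that one frequency, which is what made the superheterodyne both cheaper and easier to tune than its predecessors.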

Fessenden had demonstrated radio’s use as a broadcast medium all the way back in 1906, and now transmission and receiver technology meant that it could be more than a novelty or hobby. Despite the threat of interception, radio was used extensively in World War I for point-to-point communication. After the war, as inventors and entrepreneurs again began to explore the commercial application of broadcasting, a debate arose over whether radio should be a government-owned monopoly, as it was in most countries, or reopened to private ownership and development; U.S. military officials argued for government ownership. The IRE joined this argument on the side of private development, reminding Congress and the public that the radio field still faced important technical challenges; government interference, its leaders argued in testimony before Congress, would impede technical creativity.

A crisis occurred in 1919 when the British-owned American Marconi Company proposed to buy the rights to the Alexanderson alternator from the General Electric Company, intending to maintain its monopoly position in the radio field. In Washington at that time, a monopoly might have been considered allowable, but certainly not one held by foreign interests. Seeking quick action, President Wilson turned to the General Electric Company to maintain American control of the radio industry. General Electric purchased the American Marconi Company for about $3 million and organized a new entity, the Radio Corporation of America. A few months later, AT&T joined RCA by a purchase of stock, giving the new corporation control of the de Forest triode patents as well as those of Langmuir and Arnold.

Beginning around 1919, entrepreneurs in the United States and elsewhere began to experiment with what was to become commercial radio. Westinghouse Radio Station KDKA was a world pioneer of commercial radio broadcasting. Transmitting with a power of 100 watts on a wavelength of 360 meters, KDKA began scheduled programming with the Harding-Cox presidential election returns on 2 November 1920. The broadcast began at 6 p.m. and ended at noon the next day. From a wooden shed atop the K building of the Westinghouse Company's East Pittsburgh works, five men informed and entertained an unseen audience for eighteen hours. Conceived by Westinghouse's H. P. Davis, broadcasting as a public service evolved from Frank Conrad’s weekly experimental broadcasts over his amateur radio station 8XK, which had attracted many regular listeners with wireless receiving sets. In 1994, IEEE recognized KDKA's pioneering broadcast with an IEEE Milestone.[8]

Fig. 1-51. Photo, Westinghouse Radio Station, KDKA, 1920.

INSERT Fig. 1-52. Photo, IEEE Milestone dedication on 1 June 1994.

By 1922, there were more than 500 American broadcast stations jammed into a narrow frequency band, and a search was on for a method to narrow the frequency band used by each station. In the usual technique, known as amplitude modulation (AM), the amplitude of the carrier wave is regulated by the amplitude of the audio signal. With frequency modulation (FM), the audio signal instead alters the frequency of the carrier, shifting it down or up to mirror the changes in amplitude of the audio wave. In 1922, John R. Carson of the Bell engineering group wrote an IRE paper that treated modulation mathematically. He showed that FM could not reduce the station bandwidth to less than twice the frequency range of the audio signal. Because FM could not be used to narrow the transmitted band, it was dismissed as useless.

In the late 1920s, Armstrong turned his attention to what seemed to him and many other radio engineers to be the greatest problem, the elimination of static. "This is a terrific problem,” he wrote. “It is the only one I ever encountered that, approached from any direction, always seems to be a stone wall." Armstrong eventually found a solution in frequency modulation. He soon found it necessary to use a much broader bandwidth than AM stations used (today an FM radio channel occupies 200 kHz, twenty times the bandwidth of an AM channel), but doing so gave not only relative freedom from static but also much higher sound fidelity than AM radio offered.
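Both Carson's 1922 conclusion and Armstrong's later trade-off can be read off the approximation now known as Carson's rule: an FM signal occupies roughly twice the sum of the peak frequency deviation and the highest audio frequency. A short sketch, using the figures standard for U.S. broadcast FM (75 kHz peak deviation, 15 kHz audio):

```python
def carson_bandwidth(peak_deviation_hz, max_audio_hz):
    """Carson's rule: approximate occupied bandwidth of an FM signal."""
    return 2 * (peak_deviation_hz + max_audio_hz)

# Carson's point: even with negligible deviation, FM needs at least
# twice the audio range -- no narrower than AM for the same audio.
narrow = carson_bandwidth(0, 5_000)       # 10 kHz, the same as a 5-kHz AM signal
# Armstrong's point: deliberately wide deviation buys noise immunity
# at the cost of spectrum.
wide = carson_bandwidth(75_000, 15_000)   # 180 kHz, fitting a 200-kHz FM channel
print(narrow, wide)
```

The 180 kHz result is consistent with the 200-kHz channel width mentioned above: the channel leaves a guard margin around the signal's occupied bandwidth.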

With the four patents for his FM techniques that he obtained in 1933, Armstrong set about gaining the support of RCA for his new system. RCA engineers were impressed, but the sales and legal departments saw FM as a threat to RCA's corporate position. David Sarnoff, the head of RCA, had already decided to promote television vigorously and believed the company did not have the resources to develop a new radio medium at the same time. Moreover, in the economically distressed 1930s, better sound quality was regarded as a luxury, so there was not thought to be a large market for products offering it.

Armstrong gained some support from General Electric and Zenith, but he carried out the development and field-testing of a practical broadcasting system largely on his own. He gradually gained the interest of engineers, broadcasters, and radio listeners, and, in 1939, FM broadcasts were coming from twenty or so experimental stations. These stations could not, according to FCC rules, sell advertising or derive income in any other way from broadcasting. Finally, in 1940, the FCC decided to authorize commercial FM broadcasting, allocating the region of the spectrum from 42 MHz to 50 MHz to forty FM channels. In October 1940, it granted permits for fifteen stations. Zenith and other manufacturers marketed FM receivers, and by the end of 1941, nearly 400,000 sets had been sold.

Armstrong received many honors, including the Edison Medal from the American Institute of Electrical Engineers and the Medal of Honor from the Institute of Radio Engineers. Thus, he received the highest awards from both of IEEE’s predecessor organizations.

On 4 March 1921, the first attempt to broadcast a presidential inaugural address by radio from the Capitol, that of Warren G. Harding, was unsuccessful. On 4 March 1925, the first national radio broadcast of an inauguration occurred when President Calvin Coolidge took the oath of office on the East Front of the Capitol. More than 22,000,000 households reportedly tuned in for the broadcast.

On 22 February 1927, Congress passed H.R. 9971, the Radio Act of 1927, providing for government control of the airwaves, licensing, and standards. The act also created the Federal Radio Commission (the precursor to the Federal Communications Commission). President Calvin Coolidge signed the Radio Act on 23 February 1927. A year later, Congress amended the Radio Act with the Davis Amendment, S. 2317, which sought to distribute radio service more equitably, extending it to rural communities.

The Growth of RCA

In 1920, Westinghouse bought Armstrong’s regenerative-circuit and superheterodyne patents for $335,000. Having entered the radio field late, it now had a bargaining position, and it entered the RCA group in 1921. Essentially a patent-holding corporation, RCA freed the industry of internecine patent fights. It also concentrated tremendous power in one corporation. Entrepreneurs, however, could enter the radio-manufacturing business with only a small investment and infringe major patents, gambling that technical advances would dilute a patent’s value well before legal action could be taken.

Among the three major stockholders of RCA in 1921, the licensing agreement seemed to parcel out the opportunities neatly. AT&T was assigned the manufacture of transmitting and telephone equipment. General Electric and Westinghouse, based on RCA holdings of Armstrong's patents, divided receiver manufacturing by sixty and forty percent, respectively. RCA, which was forbidden to have manufacturing facilities, operated the transatlantic radio service but otherwise was a sales organization.

RCA had another major asset in David Sarnoff, its commercial manager. Brought to the United States from Russia at the age of nine, Sarnoff started as an office boy with Marconi, learned radio operation in night school, and served as a radio operator on a ship. He had advocated broadcasting entertainment and information since 1915 to promote sales of home radios, but his superiors continued to focus on point-to-point communications. At RCA, in July 1921 he helped promote a boxing match as the first broadcast sports event. Up to 300,000 people listened, helping drive sales of home radios, and RCA became a much larger company than anyone else foresaw. Sarnoff was promoted to general manager of RCA in 1921.

Radio had a large cultural impact. It exposed listeners to a wide range of events to which they would not otherwise have had access: sporting events, political conventions, speeches, and entertainment. The National Broadcasting Company, established in 1926 by AT&T, General Electric, Westinghouse, and RCA, initially reached up to fifty-four percent of the American population.

Sarnoff became president of RCA in 1930, the year the RCA agreement was revised and the ten-year prohibition on manufacturing ended. RCA took over manufacturing plants in Camden and Harrison, New Jersey. AT&T retained rights to the use of tubes in its communication services and sold its radio station, WEAF, to RCA.

During the Great Depression, Sarnoff allocated $10 million to develop an electronic television system, based largely on the work of Vladimir Zworykin, who invented a practical electronic camera tube. After World War II, Sarnoff championed electronic color television in an eight-year battle with rival company CBS, which advocated an electromechanical system. His support of innovation at the RCA Laboratories, renamed the David Sarnoff Research Center in Princeton, New Jersey, led to the establishment of a U.S. color TV standard in 1953.

INSERT Fig. 1-53. Photo, RCA Lab/David Sarnoff Research Center.

Tools of War

Technology has always played a role in wartime, often a decisive one. World War I has often been called the first technoscientific war, but despite the fact that it followed on the heels of early advances in internal combustion engines, aeronautics, electric power, and radio, it was largely fought in nineteenth-century style. Edison, however, realized that technological advances would change the way wars are fought. Even before the United States entered the war, he advocated that the country take pains to preserve its technological advantages. He favored stockpiling munitions and equipment and building up soldier reserves. The government came around to the notion that technology would be the key to winning future wars, and that concept was borne out in World War II.

Interestingly, although governments would finally move into electrical engineering research and development in anticipation of the next war, engineering education in the years from 1918 to 1935 was almost static. Professors continued primarily to hold bachelor's degrees, and most had some years of practical experience, which was viewed as more important than theoretical or mathematical proficiency. Few teachers held master's degrees, and fewer had doctorates. Many teachers assumed that a student would need no knowledge beyond what the teacher had used in his own career, and there was little thought of preparing students for change in the field. For instance, complex algebra, which Charles P. Steinmetz advocated for circuit analysis before the AIEE in 1893, had not been fully adopted on campus by 1925.

Frederick E. Terman (IRE President, 1941) reported that electrical research productivity from 1920 to 1925 averaged nine technical papers per year from university authors published in the AIEE Transactions, but three-quarters of those papers came from only five institutions. In the Proceedings of the IRE, more than half of the thirty papers from educational sources published in those five years were submitted by authors in physics departments.

The better students, disenchanted with boring work on the drafting table and in the shop and with heavy class schedules, transferred to engineering physics, which included more science and mathematics in preparation for work in the industrial research field that began to flourish around 1925. During this period, electrical engineering educators failed to teach advances in physics, such as Max Planck's radiation law, Bohr’s quantum theory, wave mechanics, and Einstein’s work. Even radio, well advanced by 1930, was overlooked by most educators.

The complexities of radio and electronics attracted some students to a fifth year of study for a master’s degree. The large manufacturers began to recognize the value of a master's degree in 1929, increasing the starting salary for engineers from $100 per month to $150, with ten percent added for a master's degree. The value of graduate work and on-campus research programs came to be demonstrated only just before World War II, when new electronics-oriented corporations, financed with local venture capital, began to appear around M.I.T. and west of Boston. These were owned or managed by doctoral graduates of M.I.T. or Harvard who had developed ideas in thesis work and were hoping for a market for their products. Another very significant electronics nursery, later known as Silicon Valley, appeared just south of Palo Alto, California, resulting in large part from Frederick Terman's encouragement of electronics research at Stanford University. The Hewlett-Packard Company began with a $500 fellowship to David Packard in 1937. He and William Hewlett developed the resistance-capacity (RC) oscillator as their first product, and manufacture started in a garage and in Flora Hewlett's kitchen. Many other companies in the instrumentation and computer fields have similar backgrounds and histories. Even with these advances, electrical engineers were ill equipped to serve as researchers in the laboratories exploring the new fields opened by the technologies of war, technologies that would be a decisive factor in World War II.

One important observation in the years leading up to World War II was that radio waves are reflected by objects whose electrical characteristics differ from those of their surroundings, and that the reflected waves can be detected. Indeed, in 1887, when Heinrich Hertz proved that radio waves exist, he also observed that they could be reflected.

A secret development to use radio waves for detection and ranging began in 1922, when A. Hoyt Taylor and L. C. Young of the U.S. Naval Research Laboratory reported that the transmission of 5-m radio waves was affected by the movements of naval vessels on the nearby Anacostia River. They also suggested that the movements of ships could be detected by such variations in the signals. (The word "radar" was invented in 1941 by the U.S. Navy as an abbreviation of "radio detection and ranging.") Returning to this work in 1930, Taylor succeeded in designing equipment to detect moving aircraft with reflected 5-m waves.
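The "ranging" half of radio detection and ranging reduces to timing: a pulse that returns after a delay has traveled out to the target and back, so the range is the propagation speed times the delay, divided by two. A minimal sketch (the one-millisecond delay is an illustrative value, not a figure from the text):

```python
C = 299_792_458.0  # speed of light (and of radio waves), m/s

def echo_range(delay_s: float) -> float:
    """Range to a target, given the round-trip delay of its radar echo."""
    return C * delay_s / 2  # halve, because the pulse travels out and back

# A 1 ms round trip corresponds to a target roughly 150 km away.
print(round(echo_range(1e-3) / 1000))  # 150
```

The same out-and-back timing principle underlies the active sonar and loran systems described later in this section, with the speed of sound in water substituted for the speed of light in the sonar case.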

In 1932, the Secretary of the Navy communicated Taylor's findings to the Secretary of War to allow application to the Army's antiaircraft weapons; the secret work continued at the Signal Corps Laboratories. In July of 1934, Director of the Signal Corps Laboratories Major William R. Blair proposed "a scheme of projecting an interrupted sequence of trains of oscillations against the target and attempting to detect the echoes during the interstices between the projections." This was the first proposal in the United States for a pulsed-wave radar.

By 1936, the Signal Corps had begun work on the first Army antiaircraft radar, the SCR 268, and a prototype was demonstrated to the Secretary of War in May 1937. It operated at a wavelength of 3 m, later reduced to 1.5 m. Navy designs for shipboard use operated at 50 cm and later 20 cm, the shorter wavelengths improving the accuracy of the direction-finding operation.

Earlier, Sir Robert Watson-Watt of the National Physical Laboratory in England had independently worked along the same lines, and he successfully tracked aircraft in 1935. The urgency felt in Britain was such that the British military also started work on "early warning" radar using 12-m pulsed waves. The first of these "Chain Home" stations was put in place along the Thames estuary in 1937, and when Germany occupied the Sudetenland in October 1938, these stations and others were put on continuous watch and remained so for the duration of the ensuing war. In 1940, British ground-based radars were used to control 700 British fighter aircraft defending against 2,000 invading German planes. This radar control of fighter planes has been credited with a decisive role in the German defeat in the Battle of Britain.

INSERT Fig. 1-54 and 1-55. Photos, Military Technology - Sonar and Radar.

Antenna dimensions must be proportional to the wavelength (for a simple dipole, ideally one-half wavelength). The wavelengths of the early radars required antennas too large for mounting in aircraft, yet the need was apparent. If a fighter pilot could detect an enemy aircraft before it came into view, he could engage the enemy with advantages of timing and maneuver not otherwise obtainable. On bombing missions, airborne radar could assist in navigating to and locating targets.
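The half-wavelength rule makes the drive toward microwaves concrete. As a rough sketch, with frequencies chosen as illustrative stand-ins for the 3-m and 10-cm radars discussed in the text:

```python
C = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_m(freq_hz: float) -> float:
    """Ideal half-wavelength antenna element for a given frequency."""
    return C / freq_hz / 2  # wavelength = c / f; element is half of that

# A 3-m radar (about 100 MHz) needs a ~1.5 m element, workable on the
# ground but awkward inside a fighter; a 10-cm (3 GHz) microwave radar
# needs an element of only ~5 cm.
print(round(half_wave_dipole_m(100e6), 2))      # 1.5 (meters)
print(round(half_wave_dipole_m(3e9) * 100, 1))  # 5.0 (centimeters)
```

This thirty-fold shrinkage is why the cavity magnetron, which made high power available at centimeter wavelengths, opened the way to airborne radar.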

The prime need was for a high-power transmitting tube, capable of pulse outputs of 100,000 watts or more at wavelengths of 10 cm or less. Such powers were needed because the weak signal reflected from a distant target back to the radar receiver was otherwise undetectable.

An answer to the problem came with the advent of a new form of vacuum tube, the cavity magnetron. In it, dense clouds of electrons swirl in a magnetic field past cavities cut in a copper ring. The electrical effect is much like the acoustic effect of blowing across the mouth of a bottle: electrical oscillations build up across the cavities and produce powerful waves of very short wavelength. The cavity magnetron was invented at the University of Birmingham, and the British General Electric Company produced a manufacturable design in July 1940.

The scientific advisers to Prime Minister Winston Churchill faced a problem: active development of microwave radar would create shortages of men and material then being devoted to extending the long-range radar defenses. They suggested to Churchill that the Americans, not yet at war, be shown the new magnetron and asked to use it to develop microwave radar systems. Vannevar Bush set up the National Defense Research Committee, which in August 1940 met with British representatives to see a sample device.

The American response was enthusiastic. In September, the U.S. military set up a new center for microwave radar research and development, the M.I.T. Radiation Laboratory. Lee DuBridge, then head of the Department of Physics at the University of Rochester, was appointed its director. Concurrently, the British magnetron was copied at the Bell Laboratories and judged ready for production.

At that time, the technology of microwaves was in its infancy. Little had been translated from basic equations to hardware and few engineers were engaged in the field. Those who were available were asked to join the M.I.T. "Rad Lab," but the main source of talent was the community of research-oriented academic physicists. Among those recruited to the Laboratory were three men who later were awarded Nobel Prizes in physics: I. I. Rabi, E. M. Purcell, and L. W. Alvarez.

By 1942, a cavity magnetron was capable of producing a peak pulse power of two million watts. The model SCR 584 10-cm ground radar, designed for tracking enemy aircraft and leading friendly aircraft home, was an early success. Smaller and lighter radars operating at 3 cm and 1 cm were quickly adopted by bomber missions for location of ground targets. Ground-based radars for early warning of impending attack were developed, too.

Because radar echoes are very weak, they are easily obscured by strong signals sent out on the same wavelength, a countermeasure known as jamming. Research was directed toward determining the vulnerability to jamming of various types of radio signals and designing very high-power transmitters for exploiting the weaknesses of enemy transmissions. The NDRC also worked on protecting one's own signals from enemy countermeasures, known as counter-countermeasures.

Among the countermeasures adopted were rapid shifts in the wavelengths used for radar and for field communications. Another was a passive shield known as "chaff." This consisted of great quantities of very thin metal strips of lengths suited to the wavelength of the enemy signals. These strips were thrown overboard by aircraft to form a screen and give false echoes for enemy radar.

The development of sonar began during World War I as a defense against the German submarine menace. Not much was then known about the acoustical properties of seawater. During World War II, the U.S. Navy Underwater Sound Laboratory, operated by Columbia University in New London, Connecticut, was established to develop this technology. The word "sonar" is derived from "sound navigation and ranging." Sonar systems use underwater sound waves to detect the presence of ships, submarines, and other objects of interest. Submarines rely principally on passive sonar equipment that listens and tracks the noise created by engines and water disturbance of target vessels. Destroyers seeking submarines use active sonar, sending out pulses of acoustic energy and timing the echoes; submarines on attack also use active sonar. In both passive and active use, the devices that receive the echoes must be designed to indicate accurately the direction of the sound source.

In time of war, radar and sonar suffer the disadvantage of transmitting strong bursts of energy that disclose their source to the enemy; modern bombs and torpedoes use such signals for guidance to their targets. A system of navigation that required no radar or sonar emissions from the aircraft or ship was needed during World War II, and the answer came with loran.

Robert J. Dippy of the British Telecommunications Research Establishment conceived of measuring the time delay between two radio signals to determine distance. His GEE system was built for short-range blind landings by Royal Air Force planes, but it proved to have a longer range than expected. Dippy's scheme involved precisely timed radio pulses transmitted from two accurately located stations, a "master" and a "slave." By an adjustment on the receiver, the navigator superimposed the received pulses from the two stations on a cathode-ray screen. The difference in the time at which the two pulses arrived at the receiver was measured; using the known speed of radio waves, the time difference was transformed into a difference in distance. The navigator calculates his position as the apex of a triangle whose base is the line connecting the two stations; from the loran receiver, the difference in length of the other two sides is determined. The locus of the apexes of all triangles having the measured difference is a hyperbola, plotted on a loran map. Repeating the measurement using pulses from the original master and a differently located slave station placed the navigator on a second hyperbola. The apex of the correct triangle, and thus the position of the ship or aircraft, lies at the intersection of the two hyperbolas.
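The geometry of Dippy's scheme can be sketched in a few lines. The station coordinates, ship position, and tolerance below are hypothetical; a real loran receiver read the time difference off a screen and the navigator looked the hyperbola up on a precomputed chart.

```python
import math

C = 299_792_458.0  # speed of radio waves, m/s

def distance_difference_m(time_diff_s: float) -> float:
    """Convert a measured master/slave pulse time difference into a
    difference in distance from the two stations."""
    return C * time_diff_s

def on_hyperbola(pos, master, slave, time_diff_s, tol_m=1.0):
    """True if a position lies (within tol_m) on the hyperbola of
    constant distance difference defined by one station pair."""
    diff = math.dist(pos, master) - math.dist(pos, slave)
    return abs(diff - distance_difference_m(time_diff_s)) < tol_m

# Hypothetical station pair and ship position (coordinates in meters).
master, slave = (0.0, 0.0), (100_000.0, 0.0)
ship = (30_000.0, 40_000.0)

# The time difference such a ship would actually measure:
td = (math.dist(ship, master) - math.dist(ship, slave)) / C
print(on_hyperbola(ship, master, slave, td))  # True
```

A second master/slave pair yields a second hyperbola, and the ship or aircraft sits at the intersection of the two, exactly as the text describes.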

The U.S. military developed a lower-frequency version, called LORAN (short for long-range navigation), with a range of 1,500 miles. The first loran transmitter was installed at a U.S. Coast Guard station in Ocean City, Maryland, in November 1941. Its 100-kW pulses began to ring the bells of the ship-to-shore telephone service on Great Lakes ore carriers, and after Pearl Harbor the loran service was moved to the then-vacant 160-m amateur band. This was the loran A service, which was not closed down until 1981. The postwar service, loran C, uses wavelengths of 3,000 m that can span greater distances. The power of the loran C pulses is about five million watts, radiated from towers 1,300 ft. high. To secure positions within a few hundred feet at distances up to 1,600 km (1,000 mi), the receiver matches not just the pulses but the individual oscillations of the radio waves within them.

The military importance of wireless radio communications had been appreciated by all the powers during World War I. Between the wars, electrical engineers made much progress, mostly based on civilian developments. After World War II, these developments were to have a profound impact on the world, on the profession of electrical engineering, on standardization, on engineering education, and, especially, on the AIEE and the IRE.


Silicon’s Coming of Age

It would be difficult to imagine an Allied victory in World War II without radar, sonar, aircraft communications, the proximity fuse, and electronic countermeasures, all based on the electron tube. Despite their ability to oscillate, modulate, demodulate, and amplify at any frequency from zero (direct current) to many millions of cycles per second, vacuum tubes had shortcomings. They were bulky, used too much power heating the filament or cathode, and wasted power heating the plate. They required voltages in the hundreds or thousands. They could not be relied upon to operate long enough to match the virtually indefinite life of the other components in electronic systems. It was clear that the vacuum tube would eventually have to be replaced.

One electronic device of the war years that was a key to the future of the electronics industry was the solid-state diode rectifier, which allows current to flow in one direction only. It was the first widely used semiconductor device. Semiconductors operate in a realm somewhere between conductors such as copper, whose electrons move freely, and insulators such as glass, whose electrons are tightly bound to their atoms.

INSERT: Fig. 2-1 – 2-3, photos of World War II technology using vacuum (electron) tubes, Radar, Sonar, and Proximity Fuse

At very low temperatures, pure semiconductors do not conduct electricity, as all of the electrons are bound to the atoms of the crystal. As the temperature rises, the vibration of the atoms produced by their heat energy causes some of the electrons to shake free. These electrons can then move when a voltage is applied, appearing as a current of negative charge. But the vacant electron sites (holes) are, in effect, positive charges, and these vacancies can also move, in much the same way that the gap between dominoes lined up in a row "moves" when they fall one after the other. This movement of vacancies constitutes a current of positive charges, opposite in direction to the electron movement.

At the M.I.T. Radiation Laboratory, which by early 1941 was at work on microwave radar and other war-related projects, some of the best physicists focused their attention on the semiconductor diode as the detector of weak radar signals returning from their targets. The solid-state diode was preferred over its vacuum-tube cousin, the diode tube, because it had less capacitance and was thus a more efficient detector of high-frequency radar signals.

At that time, physicist Karl Lark-Horovitz of Purdue University found that extremely pure germanium and silicon were semiconductors with properties predictable from their impurity content. His team produced their first germanium diodes in 1942. Their uniformly high performance and stability under difficult conditions revealed the value of near-perfect crystal lattices, more refined than any natural substance. Achieving the required purity and regularity of structure demanded great improvements in metallurgical techniques.

Impurities could be used to change the characteristics of semiconductors. When a tiny amount of phosphorus is added to germanium, the phosphorus atoms add an extra electron, which can move freely from atom to atom within the crystal lattice. This type of germanium, man-made with an extra supply of negative electrons, is called "n-type" germanium. The electrons, in copious supply, are called the "majority carriers" of the electric current. Also present are the “holes” created when electrons are broken free by thermal energy. These holes are much less numerous than the impurity electrons and are known as "minority carriers."

If the impurity added to the germanium is boron, the roles of the electrons and holes are reversed. Boron introduced into the germanium lattice has a deficiency of electrons; that is, it naturally creates holes that are free to move in the lattice. These positive holes are the majority carriers in "p-type" germanium. The thermally generated electrons, in smaller numbers, are the minority carriers in the p-type material. It is the minority carriers that are controlled in transistors.

To introduce the desired impurity, the phosphorus or boron is added to melted germanium. A small seed crystal of germanium is then dipped into the melt and rotated; as it is slowly withdrawn, the freezing germanium fits itself to the arrangement of atoms in the seed crystal. Later, much more sophisticated techniques were adopted, including alloying, diffusion, and ion implantation.

Fig. 2-4. Photo, Shockley, Bardeen, and Brattain

Single crystals of n-type and p-type germanium were available to John Bardeen and Walter Brattain, who were working at Bell Labs in 1947 under the direction of William Shockley. Using n-material, they found that if a pointed wire was pushed against the crystal, with a battery making the wire positive and the crystal negative, the positive point would attract the majority carrier electrons, producing a strong current. If the battery terminals were reversed, the negatively charged wire would repel the electrons and attract the much smaller number of positive holes.

Bardeen and Brattain decided that control of the minority current might be possible by placing a second pointed wire very close to the first on the crystal surface. To their delight, they found, on 23 December 1947, that a current could be drawn from the n-germanium by the second wire, and that this current was an amplified copy of any changes in the current in the first wire. The transistor had been invented. Amplification had occurred in a semiconducting solid-state device.

Shockley conceived a second type of transistor in 1948, the junction transistor, made of p- and n-type semiconductors. The junction transistor exploited the tendency of majority electrons in the n region to drift or "diffuse" to the p side, where holes immobilize them. Similarly, some majority holes in the p side diffuse to the n side, where they immobilize electrons. This "recombination" causes a shortage of mobile holes and mobile electrons on either side of the junction, producing a depletion layer, empty of mobile charges, that serves as a charge barrier.

When a voltage applied across the junction draws electrons away from the p side of the junction, the depletion layer is thickened—current is turned off. When the polarity is reversed, the depletion layer narrows, and current flows. The n-p-n (or p-n-p) junction transistor has a second closely associated junction with the intermediate p (or n) region made very thin. These three regions are named the emitter, the base, and the collector. In the so-called common-base connection of an n-p-n transistor, the input current is injected into the n-type emitter. With the battery negative to the emitter and positive to the collector, the injected electrons move readily across the first junction and into the inner p base. On entering the base, the electrons become minority carriers. This minority electron current is controlled by the transistor. A few of the electrons are trapped by the holes in the p base, but since the base is very thin, most of the electrons survive and drift to the second p-n junction.

The positive battery terminal is connected to the n collector. Holes in the inner base thus face a repelling force at the second (base-collector) junction, but the junction layer is transparent to the electrons moving out of the base to the positive collector. On passing into the collector, these electrons are attracted to the positive battery terminal, which collects them.

The amplification occurs at the second junction, between base and collector. In passing this junction, the electrons have their energy increased, this energy coming from the collector-base potential. Electrons are injected into the emitter at a low potential, possibly a few tenths of a volt; they are extracted from the collector by a higher potential of several volts. The power level has been increased.

To achieve transistor action and gain, several details have to be properly adjusted. The tiny fraction of impurities in the n and p materials has to be precisely maintained in raw material production. Otherwise, losses of electrons and holes by recombination will be too high. The base must be very thin so that the time for charge drift across the base is very short. Otherwise, the transistor cannot respond to rapid changes in input current.

The transistor far surpassed the vacuum tube. It was small, and eventually could be made so small in integrated circuits that it could not be seen by the unaided eye. Transistors have no heated filament and operate instantly upon the application of their voltages, usually low in value. Transistors are efficient, making them ideal for portable, battery-powered equipment. Most important, transistors have a long, essentially indefinite, life. The transfer of electrons or holes is not a destructive process, as is the thermal emission of electrons in vacuum tubes. The invention of the transistor was the answer to the telephone companies' prayer for something better than the tube, and it led to generations of new electronic systems.

Shockley's junction design, conceived in 1948, was put into practice in 1951. The point-contact transistor had opened the door to solid-state amplification, but it was difficult to manufacture and limited in performance, and it was soon supplanted by transistors based on the junction principle. The Shockley team at Bell Labs had also developed the concept of a third device, the field-effect transistor, but for years it resisted all attempts to reduce it to practice. In 1954, thanks to improved materials and processes, the field-effect transistor was finally realized, and it became a central feature of solid-state electronics. In 1956, Shockley, Bardeen, and Brattain were awarded the Nobel Prize for the invention of the transistor. The award covered both the point-contact variety, invented by Bardeen and Brattain in 1947, and the junction variety, invented by Shockley in 1948.

Fig. 2-5. Photo of first commercial silicon transistors, 1954

In 1954, the first commercial silicon transistors appeared from Texas Instruments. Silicon, the second most plentiful element in the earth's crust and a constituent of beach sand and rock, is inferior to germanium in the speed of transit of minority carriers, but it offers compensating advantages. Silicon dioxide, easily produced and worked, is an excellent and stable insulator for integrated circuits, and silicon transistors operate at higher temperatures than germanium ones. Silicon became the material of choice for nearly all solid-state circuit devices, the only exception being a newer compound semiconductor, gallium arsenide, used in devices operating at hundreds of billions of cycles per second.


Television’s Popularity

INSERT: Fig. 2-27, 2-28, and 2-29. Photos of early televisions

About the time of the AIEE-IRE merger, television was becoming a true mass medium with powerful effects on politics and culture. Its wide use was a triumph of engineering and standardization, in which IEEE members played key roles. John Logie Baird and Herbert Ives had demonstrated color television using mechanical scanning in 1928 and 1929. In the electronic era, in the 1940s, IEEE member Peter C. Goldmark, an engineer at the Columbia Broadcasting System, worked on color television and helped shape the modern television industry. He developed a field-sequential color signal, in which the picture was scanned three times, through red, green, and blue filters. At the receiver, the signal was reproduced on a black-and-white picture tube viewed through a synchronized rotating disk having three filter segments.

INSERT: Fig. 2.30. Photo. John Logie Baird and Herbert Ives demonstrating color television in 1928 and 1929
Fig. 2-31. Photo, Peter C. Goldmark and his television work

If the monochrome rate of thirty pictures per second was maintained, then ninety pictures would have to be scanned each second for color. Goldmark compromised by reducing the color-sequence scanning rate to sixty per second and the scanning lines from 525 to 343 per frame. He demonstrated such a system to the press in September 1940, but it was incompatible with the monochrome (black-and-white) receivers already in the hands of the public, which by 1948 numbered two million and were selling at a rate of 400,000 per month. Engineers had to develop a color system that used the same scanning rates and channel as the monochrome service. They did it in part by reducing the detail in color transmissions to more closely match the eye's lower acuity for color, saving channel bandwidth without sacrificing perceived image quality.

How were the two color-difference signals to be transmitted? The basic idea came from RCA: a single carrier wave can carry two sets of information independently if the carrier is modulated in "phase quadrature" so that the two modulations do not interact. RCA proposed to carry the two color-difference signals on an additional subcarrier frequency, sent within the existing 6-MHz monochrome channel. Frank Gray of Bell Laboratories had obtained a patent in 1930 for a method of using gaps in the frequency spectrum to transmit color-difference signals. His contribution to the National Television System Committee (NTSC) color scheme was recognized in 1953, when he received the IRE's Zworykin Prize. The NTSC chose 3.579545 MHz for the color subcarrier.
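In modern notation, the phase-quadrature principle can be sketched as follows: two signals $I(t)$ and $Q(t)$, here standing for the two color-difference signals, modulate the same subcarrier frequency $f_{sc}$ on components 90 degrees apart:

$$
s(t) = I(t)\cos(2\pi f_{sc} t) + Q(t)\sin(2\pi f_{sc} t).
$$

Because the product $\cos(2\pi f_{sc} t)\,\sin(2\pi f_{sc} t)$ averages to zero over a subcarrier cycle, a synchronous detector that multiplies $s(t)$ by $\cos(2\pi f_{sc} t)$ and low-pass filters recovers $I(t)/2$ with no contribution from $Q(t)$, while a detector using $\sin(2\pi f_{sc} t)$ recovers $Q(t)/2$. The two modulations therefore do not interact.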

INSERT: Fig. 2-32. Frank Gray receiving the IRE Zworykin Prize, 1953

By the middle of 1953, after forty-two months of steady work financed largely by industry, the second NTSC reached agreement, on 21 July, on compatible color standards to recommend to the Federal Communications Commission of the United States, which approved them in December 1953. The standards were set, but the market was not yet ripe: the price of color receivers was high and there was little programming. Over the next decade or so, however, RCA, under Sarnoff, spent more than $100 million on research, development, manufacturing, programming, and broadcasting. By 1964, the color television industry began its spectacular growth.

After World War II, the French introduced an advanced monochrome system with scanning at 819 lines, 25 pictures per second, on 14-MHz channels. In 1949, representatives of other European countries began meetings under the leadership of a Swiss engineer, Walter E. Gerber, to decide on postwar monochrome standards. They proposed standards with 625 lines scanned at 25 pictures per second to fit in a 7-MHz channel. England and France came to 625-line scanning later. In the early 1960s, the Phase Alternate Line (PAL) system for color originated in Germany, borrowing much from NTSC technology: a color subcarrier synchronized with the scanning rate, quadrature color modulation, and constant-luminance transmission for the color-difference signals.

The French, determined to use their own designs, developed a system known as Sequential Color and Memory (SECAM) on an 8-MHz channel. It used the same scanning rates as the PAL method, but was otherwise so different that receivers designed for both PAL and SECAM were complicated. In SECAM, the color-difference signals were transmitted sequentially, one during one line scan, the other during the next line scan. At the receiver, these color-difference signals were made to coincide in time by storing one signal until the other arrived. This process, like that used in the PAL system, made the receiver intrinsically more expensive than NTSC receivers. Both PAL and SECAM produced excellent color reception, but PAL became the favorite among the national broadcasting services. The close ties of the Japanese electronics industry with the U.S. market led to the adoption of the NTSC system for the Japanese public.

It has been the rule in the standardization activities of the AIEE, IRE, and IEEE to define methods of measurement, but to avoid entering the jungle of conflicting industrial claims for performance of equipment and systems. During the work of the two NTSCs, the IRE provided rooms for meetings and identification of members for the committee and its panels, but nothing more. From the beginning, the RMA (now the Electronic Industries Association), a trade organization, was the responsible body in television standardization in the United States.

By 1983, there were nearly six hundred million television sets in use throughout the world, one for every eight of the 4.8 billion men, women, and children on earth. This spread of technology was bound to have a major impact on the two relevant engineering associations.

Early Computers

Charles Babbage, a prolific inventor of mechanical calculating machines, is credited with designing the first automatic calculating machines. In 1823, he obtained support from the British government to assist in development of a Difference Engine capable of calculating logarithms and trigonometric functions to thirty decimal places. He later conceived of a general-purpose machine that could be programmed to do any calculation, with instructions fed by punched cards. He called it the Analytical Engine.

Plans for this engine embodied in mechanical form all the elements of the electronic stored-program computer. Its central element, which he called a "mill," later called the arithmetic logic unit (ALU), was intended to add, subtract, carry and borrow, and shift numbers for multiplication and division. It had a memory unit and a control unit for linking the mill and the memory. Numbers were to be entered into and extracted from the mill by punched cards, invented in 1802 by Joseph Marie Jacquard to control the weaving of patterns in a loom.

In 1842, Babbage approached the government again for funding, but the government refused to fund work on a new machine. Although he did not succeed in building his inventions, Babbage’s vision of computing had a formative effect on the field. He knew that the instructions he planned to use to direct his machine's operations, being in numerical form, could be manipulated or "computed." Properly controlled, this process would yield numbers that were new instructions. His inspiration came when he realized that such manipulations could take place within the machine. In modern terms, he knew that a computer can change its program according to its findings at particular stages in its computations.

Fig. 2.34, Charles Babbage
Fig. 2-35, Ada Lovelace
Fig. 2-36. Jacquard Loom

Babbage collaborated with the mathematician Ada Lovelace (Augusta Ada King, Countess of Lovelace), born in 1815, daughter of the poet Lord Byron. Lovelace appended a lengthy set of notes to her translation of a description of Babbage's Analytical Engine that articulated many bedrock principles of computer engineering and foreshadowed the computers of the twentieth century. She wrote about the possibility of general-purpose computers that could modify their own programming, and explained that such machines could apply the principles of logical calculation not merely to numbers but to symbols as well. Lovelace also articulated the notions of branching algorithms and subroutines that years later would become programming staples.

Development of mechanical computers continued through World War II, until the electromechanical era of computation gave way to electronic computers. The Germans used an electromechanical cipher machine called Enigma to produce what they believed was an unbreakable code. After the Polish Cipher Bureau smuggled a reconstruction of the Enigma machine to Britain, the British developed electromechanical machines, the bombes, to defeat Enigma.

J. Presper Eckert, Jr. and John W. Mauchly built the first American general-purpose electronic computer at the Moore School of Electrical Engineering of the University of Pennsylvania during the years from 1943 to 1945. They called their computer ENIAC, for Electronic Numerical Integrator and Computer. For speed, they used some 18,000 vacuum tubes operating in the decimal system, and the program was established by a tangle of wires between plug boards. ENIAC seldom operated for half an hour before one of its thousands of tubes failed in the midst of a computation, but it was still faster than any machine previously built.

INSERT: Fig. 2.37, 2-38, 2-39. Photo, early computers, Moore School; Iowa State’s project

The team at the Moore School was fortunate to have the services of John von Neumann. Born in Hungary in 1903, he was a prodigy in many fields, contributing to the theory of implosion developed at Los Alamos, which was crucial to the success of the atomic bomb. While at the Moore School, he developed the relationship between arithmetic operations, storage and retrieval of data, and the concept of the stored program. His basic arrangement of the stored-program computer comprised the arithmetic logic unit, the memory, the control unit, and input/output devices. The design, known as a "von Neumann" computer, was displaced by newer concepts of computer architecture in the 1980s.

It is impossible to imagine the digital age without the contributions of Claude Shannon. In 1937, in his Master's thesis from M.I.T., Shannon showed the value of two-variable (binary) algebra, using the symbols 0 and 1, in the design of electrical switching circuits. Gottfried Leibniz had explored such an algebra in 1703, but he had been concerned mostly with number theory. In 1854, the Englishman George Boole published An Investigation of the Laws of Thought, outlining what came to be known as Boolean algebra. In the information era, Boolean algebra became the foundation for symbolic logic and for the subsequent art of computer design.

Shannon used Boolean two-variable algebra because he saw the analogy with the operation of an electrical switch as a two-state device, off and on, and he pointed out that these states could be associated with the 0/1 (true/false) symbolism of Boole. With Shannon's application of logic to switching theory, it was possible to design switching circuits from logic statements, circuits carrying out the basic logical operations of AND, OR, NOR, and NAND (negative AND). It was also possible to reduce apparatus complexity in logic circuits, and thereby to design for minimum cost. Logic circuits are the foundation on which computers are built.
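Shannon's correspondence between switches and Boolean variables can be sketched in a few lines of modern code. This is an illustrative sketch, not period notation; the composed XOR circuit at the end is one example of building a circuit from a logic statement:

```python
# A switch is a two-state device: 0 = open (false), 1 = closed (true).
AND  = lambda a, b: a & b          # switches in series: current flows only if both close
OR   = lambda a, b: a | b          # switches in parallel: current flows if either closes
NAND = lambda a, b: 1 - (a & b)    # "negative AND"
NOR  = lambda a, b: 1 - (a | b)

# Any logic statement becomes a circuit. For example, exclusive-or
# composed from the basic operations above:
XOR = lambda a, b: AND(OR(a, b), NAND(a, b))
```

Minimizing the number of such operations needed for a given logic statement is exactly the cost reduction in apparatus that Shannon's method made possible.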

Shannon later developed theories of communication that provided the foundation for error correction. Ralph Hartley, in his early communication theory, had assumed negligible noise in the frequency channel. In 1948, Shannon, who held a doctorate and was then working at Bell Laboratories, published two seminal papers. In the first, he expanded Hartley's law to obtain a numerical measure of the uncertainty with which a message arrives at its destination in the presence of noise. Perfect reception, meaning complete information accuracy, corresponds to zero uncertainty, but noise always introduces a certain amount of uncertainty into the received signal. Shannon's law showed that for a given level of accuracy there is an inverse tradeoff between bandwidth and signal-to-noise ratio: a small signal-to-noise ratio is acceptable if bandwidth is increased, and vice versa, though added bandwidth is costly. Shannon placed information studies on a rigorous statistical basis, which is why he is considered by many to be the father of information theory.
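In modern notation, the law Shannon derived relates the maximum error-free information rate $C$ (in bits per second) of a channel to its bandwidth $B$ (in hertz) and signal-to-noise power ratio $S/N$:

$$
C = B \log_2\!\left(1 + \frac{S}{N}\right).
$$

Holding $C$ fixed, a smaller $S/N$ can be compensated by a larger $B$ and vice versa, which is the bandwidth-versus-noise tradeoff described above.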

In his second article, Shannon showed that the form of a signal influences the accuracy of reception. Continuous (analog) signals use all manner of waveforms that must be differentiated from noise. Transmission with pulses (digital transmission) uses rigidly prescribed waveforms, and the receiver must decide only whether a pulse is present or absent. Binary-code signals are therefore receivable with greater accuracy than analog signals.

INSERT: Fig. 2.40. Photo, space communication tech

Space communication is based on Shannon's laws. Huge dish antennas capture weak signals over large areas to enhance the signal-to-noise ratio and thus improve the accuracy of the received information. In 1981, the NASA probe Voyager 2 arrived in the vicinity of Saturn, some nine hundred million miles from Earth. Its cameras returned magnificent still pictures of Saturn and its rings, made possible by an extremely wide bandwidth and the use of one redundant bit for every bit carrying useful information. Transmission errors were thereby reduced to fewer than one bit in every 10,000 bits received.
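The scheme of one redundant bit per information bit is a rate-one-half code, and its flavor can be conveyed with a minimal convolutional encoder. This is a hedged sketch with illustrative generator taps, not Voyager's actual coding scheme; it shows only how each input bit yields two output bits whose patterned redundancy lets a decoder correct most channel errors:

```python
def encode_rate_half(bits, g1=0b111, g2=0b101):
    """Constraint-length-3 convolutional encoder (illustrative taps g1, g2).

    Each input bit is shifted into a 3-bit register; two parity bits,
    one per generator, are emitted, so output length = 2 * input length.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)  # parity over g1 taps
        out.append(bin(state & g2).count("1") % 2)  # parity over g2 taps
    return out

coded = encode_rate_half([1, 0, 1, 1])  # 4 information bits -> 8 channel bits
```

Because each channel bit depends on the last three information bits, a single corrupted bit contradicts its neighbors, which is what a decoder exploits to correct errors.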

As a result of these theoretical advances, Shannon, long an IEEE member and Fellow, was awarded the IEEE Medal of Honor. However, the challenge was in finding a hardware technology that could match these theoretical limits.

INSERT: Fig. 2.41. Photo, Shannon and IEEE Medal of Honor

The Integrated Circuit

During the 1950s, the assembly of electronic equipment was a time-consuming process, slowed by the complexity of circuits; a typical computer, for example, used 100,000 diodes and 25,000 transistors. A way was needed to meet these complex needs with simplified components and to compress thousands of devices into a small space with simple and reliable connections to the outside world.

In February 1959, Jack St. Clair Kilby of Texas Instruments applied for a patent describing two circuits in which junction transistors, resistors, and capacitors were formed within a chip of germanium, the parts interconnected by gold wires external to the chip. In July, Robert Noyce of Fairchild Semiconductor applied for a patent in which he described planar elements on a chip of silicon interconnected by deposited aluminum strips. Noyce's patent contained all the methods used in the production of integrated circuits. Kilby and others made major contributions, but it was Noyce who brought it all together in a workable system.

In 1968, Noyce left Fairchild and, with Gordon Moore, founded the Intel Corporation. Shortly thereafter, M. E. Hoff addressed a Japanese customer's desire for customized calculator integrated circuits. The work resulted in what he described as "a general-purpose computer programmed to be a calculator," delivered as a four-chip set built around the 4004 microprocessor. It was followed by the 8008, an 8-bit microprocessor, the first in what became a great variety of microcomputers used where the problem arises rather than bringing the problem to a central computer. In 1973, a patent was issued to G. W. Boone of Texas Instruments for a "Computing System CPU" (central processing unit) that placed on one chip all of the elements of a computer except its main memory. By 1982, 32-bit microprocessors were in production, with 64-bit microprocessors to follow.

In 1978, the IEEE awarded Noyce the Medal of Honor. Two others similarly honored were John Bardeen (1971) and William Shockley (1980). Thanks to these pioneers, everything was in place to merge the fields of communication and computing.

INSERT: Fig. 2-41. Photo: Noyce receiving the Medal of Honor, 1978.

INSERT: Fig. 2-42. Photo. John Bardeen receiving the Medal of Honor, 1971.

INSERT: Fig. 2-43. Photo, William Shockley receiving the Medal of Honor, 1980.

INSERT: Fig. 2-44. Photo, Jack Kilby receiving the Medal of Honor, 1986.

INSERT: Fig. 2-45. Photo, Gordon Moore, receiving the Medal of Honor, 2000.

The Network of Networks

As computers took hold as tools of scientific research, engineers began to think of how to use these scarce resources more efficiently. The effort to solve this problem set wheels in motion that eventually led to the Internet. For the most part, the IEEE Computer Society and its cousin, the Association for Computing Machinery, played midwife to this innovation, providing forums in their publications and conferences for engineers to hash out their ideas.

In the 1950s, computers were big and expensive and programmed with punched cards or tape. As more researchers requested time on the machines, programs were queued and run in "batches" to maximize the number of users one computer could accommodate. An operator generally controlled which programs were run.

Joseph C. R. Licklider was an early student of what he called "man-computer symbiosis," the notion that the ultimate role of computers was to augment human thinking. Licklider considered batch processing, which kept the programmer at arm's length, a less than satisfactory way of interacting with computers. In 1962, he joined the U.S. Defense Department's new research arm, the Advanced Research Projects Agency (ARPA), where he had great influence on the development of time-sharing technologies. Licklider encouraged his colleagues at Bolt, Beranek, and Newman (BBN), his former employer, and elsewhere to develop time-sharing as a way to give users direct access to what was still an expensive resource. By allowing multiple users to work directly with the computer, generally on terminals with keyboards and cathode-ray tubes, time-sharing took advantage of the fact that any single programmer tended to occupy the central processing unit (CPU) only in short bursts of activity. This meant that by juggling many programs at once a CPU could keep itself occupied. In the 1960s, mainframe computers at companies and universities were commonly tethered to such terminals over serial communication lines.

Although time-sharing allowed many people from one organization to share computing resources, it required physical proximity. In 1966, ARPA began to work on a way of allowing remote users to share mainframe resources. ARPA awarded a contract to BBN to develop a packet-switched network protocol. Under the leadership of Robert Kahn, BBN designed a "subnet" of Interface Message Processors (IMPs) that would operate invisibly to the users of the network and handle all the networking tasks. To begin with, there were to be four nodes—at UCLA, Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah in Salt Lake City. The network was laid out like a triangle in California with a branch out to Utah—a departure from a true decentralized network for pragmatic purposes. Each node was built from a Honeywell 516 minicomputer.

Fig. 2.46, Robert Kahn

Fig. 2-47. Map with the four nodes

By March 1970, ARPANET, as the network was called, was up and running with seven nodes, including, significantly, a node at BBN in Boston, the first transcontinental link. Engineers at BBN programmed its node to monitor other network nodes and to take control of them remotely for purposes of maintenance and testing—the first remote maintenance and diagnostics software. By 1971, most of the problems had been worked out of ARPANET. Kahn organized a public demonstration at the International Conference on Computer Communication in Washington, D.C. in October 1972. He and his colleagues got manufacturers to supply terminals and worked to get all the machines up and running, which included wiring the hotel, in time for the day of the conference. The task of writing testing scenarios for the demonstration fell to Robert Metcalfe, a young Ph.D. candidate from Harvard. He came up with the idea of displaying images at the Washington, D.C. hotel that actually resided on a PDP-11 hundreds of miles away on the MIT campus—pictures of a plane landing on an aircraft carrier.


ARPANET showed that packet-switching in a decentralized network was a viable technology. The next step, many engineers thought, was to develop the technology for wireless transmission over radio and satellites, which would open network possibilities to mobile and remote devices. The world in the late 1970s was ready to be networked. The first public wireless packet-switched network was built under the direction of Norman Abramson of the University of Hawaii. The goal was to connect users on the Hawaiian archipelago with computers in the central university campus. ALOHAnet, as it was called, came into operation in June 1971. The IEEE Board of Directors commemorated this technological achievement of ALOHA as an IEEE Milestone in 2020.

Fig. 2-48. Photos Norman Abramson and ALOHAnet

ALOHAnet attracted the attention of Metcalfe, a brilliant and rebellious graduate student at Harvard. Shortly after starting work at the Xerox Palo Alto Research Center (PARC), Metcalfe built an ARPANET interface for a PDP-10. In late 1972, ARPA asked him to be an ARPANET facilitator, which involved traveling around the country and describing the ARPANET to potential users. In the 1970 conference proceedings of the American Federation of Information Processing Societies, he stumbled on an article by Abramson presenting a queuing-theory model of how ALOHAnet would perform under increasing traffic loads. Metcalfe based his design of Ethernet on the ALOHAnet scheme, with some important changes. For instance, ALOHAnet's designers assumed the network would have an infinite number of users, and that people would keep typing even if they received no acknowledgment from the mainframe. Metcalfe later developed a card that would plug into the then-new personal computer and link it to a network of personal computers and to ARPANET. Metcalfe named this network the Ethernet. "The Ethernet is essentially a queuing theoretic invention with some hardware and software derived from that," Metcalfe recalled. Unlike ALOHAnet, which used radio transmission, Ethernet sent signals over coaxial cables, which provided much faster connection speeds. Whereas ALOHAnet ran at 4800 or 9600 bits per second, Ethernet could easily run at 2.94 megabits per second.
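The flavor of the queuing-theory analysis Metcalfe built on can be sketched with the classic pure-ALOHA throughput formula. This is an illustrative textbook model, not ALOHAnet's exact parameters: a frame offered at normalized load G succeeds only if no other frame begins within its two-frame-time vulnerable window, giving throughput S = G·e^(-2G), which peaks at roughly 18.4 percent of channel capacity:

```python
import math

def pure_aloha_throughput(G):
    """Fraction of channel time carrying successful frames at offered load G.

    With Poisson frame arrivals, the probability that no other frame starts
    in the two-frame-time vulnerable window is exp(-2G), so S = G * exp(-2G).
    """
    return G * math.exp(-2 * G)

# Throughput rises with load, peaks at G = 0.5 (S = 1/(2e), about 0.184),
# then collapses as collisions dominate.
peak = pure_aloha_throughput(0.5)
```

This collapse under heavy load is the behavior Metcalfe's Ethernet design, with carrier sensing and backoff, was intended to tame.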

In 1980, Robert M. Metcalfe began working on IEEE project 802 to persuade manufacturers to adopt the Ethernet as an open industry standard. The effort ran into trouble early on. “It became a huge standards bureaucracy of unbelievable impact,” said Metcalfe. “It was three years of horrible, ugly infighting. It could not be avoided.”[9]

The first problem was that companies did not want to be seen by the U.S. Justice Department as conspiring to violate antitrust rules. The Department of Justice at the time was engaged in a lawsuit against IBM for antitrust violations. Although DEC, Intel, and Xerox agreed in principle to adopt Ethernet as a joint standard, they were reluctant to meet to iron out the details.

Eventually, the companies were persuaded that if they satisfied certain conditions, they would not have to fear action by the federal government. Those conditions included barring marketing staff from the talks, to avoid any temptation to set prices or divide territory; opening the meetings; including a representative of the federal government to be sure nothing untoward happened; and sticking to the goal of developing an open industry standard.

DEC, Intel, and Xerox formed a consortium to develop an Ethernet standard and wrote specifications, which they submitted to the IEEE 802 Committee, formed in December 1980. But IBM and General Motors, which joined the effort late, objected to the specifications the group had drafted. After a year of infighting, the IEEE committee struck a compromise: it split into three subcommittees to draft standards based on different networking schemes. 802.3 worked on Ethernet, 802.4 on a token-bus local area network, and 802.5 on the IBM token ring. Of the three standards, Ethernet 802.3 was the one that survived over the long haul.

Metcalfe had left Xerox in 1979 to start 3Com, which would commercialize the Ethernet technology. Metcalfe collaborated with Seeq Technology to design Ethernet chips, which 3Com put into circuit boards. IBM announced its personal computer in August of 1981. In September 1982, 3Com shipped the EtherLink—an Ethernet card for a personal computer—and the product proved popular. “It is an exhilarating time when people want what you've got. It is really fun,” recalled Metcalfe.

3Com went public in March of 1984. Six years later, Metcalfe retired. In 1988 the IEEE awarded him its Alexander Graham Bell Medal and, in 1996, the Medal of Honor for his leadership in developing Ethernet.

By choosing the word "ether," Metcalfe later said, he opened up the possibility that Ethernet could be operated not only over coaxial cable but across other media, including twisted-pair telephone lines, optical fibers, and eventually Wi-Fi. In fact, the wireless IEEE 802.11 standard caught on in later years and is still widely used.

Internet 2.0

As politics roiled IEEE, the technologies developed by its members continued to advance and profoundly shape society and IEEE. ARPANET successfully connected disparate computers. But now Robert Kahn found himself looking at all the networks springing up around him and realized that none of them could pass information to any of the others. “Here I am sitting with the notion of multiple networks, and I’m trying to think, how would we actually do anything interesting with them if we don’t connect them? If I have a radio net, what am I going to do with the radio net? I can maybe plug a terminal into a little interfacing computer, but I’ve got to get to some big machine to do anything interesting.” [10]

One problem in the beginning was "how to take the functionality that needed to be in the computers and actually get them into all these different machines." The problem was similar to what Vint Cerf, Steve Crocker, and others had worked on in developing the original host protocol for the ARPANET. But connecting two computers was one thing; connecting two networks required another layer of complexity. How would you direct where a packet should go?

The use of radio and satellites complicated things further. Whereas ARPANET was a reliable network, how would an inter-network deal with transmissions disrupted when a mobile computer passed behind a mountain or through a tunnel or if the signal was jammed? How would you correct errors in propagation?

Kahn knew they needed a better protocol that involved an error detection and retransmission or correction scheme and a more robust addressing mechanism. “I had a pretty good idea of what we needed to do, but I didn’t know how to take that and actually figure out how to get it in the first machine,” he said. “That’s why I asked Vint [Cerf] to work with me on this whole mission.”

In the spring of 1973, Kahn came out to Stanford University and sought out Vint Cerf, who had recently finished his Ph.D. at UCLA and joined Stanford as a new faculty member. The two engineers knew each other from Kahn’s trip to UCLA to test the ARPANET, and they had clicked. “Vint is the kind of guy who likes to roll up his sleeves and let’s get with it. I thought that was a breath of fresh air,” said Kahn.

Kahn briefed Cerf on the problem of connecting disparate networks. In September, they issued a paper to the International Network Working Group that presented the basic idea behind what is now called TCP, the Transmission Control Protocol. "Then we went back home and holed up in one of the hotels in the Palo Alto area," recalls Cerf. "We essentially wrote the paper, which was published in May of 1974 in IEEE Communications as the seminal paper on the design of the Internet and its TCP protocols." Kahn recalls that they sketched out the design on paper. Cerf had his secretary type it up, then threw away the original manuscript; the handwritten artifact was lost to posterity. He also remembers that "the synergy between the two of us caused it to be far better than either one of us could ever have done alone."

DARPA gave contracts to Stanford, University College London, and BBN to do initial implementations and testing. They started to roll out the network experimentally in late 1975. By 1977, they were running tests on three nodes. Eventually they broke Internet Protocol (IP) out separately. The Internet remained in an experimental form until Kahn and Cerf decided to cut the ARPANET over to the use of TCP/IP in 1983.

The Internet started out with just the three long-haul networks: the ARPANET, the packet radio net, and the satellite net. Personal computers had been invented but were not in widespread use, and in any event they were not powerful enough to run the TCP/IP protocols.

The big surprise to Kahn and Cerf was the rapid rise of local-area networks. Even before ARPANET made the actual cut-over to TCP/IP, people were plugging local area networks into the system experimentally. Sun Microsystems came out with a workstation that had TCP/IP bundled into the package. "Instead of working with what we thought might be five or ten big nets that needed to be connected," said Kahn, "we suddenly had the prospect of hundreds of thousands, and eventually millions. Of course the architecture of the Internet was just sufficiently general purpose that it didn't matter whether it was a LAN or anything else, you could just sort of plug it in and keep going." The Internet, Kahn said, "was sort of a lot of LANs connected to the Arpanet with linkages to satellite nets."

INSERT: Fig. 2-49. Photo, Vinton Cerf and Robert Kahn receiving the IEEE Alexander Graham Bell Medal, 1997

The proliferation of LANs created an addressing problem. The original fifteen-bit addresses had been adequate for a network of sixty-four or one hundred and twenty nodes, but that was no longer enough. They expanded the addressing to 32 bits, with the first 8 bits specifying the network and the remaining 24 bits specifying the machine on that network. "We thought surely this is good enough, and very soon we realized it wasn't," says Kahn.
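The 8/24 split just described can be sketched directly with bit operations; the function name and sample address below are illustrative only:

```python
def split_address(addr32):
    """Split an early-Internet-style 32-bit address into (network, host).

    The top 8 bits name the network (up to 256 networks); the low 24 bits
    name the machine on that network (up to 16,777,216 hosts each).
    """
    network = (addr32 >> 24) & 0xFF
    host = addr32 & 0xFFFFFF
    return network, host

# Example: 0x0A000001 is machine 1 on network 10.
net, host = split_address(0x0A000001)
```

A fixed 256-network ceiling makes plain why, with LANs multiplying into the thousands, this layout was soon outgrown (later generalized by address classes and eventually CIDR).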

Cerf continued to work on day-to-day networking while Kahn focused on other projects at ARPA. Cerf set up a group of experts, which he called the Internet Configuration Control Board (ICCB), to keep the technical community informed of DARPA's progress. They would in turn help people who wanted to get connected to the Internet, which in those days was no small task: it called for modifying a computer's operating system and linking to all its applications. The ICCB consisted of twelve people with hard-core experience in building things. Cerf and Kahn met with them regularly.

When Cerf left in late 1982 for MCI to engineer its network, Barry Leiner took over the task. The meetings were getting unwieldy—although the Board was only twelve people, a few hundred others were invited to listen to the deliberations. Leiner and Dave Clark, who was then chairing the ICCB, set up the Internet Activities Board (the IAB), with ten task forces underneath it in areas such as routing, end-to-end protocols, privacy and security; the number of task forces eventually grew to more than fifty.

Transformational Technical Developments in the 1980s

IEEE celebrated its centennial against a backdrop of technical innovations that greatly changed the world. Advances in information storage and transmission, in particular, were changing almost every aspect of business and leisure activities. Very Large Scale Integration (VLSI) was one of those transformational technologies. VLSI greatly increased the number of circuits that could be included in a single microprocessor, as well as the number and flexibility of the operations it could perform. Although it became a major force in the 1980s, VLSI was a technology that took decades to come to fruition. An important moment in its history came in 1965, when a young engineer at IBM was asked to work on the problem of designing a central processing unit that could execute more than one instruction per clock cycle. An IBM competitor, Control Data Corp., had worked out a way of breaking a problem into parts and executing many of them simultaneously, while keeping track of which parts of the problem were dependent on the outcome of other parts. IBM's Advanced Computing System project was part of an effort to improve on this technology, and that young engineer, Lynn Conway, invented a method of issuing multiple instructions out of order in each machine cycle.

In December 2000, Conway told Paul Wallich of Scientific American, “It required a lot of transistors, but it was very fast because all the checking could be done in parallel.” Conway convinced her IBM teammates by building a software simulation of the circuit, but circuit integration technology of that time was not advanced enough to manufacture the hardware cheaply and reliably. The CPU would have required more than 6000 integrated circuits connected by wires. IBM scuttled the project in 1968, and the work remained secret for many years.

In addition, Conway had been ostracized at IBM because of her gender identity. Assigned male at birth, she had identified as female from a very early age. She joined IBM as Robert Conway, but later began the transitioning process. “When she finally underwent surgery to become a woman,” Wallich wrote, “IBM fired her, and local child-welfare authorities barred her from contact with her family,” including her two children. In 1973, Conway landed at Xerox’s Palo Alto Research Center (PARC), a then-new research lab with a more progressive culture than IBM’s. Her supervisor, Bert Sutherland, introduced her to Carver Mead, a semiconductor researcher at the California Institute of Technology who had been working on reducing transistor sizes. Mead and Conway began a prolific partnership. They codified the myriad circuit-design rules that had proliferated in the industry into a single methodology for integrated circuits. The work became Introduction to VLSI Systems, the classic textbook for VLSI engineers, published in 1979.

Conway played a central role in the rapid adoption of VLSI, which spawned an industry of startups. In 1985, she joined the faculty of the University of Michigan at Ann Arbor and became an IEEE Fellow. In 1989, she was elected to the National Academy of Engineering—the highest professional recognition an engineer can receive—and received the IEEE Computer Society's Computer Pioneer Award.

INSERT: Fig. 3.33. Photo Conway receiving the IEEE Comp Soc’s Comp Pioneer Award

VLSI techniques were used more widely throughout the decade. Designs that had seemed difficult to manufacture reliably only a few years before suddenly seemed easy as more and more devices could be integrated onto one circuit. Dense integrated circuits rapidly transformed many industries at once, leading to a startling acceleration of capabilities on a broad front. Digital signal processing was one of the main beneficiaries. In the early 1980s, long-distance switches were digital while local switches were analog, and speech signals that traveled digitally on the local lines were carried in analog form over the long-distance trunks. David G. Messerschmitt, an engineer at Bell Labs and an IEEE Fellow, had advocated developing technology that would enable AT&T Long Lines, the long-distance carrier, to move to an all-digital network.

Messerschmitt favored changing to all-digital transmission, but Bell Labs executives, perhaps gun-shy after the failure of PicturePhone, the company’s video phone service, were reluctant to make the required investment in research and development. In 2003, he told John Vardalas, historian at the IEEE History Center, “In a monopoly like the Bell System the risk is asymmetrical…When expending resources to do something new there is a chance of failure, whereas if it is not done there is no chance that someone else will do it and therefore no danger of looking bad.”[11] He left Bell Labs in 1977 for the University of California at Berkeley, where he found a strong group of engineers who were looking at the new VLSI technologies and trying to apply them to systems problems. Over the next few years, Messerschmitt developed techniques that eventually had a major impact on the telephone industry. Switched-capacitor filters and charge-redistribution codecs made it more economical to process signals digitally within the network. These technologies found a home at Bell Northern Research in Canada, which had decided to build an all-digital network. IEEE awarded Messerschmitt, a member of the National Academy of Engineering, the IEEE Alexander Graham Bell Medal in 1999 for his contributions to communications theory and practice, including VLSI for signal processing and simulation and modeling software.

INSERT: Fig. 3.34. Photo Messerschmitt receiving the Alexander Graham Bell Medal, 1999

Consumer electronics was another beneficiary of VLSI technology. Atari, Nintendo, Sony, and many other firms were bringing out video games based on powerful new microprocessors and inexpensive memory. Research in high-definition television and large-screen displays would soon transform the familiar TV into a high-tech product. Portable audio systems proliferated. Optical storage platforms such as CDs and laserdiscs created new markets overnight. IEEE members played important roles in these technologies. In 1983, the IEEE Consumer Electronics Society (CESoc) was formed, but it traces its roots to the IRE Professional Group on Broadcast and Television Receivers (BTRC)/Consumer Electronics Group, which held its first meeting in November 1949. It became the IEEE Professional Technical Group on Broadcast and Television Receivers in 1963.

One of the biggest consumer electronics success stories of the 1980s was the compact disc player. Made possible by the solid-state laser that could read pits, or holes, in a piece of plastic, the CD became the dominant recording medium for high-quality audio for more than twenty years, displacing vinyl records and cassette tapes. CDs also served as a medium of computer storage. A single CD could store nearly seventy-five minutes of sound (enough for a single, uninterrupted recording of Beethoven’s Ninth Symphony). Each second of audio took up about 1.5 million bits, so a full disc held roughly a thousand times more data than a typical floppy disk of the era, which stored about 720 kilobytes. In 1993, IEEE awarded its Masaru Ibuka Consumer Electronics Award to Shigeru Nakajima and Kees Immink for their work on compact disc technology at Sony Corporation and Philips, respectively. Immink, an IEEE Fellow, was also awarded the Edison Medal in 1999.
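Those figures can be sanity-checked from the CD audio format’s standard parameters (44,100 sixteen-bit samples per second, in stereo); the 720-kilobyte floppy capacity is the one cited above:

```python
# Back-of-envelope check of CD audio capacity versus a floppy disk.
sample_rate = 44_100        # samples per second per channel (CD standard)
channels = 2
bits_per_sample = 16

bits_per_second = sample_rate * channels * bits_per_sample
print(bits_per_second)      # 1411200 -- the "1.5 million bits" per second cited

seconds = 74 * 60           # ~74 minutes of music
cd_bytes = bits_per_second * seconds // 8
floppy_bytes = 720 * 1024   # the 720 KB floppy cited in the text
print(cd_bytes // floppy_bytes)   # ~1000-fold capacity advantage
```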

INSERT: Fig. 3.35. Photo, Nakajima and Immink receiving consumer electronics award, 1993

The formation of the IEEE Power Electronics Society in 1983 reflected the transformative effect that semiconductor technology was having on that field as well. Semiconductor switches developed in the late 1970s and early 1980s brought new efficiencies to the field. One of the biggest advances was the insulated-gate bipolar transistor (IGBT), a switch that found wide use not only in power systems but also in robots, lights, cars, trains, televisions, and renewable energy. The switch was invented in 1980 by B. Jayant Baliga, an engineer at GE. His managers had challenged him and his colleagues to develop electronics that could vary the speed of induction motors, which until then were left to the vicissitudes of the frequency of the power source. This lack of control meant that motors in many appliances operated less efficiently than they otherwise might have. Baliga got the idea for a transistor that could serve as a medium-voltage switch from working with thyristors—semiconductor switches used for very high voltages. Thyristors were awkward devices: easy to switch on but hard to switch off, they tended to “latch up,” continuing to conduct current even when no longer wanted. Baliga thought he could combine the qualities of MOSFETs and bipolar transistors to make a better switch that could turn on and off more cleanly. The trouble was, MOSFETs worked only at low voltages, and bipolar transistors required complex drive circuitry. In 1988, Baliga joined the faculty of North Carolina State University to continue his research in power semiconductor technology and to create the Power Semiconductor Research Center. He is an IEEE Fellow and received the 2014 IEEE Medal of Honor.

INSERT: Fig. 3.36. Photo, Baliga receiving the 2014 IEEE Medal of Honor

star wars

Star Wars and the Reagan Defense Buildup

President Ronald Reagan's Strategic Defense Initiative was a controversial program among IEEE members. On the one hand, the program employed thousands of engineers working on some of the most advanced technologies ever devised for tracking missiles. The program began on the heels of a severe recession in the early 1980s that had cost electrical engineers dearly in jobs; the Strategic Defense Initiative (SDI), and the Reagan defense buildup in general, came as a much-needed boost to employment prospects. Nevertheless, some members had qualms about the ambitious program, not only on ethical but also on technical grounds. At the IEEE Electronics and Aerospace Systems Conference (EASCON) in October 1985, Carl Barus, a professor of engineering at Swarthmore College who chaired a committee of the IEEE Society on Social Implications of Technology, proposed a session “in which the merits of the whole SDI concept would be debated on technical, strategic, legal, and ethical grounds.” The proposal was turned down on the grounds that it was off the topic of the conference, which was to report on technical progress. Barus argued that the refusal violated section 6.8 of the IEEE policy and procedures manual, which governed the presentation of sociotechnical material in meetings and publications. Section 6.8 states in part that “every reasonable effort should be made to provide for adequate and timely presentations of different viewpoints.”

INSERT: Fig. 3.47. Photo, EASCON, Oct. 1985

The “Star Wars” program was technically ambitious. Research alone on the technology would cost $30 billion, and building a working system would take hundreds of billions. Tracking and shooting down missiles was notoriously difficult. Charles Seitz, professor of computer science at Caltech, argued that by organizing a control system that was modular and hierarchical, the project could be successful. Engineers were considering using a “hierarchical tree-structured organization of the defense system,” reported Paul Wallich, “in which clusters of weapons and sensors [would] be controlled by one or more computing nodes, which in turn would relay information upward to battle management nodes responsible for broader strategic issues.” Because the various components would not coordinate with one another directly, targeting would not require input from distant systems, which would reduce the amount of communication that would have to occur.
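The hierarchical scheme Wallich described can be caricatured in a few lines of code. This toy sketch is a hypothetical illustration, not any actual SDI design; it shows the property that reduced communication: each cluster acts on its own local tracks, and only compact summaries, never raw sensor data, flow upward to the battle-management node.

```python
# Toy illustration (hypothetical, not an SDI design) of a hierarchical
# tree-structured control system: leaves handle targeting locally and
# pass only summaries upward.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.local_tracks = []   # raw sensor data stays at this node

    def report(self):
        """Return a compact summary for the parent: counts, not raw tracks."""
        total = len(self.local_tracks)
        for child in self.children:
            total += child.report()["tracks"]
        return {"node": self.name, "tracks": total}

cluster1, cluster2 = Node("cluster-1"), Node("cluster-2")
cluster1.local_tracks = ["t1", "t2"]   # handled locally, never sent upward
cluster2.local_tracks = ["t3"]
battle_mgmt = Node("battle-mgmt", [cluster1, cluster2])
print(battle_mgmt.report())   # {'node': 'battle-mgmt', 'tracks': 3}
```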

Joseph Wise, professor of computer science at MIT, insisted that there were too many unknowns to be able to say for sure whether such a system could work. David Parnas was even more skeptical about the ability of engineers to make nuclear weapons “impotent and obsolete,” the goal Reagan had set. “People are wasting time on a pipe dream,” he said. The key problem was that it would be impossible to know how well the system would work in action; software development techniques, as they existed at the time, were not adequate for producing a system that could be proven to work. “After all, you can make the argument that if 10,000 monkeys on typewriters could eventually produce the Encyclopaedia Britannica, 10,000 programmers could produce a battle management program,” said Wise. “But with the Encyclopaedia Britannica you can tell whether or not they did it right.” Furthermore, said Parnas, if the U.S. Star Wars program caused the Soviet Union to spend more money on countermeasures, bombers, cruise missiles, and other military hardware, the result could put the United States at a disadvantage. He was, history shows, wrong on this count. As we now know, the Soviet Union went bankrupt trying to keep up with the arms race that Reagan had started. In addition, no engineer ever had to prove that the defensive system actually worked.

estrin bio

Estrin’s story shows the strong cultural bias against women entering engineering in the middle of the twentieth century, and what it took to overcome it. She was born in 1924 in Brooklyn, New York, where her father had a small company in the wholesale shoe business. After the Great Depression, he went on the road, calling on companies in the northeast. Her mother, a socially outgoing and strong-willed woman who was active in the Democratic Party, set the expectation that young Thelma would have a professional career. “She wanted me to become a lawyer,” Estrin told historian Janet Abbate in a 2002 oral history interview.[12]

The New York public schools tended to push girls into typing and other secretarial skills, but Estrin took the academic track—she found that she was good at math. Both of Estrin’s parents died before she graduated from high school, and a family friend, a physician, suggested that she attend City College of Business Administration. When World War II broke out, Estrin took a war-training course and found a job at Radio Receptor Company, a small firm that produced radio equipment. After a few years working as a machinist with lathes and milling machines, she developed an interest in engineering. She and her husband Jerry Estrin, whom she met at City College and married before her eighteenth birthday, both entered the University of Wisconsin when the war ended. She was the only woman in her engineering class (one other started, but soon quit). Tau Beta Pi, the engineering honor society, did not admit her in her freshman year because, she said, she was Jewish and female (it admitted her husband, who was also Jewish). She protested to the dean. At the time, however, the society offered women only badges rather than full membership. She earned her Ph.D. in 1951, on the mathematics of finding the capacitance of annular plate capacitors.

Estrin’s husband got a job at the Institute for Advanced Study in Princeton, working for John von Neumann, and she entered the job market. RCA interviewed her but would not hire her because there was no ladies’ room for professional women (only for secretaries). Through a friend, she got a job at Columbia Medical School in New York City, commuting four hours each day; it got her started in the medical engineering field. With the birth of her first child, she gave up the job because of the commute and found a part-time teaching job at Rutgers University. Then she went to Israel with her husband and helped him build the WEIZAC computer at the Weizmann Institute. The work was not considered prestigious, especially in Israel, where scientists tended not to appreciate the value of computers for scientific calculations.

Estrin followed her husband, again, to Los Angeles, where he went to work for UCLA. After teaching math and drafting at a state college for a few years, she got a job at the Brain Research Institute (BRI) to organize a conference on electroencephalography—how to find the “secrets of the brain” that an electroencephalogram might convey. The idea was to take analog brain signals and digitize them so they could be analyzed by computer. She also established a data processing laboratory to interest brain researchers in using computation, with funding from NIH, and taught classes on biomedical computing for the engineering school. Digital computation held great advantages over analyzing analog signals, she realized, “because of the accuracy you could obtain. It was much more reliable. The digital world was coming forth and computer science emerged as a popular discipline.” From 1970 to 1980, she was director of BRI’s computer institute.

Estrin had to insist on a faculty appointment at BRI. Despite her years of experience, she was still considered a “research engineer” in the Anatomy Department at UCLA. She wanted an academic appointment, so she moved to UCLA’s Computer Science Department in 1980, taking a non-tenured job as Professor in Residence. It was a struggle, she said, “to get someplace I thought I should have been.” One of the classes she taught during this time, to freshmen and sophomores, was “Engineering and Society.”

In 1982, she accepted a two-year appointment to the National Science Foundation. “I received it because the Director of the NSF who was Afro-American,” she said. “I had known him through the IEEE. He was the one who asked me to apply.” After she left, computer science became its own division. Her husband stayed behind in Los Angeles for her first year and then took a sabbatical year and followed her—for a change—to Washington D.C. When she returned to UCLA, she became Assistant Dean for Continuing Education in the engineering school.

LED history

A Challenge to Incandescent Lights

For more than a century, the incandescent light bulb stayed pretty much the same as the ones Thomas Edison had made. Incandescents were inefficient, however, wasting much of the electricity they consumed as heat. The impetus to find a replacement came when governments began to encourage more efficient alternatives. A far more energy-efficient technology was standing ready: the light-emitting diode (LED).

The beginnings of LEDs go back to 1907, when H.J. Round of Marconi Labs discovered electroluminescence, in which a substance emits light when an electric current is passed through it. Years later, with the advent of quantum mechanics, the phenomenon was understood to occur when electrons in the semiconductor recombine with electron holes, releasing photons, or light particles—the color corresponding to the bandgap of the semiconductor. LEDs that emitted infrared light were developed in the 1960s and quickly became a component of circuits, particularly those used in consumer electronics.
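The correspondence between bandgap and color follows from the photon-energy relation E = hc/λ, which gives the convenient rule of thumb λ(nm) ≈ 1240 / E_g(eV). A quick check with representative, approximate bandgap values (illustrative figures, not drawn from the text):

```python
# Photon wavelength from semiconductor bandgap: lambda = h*c / E.
PLANCK_EV_NM = 1239.84      # h*c expressed in eV*nm

def wavelength_nm(bandgap_ev):
    """Emission wavelength in nanometers for a given bandgap in eV."""
    return PLANCK_EV_NM / bandgap_ev

# Approximate room-temperature bandgaps of common LED materials:
for material, eg in [("GaAs (infrared)", 1.42),
                     ("GaP (green-yellow)", 2.26),
                     ("InGaN (blue)", 2.7),
                     ("GaN (near-ultraviolet)", 3.4)]:
    print(material, round(wavelength_nm(eg)), "nm")
```

The wide bandgap of gallium nitride and its indium alloys is exactly what pushes emission from the red end of the spectrum up into the blue, which is why the nitrides discussed below mattered so much.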

INSERT: Fig. 4.40. LEDs

If certain problems could be overcome, LEDs offered advantages over incandescent lights for general lighting. By some estimates, LEDs required only about ten percent of the energy of incandescent bulbs for a given output of light. They also lasted longer and were smaller. But two obstacles stood in the way of using LEDs to replace incandescent lights. The first was color: no substance had yet been found that produced white light in the visible spectrum. White light could, however, be made by combining three primary colors, and while scientists had developed red and green LEDs by the 1980s, blue remained elusive.

Shuji Nakamura set out to solve that. After receiving a master's degree from the University of Tokushima in Japan in 1979, Nakamura joined Nichia Chemical Industries Ltd. In 1988, he began a yearlong position as a visiting research associate at the University of Florida, where he learned how to perform metal organic chemical vapor deposition (MOCVD). Using this technique, he began research on group-III nitride materials. By 1993, he had found a way of making a bright blue LED out of gallium nitride.

“That’s an astonishing achievement,” said Robert N. Hall, a semiconductor engineer at GE and an IEEE Fellow and award winner, in a 2004 interview. Nakamura “somehow talked his boss into giving him a million dollars or so, so he can go to Florida and learn how to run MOCVD apparatus, take it back to Japan, and use this on gallium nitride and try to make lasers, and he succeeded. It’s an amazing story.”[13]

Nakamura’s invention revolutionized lighting technology. Engineers now had LED materials for all three primary colors needed to make white light. What remained was developing methods of combining them into one lighting element, work that proceeded quickly after Nakamura’s discovery.

Nakamura, along with Isamu Akasaki and Hiroshi Amano, shared the 2014 Nobel Prize in Physics. Nakamura also received the IEEE Jack A. Morton Award (1998) and the Benjamin Franklin Medal (2002). In 2003, he was elected a member of the National Academy of Engineering (NAE) in the United States.

INSERT: Fig. 4.41. Photograph of Isamu Akasaki and Hiroshi Amano

Commercial white-light LEDs combining the three colors were developed in the mid-1990s and began to appear in commercial products, ranging from flashlights to backlights for cellphone screens, televisions, and computer displays, in the 2000s, growing into a multi-billion-dollar industry. But when it came to replacing incandescent bulbs and other energy-intensive sources of general lighting, LEDs were still not ready for the market. LEDs tended to lose their efficiency at currents high enough to produce light for purposes of illumination—a phenomenon known as “droop.”

Many engineers, including Nakamura, struggled to explain this mysterious power drop. Various theories were proposed and then discarded. In February 2007, researchers at Philips Lumileds Lighting Co. claimed that they had solved the problem, presenting a paper that year arguing that droop was caused by a type of interaction between electrons and “holes” that produces no light. A few years later, E. Fred Schubert at Rensselaer Polytechnic Institute, in Troy, New York, working with Samsung, came up with a theory that attributed droop to a leakage of electrons, and claimed to have raised power output by twenty-five percent at high currents.

Since that time, engineers have worked to refine the quality of the light that white-light bulbs produce to more closely match the warmth of incandescents. Fueled by steady improvements and regulations requiring improved energy efficiency, the industry grew steadily and ultimately challenged the incandescent light bulb.

dotcom bubble

Dotcom Bubble (1995-2001) and Bust (2001-2002)

In the mid-to-late 1990s, as access to the World Wide Web became more common around the world, all eyes turned to internet-based companies as the future of commerce, leading to excessive market speculation, reckless “fad” investments, and an intense focus on marketing over substance. Entrepreneurs’ overly optimistic expectations of the potential of the Internet created the infamous “Dotcom bubble” (also known as the “Internet bubble”) of the latter half of the 1990s. From 1995 to 2001, the stock market experienced excessive speculation in internet-related companies even as use and adoption of the Internet increased rapidly. In 2001 and 2002, the bubble burst and equities entered a bear market. The crash saw the tech-heavy NASDAQ index, which had risen five-fold between 1995 and 2000, tumble from a peak of 5,048.62 on 10 March 2000 to 1,139.90 on 4 October 2002, a fall of more than 77 percent. The number of initial public offerings plummeted. Many of the most promising companies filed for bankruptcy, including WorldCom and

INSERT: Fig. 5.2. IEEE Spectrum cover regarding Dotcom boom and bust

smart grid

Smart Power Grids

INSERT: Fig. 5.9. photo/map depicting the power grid failure

INSERT: Fig. 5.10. Photo, national newspaper headlines about blackout and disruption

INSERT: Fig. 5.11. Photo, life at a standstill without electricity?

While IEEE and the world struggled to recover from the shock of 9/11 and the economic downturn, another blow arrived. This time it came from one of the founding technologies of the electrical engineering field a century before: the power grid. On 14 August 2003, a disturbance that began in northern Ohio caused the regional power grid to wobble. The fluctuating flow of electricity on the transmission wires caused a cascade of failures that spread across eight U.S. states and into Ontario, Canada. The blackout put 265 generators out of commission, cutting off power for about fifty million people. Amtrak trains stalled on the Washington-New York-Boston corridor. Gas stations were unable to pump fuel. Oil refineries shut down, leading to a spike in gas prices for days after. Factories and stock markets closed.

INSERT: Fig. 5.12. Photo, IEEE Spectrum front cover(s)

Power returned to most of the region within twenty-four hours, but this largest blackout in U.S. history triggered a public reckoning for power engineers and utility operators. They had to explain to the public how they could have allowed such a disaster to occur. There was much finger-pointing: New York blamed Canada, Canada blamed Pennsylvania, and Ohio came in for criticism as well. Two hours into the blackout, former U.S. Energy Secretary Bill Richardson, then governor of New Mexico, went on television to declare that the United States was “a superpower with a Third World electrical grid.” However, while blackouts dealt short-term blows to the economy and possibly to the public perception of electrical engineers and engineering, engineers familiar with the application of chaos theory to power grids were less surprised. Earlier work on dynamic systems suggested that blackouts were an inevitable feature of such complex systems.

In the 1990s, James Thorp of Cornell University built small-scale physical models of power grids and found that they tended to fail in fractal patterns. John Doyle, an electrical engineer at Caltech, followed up that work by studying data collected by the North American Electric Reliability Council of Princeton, New Jersey, which promoted voluntary standards for the electric power industry and had been collecting hard data on blackouts since 1984. Doyle found that failures tended to follow a power-law curve, a signature of a complex system. Doyle, an IEEE member, received numerous awards, including the IEEE W.R.G. Baker Prize Paper Award, two IEEE Control Systems Society George S. Axelby Outstanding Paper Awards, and the IEEE Power Engineering Society Hickernell Prize Award. The 2003 blackout fit Thorp’s power-law theory: he reckoned that a blackout of that magnitude should occur once every thirty-five years on average, and it had been thirty-eight years since the blackout of 1965, which deprived thirty million people in the northeastern U.S. and Canada of power.
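That style of reasoning can be illustrated with a toy calculation. If blackout sizes follow a power-law (Pareto) tail, the expected interval between events above a given size falls out directly. Every number below is hypothetical, chosen only to show the shape of the argument; none comes from Thorp's or Doyle's actual data.

```python
# Toy illustration (hypothetical parameters, not the actual NERC data):
# with a power-law tail, very large blackouts are rare but expected.
def pareto_tail(x, x_min, alpha):
    """P(size > x) for a Pareto distribution with scale x_min, exponent alpha."""
    return (x_min / x) ** alpha

events_per_year = 10.0   # assumed rate of blackouts larger than x_min
x_min = 0.5              # smallest counted blackout, millions of customers
alpha = 1.2              # assumed tail exponent

big = 50.0               # a 50-million-customer event, like 2003
p = pareto_tail(big, x_min, alpha)
expected_gap_years = 1.0 / (events_per_year * p)
print(round(expected_gap_years, 1))   # a few decades between such events
```

The point of the exercise is qualitative: under a power law, an event the size of the 2003 blackout is not an anomaly but a statistical inevitability on a timescale of decades.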

The blackout led to calls for investment in the U.S. power grid. The Electric Power Research Institute (EPRI) favored control devices that could moderate power flows and even shut them down in the event of a surge. During the financial crisis of 2008-2009, the Federal government made four billion dollars available in its stimulus package for smart grid technology. EPRI argued that smart grids could cut down on the frequency of power outages. Sensors and meters in customers’ homes could give the utilities real-time information on electricity usage and power outages as they developed, making the systems more reliable.

EPRI also said that smart grid technology could make it possible to integrate solar panels on the rooftops of customers’ homes into the grid—a practice known as net metering. In 2005, all U.S. utilities were required to offer net metering “upon request.” As more states offered net metering to their residents, many utilities began to object to the policy, arguing that customers who fed electricity from solar panels back into the grid were not paying their fair share of the utility company’s overhead. A study by EPRI found that deploying digital controls across the full U.S. grid would cost seventeen billion to twenty-four billion dollars a year. About seventy percent of that cost, EPRI estimated, would go to upgrading substations, lines, poles, meters, billing, and communication systems.

The stimulus funding allocated to smart grids came as utility planners debated how best to prepare the grid for a ramping up of renewable energy. With wind-turbine farms and solar plants cropping up in the southwest, planners were anticipating having to move vast amounts of power to the industrial areas to the north and east. Renewables and net metering would strain the capabilities of the current grid, which had undergone little fundamental change since the beginning.

“Many implementation decisions that still survive today were first made using the limited emerging technology available 120 years ago… Some of the now-obsolete power grid assumptions and features, such as centralized unidirectional transmission and distribution, represent a vision of what was thought possible in the 19th century… In part, the situation is a result of an institutional risk aversion: Utilities naturally feel reluctant to use untested technologies on a critical infrastructure they have been charged with defending against any failure,” wrote IEEE staff and volunteer leader Mel Olken in IEEE Power and Energy Magazine (March/April 2009).

A confluence of technologies was increasing the possibilities for managing electrical grids. Engineers could take direct phasor measurements of voltages on power lines and synchronize them using Global Positioning System (GPS) timing. The result was a quicker and more responsive way of managing activity on the grid: GPS made it possible to take real-time measurements and transmit them to a central office, where they could be combined with measurements from other parts of the grid and analyzed. In the March/April 2009 issue of IEEE Power and Energy Magazine, Jay Giri et al. wrote:

These measurements are crucial to the fast detection and monitoring of system security indices and to triggering a dynamic security assessment process…The objective is to provide the power system and the EMS operator with timely, pertinent, and relevant visual information, along with enhanced tools, to detect and identify imminent problems as early as possible. Wide-area control, to some extent, also refers to automatic healing capabilities because it proposes decisive smart control actions and topology changes with the goal of maintaining the integrity of the entire interconnected grid under adverse conditions. Schemes for dynamic islanding and smart, ‘optimal’ load-shedding based on actual conditions are being developed to maintain the integrity of the transmission system.
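The synchrophasor idea behind these wide-area measurements can be sketched in a few lines. Assuming two buses whose voltage waveforms are sampled against a shared GPS-derived time base, a single-bin DFT correlation recovers each bus’s phasor, and the angle between buses comes out directly. This is illustrative code, not a utility implementation, and the 20-degree angle is an invented example value.

```python
# Illustrative synchrophasor estimate: recover the 60 Hz phasor of each
# bus's sampled voltage, then compare angles across buses.
import cmath
import math

def phasor(samples, f, fs):
    """Estimate the complex phasor of a sinusoid at frequency f from
    uniformly sampled data (a single-bin DFT correlation)."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(samples))
    return 2 * acc / n

fs, f = 1920.0, 60.0               # 32 samples per 60 Hz cycle
t = [k / fs for k in range(192)]   # exactly six cycles of data

# GPS time-stamping means both records share the same t = 0, so the
# two phasor angles are directly comparable.
bus_a = [math.cos(2 * math.pi * f * tk) for tk in t]
bus_b = [math.cos(2 * math.pi * f * tk - math.radians(20)) for tk in t]

delta = math.degrees(cmath.phase(phasor(bus_a, f, fs)) -
                     cmath.phase(phasor(bus_b, f, fs)))
print(round(delta, 1))   # angle of bus A relative to bus B, in degrees
```

A growing angle difference between two buses is one of the early-warning indicators such wide-area monitoring systems watch for.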

flat panel tv history

Display Revolution

The cathode-ray tube was the original computer display of choice, but it was not long before flat-panel displays, with their thin form factor, began to edge the CRT out. By the middle of the 2000s, the quality of liquid crystal displays surpassed that of CRTs for the first time, according to Adrianus J. S. M. De Vaan in the Journal of the Society for Information Display. At about the same time, sales of LCDs surpassed those of CRTs as well.

INSERT: Fig. 5.17. Photo, RCA’s David Sarnoff Research Center, Princeton, in 1960s

INSERT: Fig. 5.18. Photos, collages of consumer products with LCD (with dates)

INSERT: Fig. 5.19. Photo, special conferences and proceedings assoc. with LCD technical and commercial development

LCDs are based on MOSFETs—in particular, a variant called the thin-film transistor (TFT), developed in the early 1960s at RCA’s David Sarnoff Research Center in Princeton, New Jersey. Between 1964 and 1968, a team of engineers and scientists led by George H. Heilmeier devised a method for electronic control of light reflected from liquid crystals and demonstrated the first liquid crystal display. The displays were first used for low-cost, low-power applications such as watches and calculators.

Over the years, engineers refined the technology and improved the quality of images, increasing resolution, adding color, and increasing screen size. In 1988, Sharp introduced a 14-inch thin-film-transistor liquid-crystal display for televisions, comparable in size to many CRTs. The product was flat, consumed little power, and weighed less than an equivalent CRT. Its color reproduction was on par with CRTs, and it could be used even in high ambient light. The invention cleared the way for LCDs to be used in televisions and computers.

Stephen Forrest, Richard Friend, and Ching Wan Tang did pioneering research in organic light-emitting diodes (OLEDs) that resulted in the development and rapid commercialization of flat-panel displays. Their work was used in state-of-the-art high-definition televisions and was incorporated into portable electronic devices. OLED displays began replacing small liquid crystal displays (LCDs) in handheld electronics such as cell phones, MP3 players, and digital cameras because they consumed less power, were thinner and lighter, and could be made using inexpensive manufacturing processes such as inkjet printing. OLEDs also have exceptional video image qualities that have enhanced the quality of solid-state general lighting.

Methods of biomimicry have also been used to reduce power consumption. The idea was to reduce the amount of visual information the displays needed to handle by eliminating information that the human eye could not perceive. It was a visual version of methods used in audio and telephony, in which systems are designed to match the frequency range of the human ear, reducing the transmission of extraneous information.
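The perceptual idea described above can be illustrated with a sketch that is not from the original text: chroma subsampling, a standard video technique that exploits the eye's lower sensitivity to color detail than to brightness detail. Storing brightness (luma) at full resolution and color (chroma) at half resolution in each dimension halves the raw data with little perceptible loss. The function name and figures here are illustrative assumptions, not the specific methods the article discusses.

```python
def frame_bytes(width, height):
    """Compare raw frame sizes, 8 bits per sample.

    full: every pixel carries luma (Y) and two chroma samples (Cb, Cr).
    sub:  4:2:0 subsampling keeps full-resolution luma but stores chroma
          at half resolution horizontally and vertically.
    """
    full = width * height * 3
    sub = width * height + 2 * (width // 2) * (height // 2)
    return full, sub

full, sub = frame_bytes(1920, 1080)
print(full, sub, sub / full)  # the subsampled frame is half the size
```

The same principle, discarding detail the viewer cannot perceive, underlies perceptual audio codecs and the display power-saving schemes the article describes.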

As LCDs gained a larger place in the mass market, other display technologies began to emerge. Military researchers were producing volumetric displays that rendered images in 3-D space rather than on a flat screen. Commercial firms began to develop low-cost technologies for consumer markets that rendered 3-D images without the use of eyewear. One technique, known as swept volume, created images using lasers bounced off a rotating screen. Another scheme used a projector that shone through a stack of LCDs.

The first applications were aimed at scientific, engineering, and medical applications, such as “a doctor guiding a catheter inside a beating heart, a geologist developing plans to extract oil from deep underground reservoirs, or a baggage screener looking for knives and bombs in carry-on luggage,” wrote Allen Sullivan in IEEE Spectrum (April 2005).

Researchers were also looking to bypass the display hardware and project images directly onto the retina. Using semiconductor lasers or light-emitting diodes to transmit the three primary colors, these displays offered the advantage of making possible so-called augmented reality, in which images are superimposed over the user’s normal field of vision, without interfering with their view of the real world. By using cameras and other sensors, augmented reality could allow users to see through solid objects or receive information without having to direct their gaze to a screen.

Early versions of such scanned-beam displays were introduced in the automobile industry to display repair data for technicians. Microvision Inc. of Bothell, Wash., introduced such a system to auto dealers in 2004 at the National Automobile Dealers Association Convention and Exposition in Las Vegas. The technician wore a visor-mounted display and carried a belt-mounted wireless computer with a touch-pad control.

“Like a high-tech monocle, a clear, flat window angled in front of the technician’s eye reflects scanned laser light to the eye,” wrote John R. Lewis in IEEE Spectrum (May 2004). “That lets the user view automobile diagnostics, as well as repair, service, and assembly instructions superimposed onto the field of vision.” The information came from the automaker’s web site and was transmitted over an IEEE 802.11b Wi-Fi network.



Predictions of the end of Moore’s Law have abounded ever since Gordon Moore, electronics pioneer and co-founder of Intel, made the observation in 1965. The law, which is really a particularly insightful observation that has since been recognized as an IEEE Milestone, states that the density of integrated circuits doubles roughly every two years. For the next half century, semiconductor engineers continued to invent ways of overcoming the physical obstacles to greater circuit density.
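The arithmetic behind the law is simple compounding, which a short sketch can make concrete. This is an illustration, not a claim from the text; the starting figure is the widely cited transistor count of Intel's first microprocessor, and the doubling period is an assumption of the sketch.

```python
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count assuming one doubling per period (years)."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# The Intel 4004 (1971) had roughly 2,300 transistors. Twenty doublings
# later (2011), the projection reaches the billions—consistent with the
# actual chips of that era.
print(round(projected_transistors(2300, 1971, 2011)))  # ~2.4 billion
```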

Although Moore’s Law held up well for integrated circuits, it did not apply to other important aspects of electronic systems—particularly resistors, capacitors, antennas, waveguides, crystals, and amplifiers. These discrete components often sat on printed circuit boards, in all their bulk, next to sleek ICs containing tens or even hundreds of millions of transistors. The denser integrated circuits became, the higher the relative cost of these components in a system.

In the 2000s, as sales of smartphones and other products with miniaturized systems soared, the problem of packaging became an urgent matter of competitive advantages. Engineers were also looking down the barrel of even more highly miniaturized devices in the years to come: capsules that could monitor a person’s health from inside the body and communicate with devices outside wirelessly, sensors that could be scattered throughout a smart city, and so forth. The need to come up with better ways of integrating all components became increasingly apparent.

In the 2000s, researchers began looking at ways of combining ICs with micrometer-scale thin-film versions of discrete components. Rao R. Tummala, an IEEE Fellow who did pioneering work on plasma displays and ceramic and thin-film multichip packaging, began developing “system on package” (SOP) technology. His group eliminated individual component packages, which were bulky and required relatively slow wiring connections. Instead, they embedded thin-film components only a few micrometers thick directly into a system package. They also stripped ICs of their bulky packaging and embedded them into the SOP package.

Fig. 5.33. Photo, Rao R. Tummala, IEEE Fellow

Tummala and his team were able to use a wide range of semiconductors—silicon, gallium arsenide, silicon germanium and others—and combine different technologies depending on which was best suited for a particular circuit. The packages included not only silicon ICs but also passive components. The development also opened the way for incorporating biosensors made of nanotubes and microelectromechanical systems (MEMS), as they were developed. “Because we’re not forced to use any particular technology, the time it takes to design and fabricate a system is much shorter than before, and time to market is shorter as well,” wrote Tummala in IEEE Spectrum. Tummala was instrumental in the establishment of the first National Packaging Research Center in the United States for leading-edge research, cross-disciplinary education, and global industry collaborations. He has authored textbooks that shaped the modern packaging field.

Other researchers developed ways of stacking circuits one on top of another, putting, say, memory components on top of application-specific ICs, reducing the distance that signals need to travel between them. “The sets stack like Lego blocks, typically with logic on the bottom and memory on top,” wrote Pushkar Apte, W. R. Bottoms, William Chen, and George Scalise in IEEE Spectrum (Feb. 2011), describing a technology called package-on-package (PoP). “Such structures are adaptable—manufacturers, when necessary, can vary the memory density by swapping out the piece of the stack that holds the memory components, for example.” Engineers would test each layer of the package before stacking them.

Semiconductor companies began to sell their ICs pre-packaged into systems, offering their customers one-stop-shop solutions. Smartphone manufacturers, for example, used these kinds of packages. These systems-packaging approaches were a departure from the integration technologies of the 1970s, in which designers had to compromise on the optimal semiconductor processing technology for individual components for the sake of fitting them all onto one chip. Those compromises sacrificed performance and power consumption. Systems packaging accommodated disparate manufacturing technologies, which allowed designers to mix and match components without compromising on manufacturing.

  1. David Porter Heap, Report on the International Exhibition of Electricity, Paris, August to November 1881 (Washington, D.C.: Government Printing Office, 1884).
  2. IEEE Milestone, No. 27, Volta’s Electrical Battery Invention, 1799. Dedicated 15 Sept. 1999, Como, Italy.
  3. IEEE Milestone, No. 86, Maxwell’s Equations, 1860-1871. Dedicated 13 Aug. 2009. This IEEE Milestone commemorates James Clerk Maxwell’s unified theory of electricity, magnetism, and light.
  4. IEEE Milestone, No. 13, Demonstration of Practical Telegraphy, 1838. Dedicated 7 May 1988.
  5. IEEE Milestone, No. 32, County Kerry Transatlantic Cable Station, 1866. Dedicated 13 July 2000, County Kerry, Ireland.
  6. IEEE Milestone, No. 5, Landing of the Transatlantic Cable, 1866. Dedicated 15 June 1985, Heart's Content, Newfoundland.
  7. IEEE Milestone, No. 1, The Vulcan Street Plant, 1882. Dedicated 15 Sept. 1977, Appleton, Wisconsin. ASME National Historic Engineering Landmark, jointly designated with ASCE and IEEE.
  8. IEEE Milestone, No. 25, Westinghouse Radio Station KDKA, 1920. Dedicated 1 June 1994.
  9. Robert Metcalfe, an oral history conducted in 2004 by Robert Colburn, IEEE History Center, Hoboken, NJ, USA. Quotations attributed to Metcalfe are from this oral history.
  10. Robert Kahn, an oral history conducted in 2004 by Michael Geselowitz, IEEE History Center, Hoboken, NJ, USA. Quotations attributed to Kahn are from this oral history.
  11. David G. Messerschmitt, an oral history conducted in 2003 by John Vardalas, IEEE History Center, Hoboken, NJ, USA. Quotations attributed to Messerschmitt are from this oral history.
  12. Thelma Estrin, an oral history conducted in 2002 by Janet Abbate, IEEE History Center, Hoboken, NJ, USA. See two additional Thelma Estrin oral histories. Thelma Estrin, an oral history conducted in 1992 by Frederik Nebeker, IEEE History Center, Hoboken, NJ, USA. Thelma Estrin, interviewed by Deborah Rice, Profiles of SWE Pioneers Oral History Project, Walter P. Reuther Library and Archives of Labor and Urban Affairs, Wayne State University, 16 March 2006.
  13. See Shuji Nakamura, ETHW,; and Robert N. Hall, an oral history conducted in 2004 by Hyungsub Choi, IEEE History Center, Hoboken, NJ, USA.