The birth and rise of Ethernet: A history

Today, no company would consider using anything except Ethernet for its wired local-area network. But it wasn't always that way. Steven Vaughan-Nichols tracks the history of Ethernet, and its once-upon-a-time networking protocol competitors.

Nowadays, we take Ethernet for granted. We plug a cable into a wall jack or a switch and we get the network. What's to think about?

It didn't start that way. In the 1960s and 1970s, networks were ad hoc hodgepodges of technologies with little rhyme and less reason. But then Robert "Bob" Metcalfe was asked to create a local-area network (LAN) for Xerox's Palo Alto Research Center (PARC). His creation, Ethernet, changed everything.

Back in 1972, Metcalfe, David Boggs, and other members of the PARC team assigned to the networking problem weren't thinking of changing the world. They only wanted to enable PARC's Xerox Alto—the first personal workstation with a graphical user interface and the Mac's spiritual ancestor—to connect and use the world's first laser printer, the Scanned Laser Output Terminal.

It wasn't an easy problem. The network had to connect hundreds of computers simultaneously and be fast enough to drive what was (for the time) a very fast laser printer.

Metcalfe didn't try to create his network from whole cloth. He used previous work for his inspiration. In particular, Metcalfe looked to Norman Abramson's 1970 paper about the ALOHAnet packet radio system. ALOHAnet was used for data connections between the Hawaiian Islands. Unlike ARPANET, in which communications relied on dedicated connections, ALOHAnet used shared UHF frequencies for network transmissions.

ALOHAnet addressed one important issue: how the technology coped when two radios broadcast at the same time and their packets collided. The nodes would rebroadcast these "lost in the ether" packets after waiting a random interval of time. While this primitive form of collision recovery worked relatively well, Abramson's analysis showed that ALOHAnet's shared channel would saturate at only 17 percent of its theoretical maximum capacity.

Metcalfe had worked on this problem in grad school, where he found that with the right packet-queuing algorithms, you could use up to 90 percent of the potential traffic capacity. His work would become the basis of Ethernet's media access control (MAC) rules: Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
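The heart of CSMA/CD's collision handling is truncated binary exponential backoff: after each successive collision, a station waits a random number of slot times drawn from a range that doubles each attempt. Here is a minimal sketch of that rule (the parameter values match the later 10 Mbps 802.3 standard; the function name is my own):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Truncated binary exponential backoff, as in 802.3 CSMA/CD:
    after the Nth successive collision, wait a random number of slot
    times drawn uniformly from 0 .. 2**min(N, 10) - 1."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2**k - 1)

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet: 512 bit times

# After each successive collision the expected wait doubles, spreading
# retransmissions out so colliding stations are unlikely to collide again.
for attempt in range(1, 5):
    slots = backoff_slots(attempt)
    print(f"collision #{attempt}: wait {slots} slots "
          f"({slots * SLOT_TIME_US:.1f} microseconds)")
```

Because the backoff window grows with each collision, the scheme adapts to load automatically: a lightly loaded network retries almost immediately, while a congested one spreads stations out over ever-longer intervals.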

For PARC, though, a wireless solution wasn't practical. Instead, Metcalfe turned to coaxial cable. But rather than call it CoaxNet or stick with the original name, Alto Aloha network, Metcalfe borrowed an obsolete phrase from 19th century scientific history: ether. In 19th century physics, "luminiferous ether" was the term used for the medium through which light traveled.

"The whole concept of an omnipresent, completely passive medium for the propagation of magnetic waves didn't exist," Metcalfe explained in a 2009 interview. "It was fictional. But when David [Boggs] and I were building this thing at PARC, we planned to run a cable up and down every corridor to actually create an omnipresent, completely passive medium for the propagation of electromagnetic waves. In this case, data packets." Appropriately enough, the first nodes on the first Ethernet were named Michelson and Morley, after the scientists who had discovered the non-existence of ether.

On May 22, 1973, Metcalfe wrote a memo to PARC management explaining how Ethernet would work. The coaxial cable was laid in PARC's corridors, and the first computers were attached to this bus-style network on November 11, 1973. The new network boasted speeds of 3 megabits per second (Mbps) and was an immediate hit.

Metcalfe's first Ethernet sketch

For the next few years, Ethernet remained a closed, in-house system. Then, in 1976, Metcalfe and Boggs published a paper, "Ethernet: Distributed Packet-Switching for Local Computer Networks." Xerox patented the technology, but unlike so many modern companies, Xerox was open to the idea of opening up Ethernet to others.

Metcalfe, who left Xerox to form 3Com in 1979, shepherded this idea and got DEC, Intel, and Xerox to agree to commercialize Ethernet. The consortium, which became known as DIX, had its work cut out for it. Aside from internal conflicts (gosh, we've never seen any of those since then, have we?), the IEEE 802 committee, which DIX hoped would make Ethernet a standard, wasn't about to rubber-stamp Ethernet. It took years, but on June 23, 1983, the IEEE 802.3 committee approved Ethernet as a standard. That is to say, Ethernet's CSMA/CD was approved. There were some slight differences between 802.3 and what had by then evolved into Ethernet II (a.k.a. DIX 2.0).

By now, Ethernet had reached a speed of 10 Mbps and was on its way to becoming wildly popular. (At least among networking geeks, the people who could name the seven layers of the OSI model off the top of their head. Our sort of folks, that is.) In part, that was because the physical design was improving. The first Ethernet used 9.5-mm coaxial cable, also called ThickNet, or as we used to curse it as we tried to lay out the cables, Frozen Yellow Snake. To attach a device to this 10Base5 physical media, you had to drill a small hole in the cable itself to place a "vampire tap." It was remarkably hard to deploy.


So-called Thinnet (10Base2) used cable TV-style cable, RG-58A/U, which made it much easier to lay out network cable. In addition, you could now easily attach a computer to the network with T-connectors. But 10Base2 did have one major problem: If the cable was interrupted somewhere, the entire network segment went down. In a large office, tracking down the busted connection that had taken down the entire network was a real pain in the rump. I speak from experience.

By the late 1980s, both 10Base5 and 10Base2 began to be replaced by unshielded twisted-pair (UTP) cabling. This technology, 10BaseT, and its many descendants (such as 100Base-TX and 1000Base-T) are what most of us use today.

In the early '80s, Ethernet faced serious competition from two other networking technologies: token bus, championed by General Motors for factory networking, and IBM's far more popular Token Ring, IEEE 802.5.

Token Ring's bandwidth usage was more efficient. Its larger packet sizes—Token Ring at 4 Mbps had a packet size of 4,550 bytes, compared with 10 Mbps Ethernet's 1,514 bytes—made it effectively faster than Ethernet. And 16 Mbps Token Ring looked clearly faster to relative laymen who couldn't get their heads around true line speed.
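A quick back-of-envelope calculation shows why larger frames mattered. The 4,550- and 1,514-byte frame sizes come from the comparison above; the usable-payload figures of roughly 1,500 and 4,500 bytes are my own estimates, since each frame also carries headers and trailers:

```python
import math

def frames_needed(data_bytes, max_payload):
    """Number of frames required to carry data_bytes of payload."""
    return math.ceil(data_bytes / max_payload)

MB = 1_000_000
# Payload assumptions: ~1,500 usable bytes per 1,514-byte Ethernet frame,
# ~4,500 usable bytes per 4,550-byte Token Ring frame (the header/trailer
# split is an estimate, not a figure from the article).
print(frames_needed(MB, 1500))  # Ethernet: 667 frames
print(frames_needed(MB, 4500))  # Token Ring: 223 frames
```

Moving the same megabyte in a third as many frames means a third as many per-frame overheads and media-access delays, which is why the raw line rate alone didn't tell the whole story.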

Another Ethernet challenger was the Attached Resource Computer Network (ARCNET). Created in the 1970s as a proprietary network by Datapoint Corp., ARCNET, like Ethernet and Token Ring, was opened up in the 1980s. It was also a token-based networking protocol, but it used a bus rather than a ring architecture. In its day, the late '70s, its simple bus-based architecture and 2.5 Mbps speed made it attractive.

Several things assured that Ethernet would win. First, as Urs von Burg describes in his book, The Triumph of Ethernet, DEC decided early on to support Ethernet. This gave the fledgling networking technology significant support in the IEEE standardization process.

Ethernet was also a far more open standard. IBM's Token Ring was open in theory, but Metcalfe has said that in reality, non-IBM Token Ring equipment seldom worked with IBM computers. Ethernet soon had more than 20 companies supporting it. Its cost-competitive, standards-based products worked together. (Most of the time. With late-1980s networks, most of us tended to choose one hardware vendor for Ethernet cards and stick to that brand.)

ARCNET, which only moved up to 20 Mbps in 1992 with ARCNET Plus, had fallen behind both rivals by the late 1980s and early 1990s. Ethernet also quickly closed the technology gap with Token Ring, in no small part because it was open and had many developers working on it.

In particular, 10BaseT, which became an IEEE standard in 1990, allowed the use of hubs and switches. This freed Ethernet from its often cumbersome bus architecture and offered the flexibility of star architecture. This change made it much easier for network administrators to manage their networks and gave users far more flexibility in placing their PCs. By the early 1990s, 10BaseT Ethernet was also much cheaper than Token Ring, no matter which metric you used.

The final nail in Token Ring's coffin came with the widespread introduction of Ethernet switching and 100 Mbps Ethernet. Today, there may still be old Token Ring networks running, but they're historical curiosities. At the same time, 802.11n and other Wi-Fi technologies have become immensely popular. But to supply those Wi-Fi access points with network connectivity, Ethernet will always have a role.