The Advanced Research Projects Agency (ARPA) was founded by President Dwight D. Eisenhower in 1958, almost immediately after the Sputnik crisis of 1957. The intent was purely to fund American technological endeavors to keep the United States ahead of the Soviet Union’s better-funded scientists. Eisenhower didn’t trust the hierarchical military complex, but he loved the scientific community. He surrounded himself with scientists in his administration and was the first president to host dinners with scientists and engineers as the guests of honor. He was also the first to directly connect science to defense. Both he and his Secretary of Defense, Neil McElroy, saw great benefit in “unfettered” scientific research. They were firm believers that scientists should be well funded to pursue almost anything of interest. This, they firmly believed, would produce remarkable, though unpredictable, results.
Thus the intent behind ARPA was to pull oversight of government-funded research out of the hands of the military and into the hands of a civilian appointed by the administration. ARPA’s mission was to fund “high-risk, high-gain” research that the military would normally not fund. ARPA hit its golden years in the 1960s, funding projects that would eventually lead to vast advances not only in military technology, but also in graphical user interfaces and computer networking.
At this time, the field of computer science was barely a twinkle in anyone’s eye. Computers were large, expensive calculators that required constant maintenance to keep running. They were only just beginning the transition from “batch” processing to “time-sharing”. Batch processing was the cumbersome process of punching code onto program cards and then having a computer operator feed the cards into the computer one batch at a time; users would often wait a day or more for a batch to be processed. Time-sharing, on the other hand, gave users direct access to the computer through interactive terminals, letting them get their results back immediately. However, this technology inevitably led to users stepping on each other, as more complicated calculations consumed too many of the shared computing resources.
From a military perspective, investing in any computer was difficult. Every manufacturer used a different set of control and programming languages, as well as different hardware, to perform the same basic set of functions. The DoD was the biggest buyer of computers in the world at the time, but it couldn’t simply buy from a single manufacturer to simplify and reduce the cost of training and maintenance without breaking federal regulations requiring that all manufacturers be given equal opportunity.
In 1966, these frustrations drove Bob Taylor, director of the Information Processing Techniques Office (IPTO) under ARPA, to request funding to solve what he considered the “terminal problem”. All of the universities doing research under his purview were demanding their own computers to meet the growing needs of their researchers. On top of that, the lack of a common method for sharing research between universities was leading to duplicated work. It occurred to Taylor that a possible solution was to simply tie all the existing computers together so that computing resources could be shared across organizations. He asked that ARPA fund a test network of four nodes, to be built out from there as needed. Such a network, he argued, solved not only the shortage of computing resources, but also the DoD’s problem of selecting which computer manufacturer to contract: if the machines could all be networked together behind the same interface, then the underlying manufacturer no longer mattered. Additionally, connecting multiple computers would add redundancy, improving reliability in case one computer went down. Taylor’s grand off-the-cuff idea was approved, and he was given a million dollars to go off and make it happen.
Sharing computing resources like this had never been done before. Plenty was known at the time about sending voice communications over a complicated network, as demonstrated by AT&T’s then-monopoly over the telephone system; passing digital information over the wire was another matter. Taylor pulled in computer scientist Larry Roberts, a former MIT Lincoln Labs employee, to be his program manager and tackle the problem.
Fun Fact: In 1965, AT&T was actually approached about creating a digital network of computers by Paul Baran, an engineer at RAND. Baran had spent a good five years developing the core tenets of computer networking that would eventually be used by both ARPANET and the Internet today. His goal was to develop a network robust enough to survive a military strike. The Air Force had even agreed to pay AT&T to build and maintain the network. AT&T said no.
Roberts’ team quickly came to the conclusion that the network had to be made of identical nodes. Each node would handle routing and pass data directly to the hosts at the universities for processing. Roberts presented his idea for the ARPANET at a computer conference, where it was well received and he was exposed to computer networking concepts being developed by other researchers, specifically those of Paul Baran of RAND and Donald Watts Davies of NPL. Both of these men had independently arrived at packet-switching as a way to pass data around a network of computers. The revolutionary idea was to break digital messages into “packets”, flooding the network with pieces of the message that are then reassembled at the destination computer. This suited the “bursty” nature of data communications far better than the existing circuit-switched approach, which opened a dedicated communications line that only one call could use at a time. With packet-switching, multiple communications could share the same line on the way to their final destinations.
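The core packet-switching idea can be sketched in a few lines of modern code. This is purely an illustration, not the ARPANET implementation: a message is split into numbered packets that travel independently and may arrive out of order, and the destination reorders them by sequence number.

```python
# Toy sketch of packet-switching (illustrative only -- not the actual
# ARPANET or IMP software). A message is broken into numbered packets
# that travel independently and may arrive out of order; the destination
# host sorts them by sequence number and reassembles the message.

import random

PACKET_SIZE = 8  # payload bytes per packet (an arbitrary toy value)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Break a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort packets by sequence number and rebuild the original message."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"HELLO SRI, THIS IS UCLA CALLING"
packets = packetize(message)
random.shuffle(packets)  # simulate out-of-order arrival
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, no single line has to be held open for the whole conversation, which is exactly what made the scheme a better fit for bursty computer traffic than circuit switching.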
With the general concept now laid out, all that was left was to build it. ARPA contracted out the building of the IMPs (Interface Message Processors) to Bolt Beranek and Newman (BBN), a small consulting firm in Cambridge, Massachusetts. There a team led by Frank Heart, another former MIT Lincoln Labs employee, developed the software for the IMPs, which would be manufactured by Honeywell and distributed to the universities by Heart’s team. The final program ended up being a minuscule 6,000 words of Honeywell 516 assembly. They then developed a host-to-IMP specification that the university teams at each site could use to build a custom interface for their flavor of host.
The first four host sites were at UCLA, SRI, UC Santa Barbara, and the University of Utah. The teams at each of these sites had spent quite a bit of time thinking about host-to-host protocols. While BBN’s focus was on transporting packets over the network, the university teams cared about what messages they would be sending and how they would send and receive them. A group of representatives from each site met regularly to discuss their plans for the communications channel. This group, which referred to itself as the Network Working Group (NWG), developed the “layered” approach to protocols, later formalized in the OSI model.
Fun Fact: The International Organization for Standardization (ISO) developed the OSI model in the early 1980s and pushed for it to be adopted over TCP/IP as the standard for computer networking, despite the fact that TCP/IP and the Internet were already in use. Once the OSI standard became official, most major companies and countries, including the U.S., agreed to it. However, TCP/IP was already entrenched as the foundation of the Internet. And with the dawn of Ethernet and the popular operating system UNIX, which was distributed with built-in support for TCP/IP, networking exploded and TCP/IP wasn’t going anywhere. Today, the OSI model is still referenced as the seven-layer networking model, but it is more a frame of reference and has little technical impact on how things are actually implemented.
By September 1969, the first IMP had arrived at UCLA; two months later, SRI got its IMP. By the end of 1969, all four IMPs were up and running at their universities and connected over AT&T lines. Aside from some minor congestion issues, ARPANET was considered a success. In 1970, BBN connected its own host to the network, completing the first cross-country communications link. From there, BBN pounded on the network, collecting critical data on network performance and limitations that it could use to improve the system. By the summer, four more IMPs, at MIT, RAND, System Development Corp, and Harvard, had been added to the network. Lincoln Labs, Stanford, Carnegie-Mellon, and Case Western Reserve followed by the end of the year.
From there the technology took off. Terminals were designed to connect directly to the network without a host. The IMPs were upgraded. Better routing algorithms, flow-control schemes, and diagnostics were developed. In 1971, the Telnet protocol made its debut, contributing significantly to ARPANET’s rapid expansion. The next key protocol, the File Transfer Protocol (FTP), arrived in 1972. With that in place, Ray Tomlinson developed a pseudo-email protocol that piggybacked on top of FTP, leading to the first mail clients. By 1973, email had taken off, accounting for three-quarters of all ARPANET traffic.
Between 1973 and 1975, ARPANET expanded by about one node a month. During this time other networks had been developed, including the first wireless network (called packet radio) and a satellite network. Connecting these networks together required a new common underlying protocol. In 1974, the Transmission Control Protocol (TCP) was introduced to aid the expansion of the ARPANET. A piece of TCP was broken off in 1978 to form the Internet Protocol (IP), so that TCP could focus on breaking messages into datagrams and error-checking them while IP focused on routing. Shortly after, the email functions that had been piggybacking on top of FTP were also broken out, into SMTP (the Simple Mail Transfer Protocol).
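That 1978 division of labor can be sketched with invented, hypothetical code (none of this reflects real TCP or IP packet formats): the “TCP” side segments data and error-checks it with a checksum, while the “IP” side does nothing but pick the next hop for a destination address.

```python
# Toy sketch of the TCP/IP split (hypothetical code, not the real protocols):
# "TCP" handles segmentation and error-checking; "IP" handles only routing.

import ipaddress
import zlib

def tcp_segment(data: bytes, mss: int = 4) -> list[dict]:
    """TCP's job: break a message into segments, each with a checksum."""
    return [
        {"seq": i, "payload": data[i:i + mss],
         "checksum": zlib.crc32(data[i:i + mss])}
        for i in range(0, len(data), mss)
    ]

def tcp_verify(segment: dict) -> bool:
    """The receiver's TCP re-computes the checksum to detect corruption."""
    return zlib.crc32(segment["payload"]) == segment["checksum"]

# IP's job: per-datagram forwarding. The table entries are invented for
# illustration; real routers do use longest-prefix matching like this.
ROUTING_TABLE = {"10.0.0.0/8": "imp-ucla", "10.1.0.0/16": "imp-sri"}

def ip_route(dest: str) -> str:
    """Choose the next hop whose network most specifically matches dest."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in ROUTING_TABLE.items()
               if addr in ipaddress.ip_network(net)]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

segments = tcp_segment(b"MAIL VIA SMTP")
assert all(tcp_verify(s) for s in segments)
assert ip_route("10.1.2.3") == "imp-sri"  # the /16 beats the /8
```

The point of the split is visible in the code: the routing layer never looks inside a payload, and the reliability layer never needs to know the path, so either can evolve without the other.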
ARPA, by then renamed DARPA, had handed off maintenance of the ARPANET to the Defense Communications Agency (DCA) in 1975. By the mid-1980s, other networks run by groups like the National Science Foundation (NSF), IBM, Bell Laboratories, and NASA had begun to pop up. Soon these private networks added routers, now being mass-produced by other private companies, to connect into ARPANET. Several private networks in Europe and Canada were also spun up and connected to the U.S. networks. Now having to support thousands of individual computers, the network faced new problems in naming conventions, spawning yet another protocol: the Domain Name System (DNS). DNS triggered over a year of heated debate over naming conventions before the community settled on the top-level domains edu, com, mil, int, net, gov, and org.
This ever-growing set of networks, now collectively referred to as the Internet, had grown well beyond the original ARPANET. ARPANET, with its early-1970s technology, looked archaic compared to its much faster, more powerful spawn. In 1990, the original ARPANET hardware was officially decommissioned. ARPANET had served its purpose. The Internet remained.
Throughout my 10-year career I have worked as a web developer, systems administrator, software engineer, security analyst, and now cybersecurity engineer. I currently develop software applications to automate security vulnerability and compliance scanning and reporting for a multinational financial institution.