Introduction
Published 1997
Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like “bleiner@computer.org” and “http://www.acm.org” trip lightly off the tongue of the random person on the street.1
This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet.2
In this paper,3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.
The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects – technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.
Origins of the Internet
The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his “Galactic Network” concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA,4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.
Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Massachusetts to the Q-32 in California with a low speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock’s conviction of the need for packet switching was confirmed.
In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the “ARPANET”, publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word “packet” was adopted from the work at NPL, and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.5
In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMPs). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMPs with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock’s team at UCLA.6
Due to Kleinrock’s early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart’s project on “Augmentation of Human Intellect” (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFCs.
One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock’s laboratory to SRI. Two more nodes were added at UC Santa Barbara and the University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.
Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG), working under S. Crocker, finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.
In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial “hot” application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of “people-to-people” traffic.
The Initial Internetting Concepts
The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level “Internetworking Architecture”. Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.
In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.
The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called “Internetting”. Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.
However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET, and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.
Four ground rules were critical to Kahn’s early thinking:
- Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
- Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source (a minimal sketch of such a retransmission loop follows this list).
- Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
- There would be no global control at the operations level.
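The second ground rule, best effort delivery with recovery at the source, can be illustrated with a small sketch. The Python fragment below is purely illustrative and assumes a hypothetical lossy `unreliable_send` function standing in for a real network; it is not part of any historical protocol code.

```python
# Minimal sketch of source-side retransmission over a best-effort network.
# "unreliable_send" is a hypothetical stand-in for a lossy packet network.
import random

def unreliable_send(packet, loss_probability=0.3):
    """Pretend to send a packet; return True if it (and its ack) survived."""
    return random.random() > loss_probability

def send_with_retransmit(packet, max_attempts=10):
    """Keep retransmitting from the source until the packet gets through."""
    for attempt in range(1, max_attempts + 1):
        if unreliable_send(packet):
            return attempt          # delivered on this attempt
    raise RuntimeError("gave up after repeated losses")

if __name__ == "__main__":
    attempts = send_with_retransmit(b"hello ARPANET")
    print(f"delivered after {attempts} attempt(s)")
```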
Other key issues that needed to be addressed were:
- Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
- Providing for host-to-host “pipelining” so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
- Gateway functions to allow them to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
- The need for end-end checksums, reassembly of packets from fragments and detection of duplicates, if any (a sketch of such a checksum follows this list).
- The need for global addressing.
- Techniques for host-to-host flow control.
- Interfacing with the various operating systems.
- There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
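One of those key issues, the end-end checksum, can be sketched concretely. The fragment below computes a 16-bit ones'-complement checksum of the kind the TCP/IP family later standardized (RFC 1071); it is a minimal illustration under that assumption, not the historical implementation.

```python
# Minimal sketch of a 16-bit ones'-complement checksum, the style of end-end
# checksum the TCP/IP protocols eventually adopted (see RFC 1071).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

if __name__ == "__main__":
    payload = b"end-to-end reliability"
    print(hex(internet_checksum(payload)))
```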
Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled “Communications Principles for Operating Systems”. At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn’s architectural approach to the communications side and with Cerf’s NCP experience, they teamed up to spell out the details of what became TCP/IP.
The give and take was highly productive and the first written version of the resulting approach was distributed as INWG#39 at a special meeting of the International Network Working Group (INWG) at Sussex University in September 1973. A refined version was subsequently published in 1974.7 The INWG was created at the October 1972 International Computer Communications Conference organized by Bob Kahn, et al, and Cerf was invited to chair this group.
Some basic approaches emerged from this collaboration between Kahn and Cerf:
- Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
- Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.
- It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
- Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET, of which only a relatively small number were expected to exist. Thus a 32 bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network (the sketch below illustrates this split). This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
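A minimal sketch of that original addressing assumption follows: the leading 8 bits of the 32-bit address name the network and the remaining 24 bits name the host. The example address is made up for illustration.

```python
# Minimal sketch of the original 32-bit address layout: 8 bits of network
# number, 24 bits of host number (the assumption later overturned by LANs).
def split_original_address(addr: int):
    network = (addr >> 24) & 0xFF        # leading 8 bits
    host = addr & 0x00FFFFFF             # remaining 24 bits
    return network, host

if __name__ == "__main__":
    # Hypothetical address 10.0.1.5 packed into a 32-bit integer.
    addr = (10 << 24) | (0 << 16) | (1 << 8) | 5
    net, host = split_original_address(addr)
    print(f"network {net}, host {host}")   # -> network 10, host 261
```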
The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
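The contrast between the two resulting services is still visible in today's socket interfaces. The sketch below is illustrative rather than historical: it uses Python's standard socket module on the loopback interface, where SOCK_STREAM gives TCP's reliable byte-stream service and SOCK_DGRAM gives UDP's best-effort datagram service.

```python
# Illustrative contrast of the two transport services that emerged from the
# TCP/IP split, using loopback endpoints so the example is self-contained.
import socket

# --- UDP: best-effort datagrams, no connection, no delivery guarantee ---
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))              # let the OS pick a free port
udp_port = udp_recv.getsockname()[1]

udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"datagram payload", ("127.0.0.1", udp_port))
print("UDP received:", udp_recv.recvfrom(2048)[0])

# --- TCP: connection-oriented, ordered, retransmitted byte stream ---
tcp_listen = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_listen.bind(("127.0.0.1", 0))
tcp_listen.listen(1)
tcp_port = tcp_listen.getsockname()[1]

tcp_client = socket.create_connection(("127.0.0.1", tcp_port))
conn, _ = tcp_listen.accept()
tcp_client.sendall(b"stream payload")        # loss/ordering handled by TCP
print("TCP received:", conn.recv(2048))

for s in (udp_recv, udp_send, tcp_client, conn, tcp_listen):
    s.close()
```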
A major initial motivation for both the ARPANET and the Internet was resource sharing – for example, allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, email has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.
There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early “worm” programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.
Proving the Ideas
DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate.
This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.
The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.
Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet, and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks, as sketched below. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
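A minimal sketch of that classful scheme, using made-up example addresses, shows how the leading bits of an address selected the class and hence the network/host split.

```python
# Minimal sketch of classful addressing: the leading bits of a 32-bit address
# determine the class (A, B, or C) and how it splits into network and host.
def classify(addr: int):
    if addr >> 31 == 0b0:                        # class A: 0...  -> 8/24 split
        return "A", addr >> 24, addr & 0xFFFFFF
    if addr >> 30 == 0b10:                       # class B: 10... -> 16/16 split
        return "B", addr >> 16, addr & 0xFFFF
    if addr >> 29 == 0b110:                      # class C: 110...-> 24/8 split
        return "C", addr >> 8, addr & 0xFF
    return "other", None, None                   # classes D/E, beyond this sketch

if __name__ == "__main__":
    for dotted in ("18.0.0.1", "128.32.0.1", "192.168.1.1"):   # made-up examples
        a, b, c, d = (int(x) for x in dotted.split("."))
        addr = (a << 24) | (b << 16) | (c << 8) | d
        print(dotted, classify(addr))
```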
A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.
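A minimal illustration of what the DNS provides is the resolution of a hierarchical name into an address. This sketch uses the host's configured resolver via Python's standard library, so it assumes a working network connection and that the name still resolves.

```python
# Minimal sketch of DNS name resolution via the local stub resolver.
import socket

name = "www.acm.org"
print(name, "->", socket.gethostbyname(name))
```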
The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
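The idea behind CIDR can be illustrated with Python's standard ipaddress module: adjacent prefixes collapse into a single, shorter prefix, shrinking the routing table. The prefixes below are made-up documentation addresses.

```python
# Minimal sketch of CIDR-style aggregation: two adjacent /25 prefixes
# collapse into one /24 routing-table entry.
import ipaddress

routes = [ipaddress.ip_network("198.51.100.0/25"),
          ipaddress.ip_network("198.51.100.128/25")]
print(list(ipaddress.collapse_addresses(routes)))   # -> [198.51.100.0/24]
```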
As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.
One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a “flag-day” style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying “I survived the TCP/IP transition”).
TCP/IP had been adopted as a defense standard three years earlier, in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.
Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad based electronic communications between people.
Transition to Widespread Infrastructure
At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking – especially electronic mail – demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE’s High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T’s free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX’s built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an “electronic mail as card images” paradigm.
With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built – i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM’s SNA.8 It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that “… the connection must be made available to ALL qualified users on campus.”
In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision – that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.
NSF also elected to support DARPA’s existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB’s Internet Engineering and Architecture Task Forces and by NSF’s Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA’s and NSF’s pieces of the Internet.
In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.
- Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported “managed interconnection points” for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and “*IX” facilities that are prominent features of today’s Internet architecture.
- To coordinate this sharing, the Federal Networking Council9 was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.
- This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA’s Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.
- Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.
- On the NSFNET Backbone – the national-scale segment of the NSFNET – NSF enforced an “Acceptable Use Policy” (AUP) which prohibited Backbone usage for purposes “not in support of Research and Education.” The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of “private”, competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard’s Kennedy School of Government on “The Commercialization and Privatization of the Internet” – and on the “com-priv” list on the net itself.
- In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled “Towards a National Research Network”. This report was influential on then Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.
- In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled “Realizing The Information Future: The Internet and Beyond”, was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated and which has had a lasting effect on the way to think about its evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.
- NSF’s privatization policy culminated in April, 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.
The backbone had made the transition from a network built from routers out of the research community (the “Fuzzball” routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.
Such was the weight of the NSFNET program’s ecumenism and funding ($200 million from 1986 to 1995) – and the quality of the protocols themselves – that by 1990, when the ARPANET itself was finally decommissioned,10 TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.
The Role of Documentation
A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.
The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.
In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal fast distribution way to share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death, October 16, 1998.
The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.
Over time, the RFCs have become more focused on protocol standards (the “official” specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the “documents of record” in the Internet engineering and standards community.
The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.
Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed – RFCs were presented by joint authors with common view independent of their locations.
The use of specialized email mailing lists has long been part of the development of protocol specifications, and continues to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.
As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network’s first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.
Formation of the Broad Community
The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into Internet Working Group.
In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies – an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research, an Internet Research Group which was an inclusive group providing an environment for general exchange of information, and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.
In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces.
It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair. After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.
This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.
The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.
The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980s and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This, coupled with a recognized need for community support of the Internet, eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn’s Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.
In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more “peer” relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.
The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT’s Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.
Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.
Commercialization of the Technology
Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked both real information about how the technology was supposed to work and how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.
In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch, in cooperation with the IAB, arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two-way discussion was formed that has lasted for over a decade.
After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone’s products interoperated with all of the other products – even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.
In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.
Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation.
As the network grew larger, it became clear that the ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decisions that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that the SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a “commodity” service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.
History of the Future
On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term “Internet”. “Internet” refers to the global information system that — (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned email and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.
One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams.
The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones) is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications – Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, e.g. broadband residential access and satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.
The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders – stakeholders now with an economic as well as an intellectual investment in the network.
We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.
Timeline
Footnotes
1 Perhaps this is an exaggeration based on the lead author’s residence in Silicon Valley.
2 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English language magazines devoted to the Internet.
3 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, Feb. 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing both this and the abbreviated version.
4 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
5 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET; only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
6 Including amongst others Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker, who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and also was to play a long term role in the ICCB and IAB.
7 This was subsequently published as V. G. Cerf and R. E. Kahn, “A protocol for packet network intercommunication”, IEEE Trans. Comm. Tech., vol. COM-22, no. 5, pp. 627-641, May 1974.
8 The desirability of email interchange, however, led to one of the first “Internet books”: !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.
9 Originally named Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate U.S. research network activities in support of the international coordination provided by the CCIRN.
10 The decommissioning of the ARPANET was commemorated on its 20th anniversary by a UCLA symposium in 1989.
References
P. Baran, “On Distributed Communications Networks”, IEEE Trans. Comm. Systems, March 1964.
V. G. Cerf and R. E. Kahn, “A protocol for packet network intercommunication”, IEEE Trans. Comm. Tech., vol. COM-22, no. 5, pp. 627-641, May 1974.
S. Crocker, RFC 001, Host Software, Apr-07-1969.
R. Kahn, Communications Principles for Operating Systems. Internal BBN memorandum, Jan. 1972.
Proceedings of the IEEE, Special Issue on Packet Communication Networks, Volume 66, No. 11, November 1978. (Guest editor: Robert Kahn; associate guest editors: Keith Uncapher and Harry van Trees)
L. Kleinrock, “Information Flow in Large Communication Nets”, RLE Quarterly Progress Report, July 1961.
L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964.
L. Kleinrock, Queueing Systems: Vol II, Computer Applications, John Wiley and Sons (New York), 1976.
J.C.R. Licklider & W. Clark, “On-Line Man Computer Communication”, August 1962.
L. Roberts & T. Merrill, “Toward a Cooperative Network of Time-Shared Computers”, Fall AFIPS Conf., Oct. 1966.
L. Roberts, “Multiple Computer Networks and Intercomputer Communication”, ACM Gatlinburg Conf., October 1967.
Source: https://www.internetsociety.org/internet/history-internet/brief-history-internet/