Best Answer

If we look back in time, the internet developed from research into the multiplexing of data on hard-wired networks, work pioneered by Paul Baran in the early 1960s. At that time, sending data required a physical line to be laid between two computers so they could communicate. But a line for just two computers was expensive, and inefficient if the line was not used continuously.

Therefore research went into a method known as time-division multiplexing (TDM). This required that a computer put its information into a "packet" of data containing everything needed for it to arrive at its destination.

You can think of it a bit like a letter. When we send a letter we put the destination on the envelope; inside we write our message, sign our name, and add a return address.
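The envelope analogy can be sketched in a few lines. This is purely illustrative: the field names below are hypothetical and do not correspond to any real protocol header.

```python
from dataclasses import dataclass

# Illustrative sketch: a packet carries everything needed for delivery,
# much like an envelope (field names are hypothetical, not a real protocol).
@dataclass
class Packet:
    destination: str  # the address written on the envelope
    source: str       # the return address
    payload: str      # the message inside

pkt = Packet(destination="host-b", source="host-a", payload="hello")
print(pkt.destination)  # the network only needs to read the "envelope"
```

Real packet headers (e.g. IP) carry more fields, but the principle is the same: the routing information travels with the data.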

Each packet, which might belong to any one of several machines, was then sent over the wire at a known interval, i.e. "clocked" onto the line. The server named in the packet responded and the process continued. The advantage was that more than one computer could listen to the line at the same time, and there were no collisions of data. The problem was efficiency: many clocked cycles carried no data, and a lot of time was wasted waiting for your slot to send. Time-division multiplexing solved one problem, but it just didn't make efficient use of the available bandwidth.
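The wasted-slot problem described above can be shown with a toy simulation. This is a hypothetical sketch, not real TDM hardware timing: each station owns one fixed slot per cycle, and a quiet station's slot simply goes empty.

```python
from collections import deque

# Toy TDM sketch (hypothetical): each station gets a fixed slot per cycle,
# whether or not it has anything to send.
stations = {
    "A": deque(["A1", "A2"]),
    "B": deque([]),           # quiet station: its slots are wasted
    "C": deque(["C1"]),
}

line = []  # what actually goes onto the shared wire, slot by slot
for cycle in range(2):
    for name, queue in stations.items():  # each station's clocked slot
        line.append(queue.popleft() if queue else None)  # None = empty slot

print(line)  # -> ['A1', None, 'C1', 'A2', None, None]
```

Half the slots here carry nothing, which is exactly the inefficiency that motivated the next idea.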

Then another idea was considered: what if the sending computer simply tried to send whenever the line was quiet? It seems like a reasonable idea. The computer would listen to the line and, if nothing was being sent, would just "go for it!". This was much more efficient than TDM: quiet stations could sit on the sidelines while "chatty" stations used all of the available bandwidth.

But what if two stations sent at the same time? What happens then?

Well, what happens in the real world if two people try speaking at the same time? They stop talking and either negotiate who will talk first, or each waits a random amount of time until a space is available to talk. This is what computers on a network do too! The technology is called CSMA (carrier-sense multiple access).

Networked computers listen for a clear line, generate a random time period, wait, then send their packet. If no response (or an error) comes back from the destination machine, they add a set amount of time to a newly generated random time, wait the total, then send again. The system keeps "backing off" the connection until the data is acknowledged by the destination machine.
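The backoff idea above can be sketched as follows. This is a hedged illustration of the "set amount plus random amount" scheme the paragraph describes, not the exact timing rules of real CSMA/CD; the function names are hypothetical.

```python
import random

def backoff_delay(attempt, base=1.0):
    """Set amount that grows with each attempt, plus a fresh random amount."""
    return base * attempt + random.uniform(0, base)

def send_with_backoff(try_send, max_attempts=5):
    """Retry until the destination acknowledges, backing off between tries."""
    for attempt in range(1, max_attempts + 1):
        if try_send():  # line was clear and the destination acknowledged
            return attempt
        delay = backoff_delay(attempt)
        # a real network card would wait `delay` here before listening again
    return None  # gave up after max_attempts
```

Because every station computes its own random component, two colliding stations are unlikely to retry at the same instant, which is what breaks the tie.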

Pre-1962, the military wanted a system that, in the event of a nuclear war, would be robust enough to survive losing multiple connections and still get the data through (this was, after all, the Cold War!).

They engaged ARPA, run by Jack Ruina, who in turn created the Information Processing Techniques Office (IPTO) to build a network that could join the mainframes at Cheyenne Mountain, the Pentagon, and SAC headquarters together to share information.

Larry Roberts and Leonard Kleinrock joined the IPTO to develop a system called ARPANET, which first came online in 1969. This in turn progressed into a network called CSNET, sponsored by the National Science Foundation to link universities into ARPANET that weren't already connected. Then NSFNET (the National Science Foundation Network) became the backbone linking the regional supercomputer centres. In 1983 the US military removed their machines from the ARPANET and shifted them to MILNET, which freed up the network to be used as a publicly accessible network.

This resulting network became the basis of what we now call the Internet.

Wiki User

9y ago
Q: What is the concept the internet developed from?