Tuesday, June 28, 2011

on the 'Rethinking the design of the Internet: 2 The end to end arguments vs. the brave new world'

The author of the paper reiterates the design principles that have guided the development of the Internet to the present, known as the end-to-end arguments. In the context of the Internet, the end-to-end arguments follow the notion of keeping the functions of the lower layers of the Internet infrastructure as simple as possible: any application-specific features should be pulled out of the core infrastructure and implemented at the end systems instead. The paper argues that these design principles have been the key driving factor behind the advances and innovations the Internet has experienced since its early days. This position paper was written in the face of increasing interest from third parties, i.e. private entities and governments, demanding the inclusion of new features that would provide “better” mechanisms for security, privacy, accountability, etc. The paper cites a situation wherein implementing an “eavesdropping” mechanism at the lower levels of the infrastructure would still prove useless, given the fact that the end points of a communication are free to apply any available mechanism, e.g. encryption, to the messages being exchanged. Instead of providing the benefits one expected from it, it would only add complexity to the core network, which in turn would increase the cost of deploying new applications on the Internet.
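The eavesdropping point can be made concrete with a minimal sketch (my own illustration, not from the paper): if the two end points encrypt with a shared secret, a tap placed inside the network core observes only ciphertext. Here a one-time pad stands in for whatever mechanism the end points choose; all names are illustrative.

```python
import secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each data byte with a pad byte.
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, pad))

# The end systems share a secret pad; the network core never sees it.
message = b"meet at noon"
pad = secrets.token_bytes(len(message))

# What an eavesdropper inside the network observes: ciphertext only.
ciphertext = xor_pad(message, pad)

# Only the receiving end point, holding the pad, recovers the message.
recovered = xor_pad(ciphertext, pad)
```

The core network carried the bytes without needing to understand them, which is exactly why adding an eavesdropping feature there buys little.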

on the 'The Design Philosophy of the DARPA Internet Protocols'

The paper enumerated and described the various goals behind the DARPA Internet Project, which gave birth to what we now know as the Internet, and at the same time discussed their relation to the mechanisms chosen to achieve those goals. The Internet started out as a military-funded research project under the Defense Advanced Research Projects Agency (DARPA) of the DoD of the USA.

The main goal of its inception was to provide a way for multiplexed internetwork communication among existing heterogeneous and disparate network infrastructures. Packet switching was chosen as the technique for multiplexing since most of the existing networks employed packet switches.

Several second-level goals were considered in its design, which proved to have great effects on what the Internet has become today. Survivability, which relates to service availability and continuity, tops the second-level goals: any interruption in some part of the network should not disrupt the usability of the whole infrastructure, and interruptions at the lower layers should be hidden or abstracted from the application level. Support for multiple services came second; the designers wanted to bring as many services to the Internet as possible. If we remember, reliability was the critical design requirement of TCP. For some services, e.g. real-time applications, the reliability of TCP comes at a cost: performance degradation. The decision was therefore made to formally define the boundary (layering) between TCP and IP, and another transport layer protocol was created, the User Datagram Protocol. UDP provides applications a low-level interface for their internetwork communication needs, resulting in better control, flexibility, and performance. The designers also wanted the Internet to be able to accommodate various types of networks. Other goals, which sat at the bottom of the Internet's priority list, were: (4) a mechanism for distributed management of its resources should be provided, (5) it must be cost effective, (6) host attachment must be easy, and (7) resources in the Internet must be accountable. Interestingly, (7) has not been fully realized even now.
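The “low-level interface” that UDP gives applications can be seen in a short sketch (my own, assuming Python's standard `socket` module and a loopback exchange): no connection setup, no acknowledgments, no retransmission, just a datagram handed to the network.

```python
import socket

# A receiver bound to an ephemeral loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# A sender: one sendto() call, no handshake or reliability machinery.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"sample reading", addr)

# The datagram arrives as-is (loopback delivery is dependable in practice;
# on a real network the application would have to tolerate loss itself).
data, _ = receiver.recvfrom(2048)

sender.close()
receiver.close()
```

Everything TCP would have done for us (ordering, retransmission, flow control) is simply absent here, which is precisely the control and performance trade-off the designers had in mind.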

Surprisingly, there was no explicit mention of security in the original design of the Internet. Survivability was there, but I believe its notion relates more to the physical aspect of the infrastructure. The inclusion of the datagram as a building-block element is, I think, one of the great realizations of the designers of the Internet. It gives developers better control and flexibility in meeting the networking needs of the applications they develop. Fate-sharing is another design decision that has proved to be critical in the development of the Internet. The notion of maintaining the state information of a communication at the end points only enabled hosts to be less dependent on the performance of intermediary points in the networks. This provides service continuity in cases where some disruption happens in a subset of the network.

on the 'A Protocol for Packet Network Intercommunication'

It was 37 years ago that Turing awardees Vinton Cerf and Robert E. Kahn published the seminal paper describing TCP (with implicit mention of IP), which eventually led to the development of the Internet.

The paper proposed a protocol which would allow internetwork communication between processes on hosts residing in different packet switching networks. The paper raised the issue of how such a protocol would handle communication between existing and planned packet switching network infrastructures, which would likely differ from one another. It was here that the idea came in of having a standard protocol, as simple and reliable as possible, for interprocess and internetwork communication. Since reliability was one of the top concerns of the protocol, a mechanism for the detection of ‘lost’ packets and their retransmission was also included. A sender TCP first waits for the receiver TCP to acknowledge the bytes it previously sent; if it does not receive such an acknowledgment within a defined timeout, it retransmits the unacknowledged bytes. State information about the connection between two communicating processes is kept only at the two ends of the communication link, making the tasks of intermediary points as simple as possible, i.e. they only handle forwarding of packets and fragmentation when needed. A provision for flow control, based on a window strategy, was also included. With this mechanism, the receiver TCP is able to advertise to the sender TCP the number of bytes of data (the window) it can handle, hence controlling the amount of data that flows between sender and receiver. One should really appreciate the completeness of the mechanisms and features of the protocol suggested in this paper, considering that most of them were given entirely in their pure theoretical sense. Amazing!
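The retransmit-on-timeout scheme can be sketched as a toy simulation (my own construction; the class and function names are illustrative, not from the paper). A lossy channel drops the first copy of a segment; the sender, seeing no acknowledgment, sends it again.

```python
class LossyChannel:
    """Toy channel: drops the first transmission, delivers every retry."""
    def __init__(self):
        self.attempts = 0

    def deliver(self, segment):
        self.attempts += 1
        # None models a segment lost in the network (no ACK will come back).
        return None if self.attempts == 1 else segment


def send_reliably(data: bytes, channel: LossyChannel, max_tries: int = 5) -> bytes:
    """Retransmit unacknowledged data until it gets through (or give up)."""
    for _ in range(max_tries):
        segment = channel.deliver(data)
        if segment is None:
            continue  # timeout expired with no ACK: retransmit
        return segment  # ACK received: stop retransmitting
    return b""  # all attempts lost


channel = LossyChannel()
result = send_reliably(b"hello", channel)
```

Real TCP adds sequence numbers so the receiver can discard duplicate copies of a retransmitted segment, and the window mechanism caps how many unacknowledged bytes may be outstanding at once; this sketch shows only the timeout-and-resend core.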