Tuesday, July 19, 2011

on IP: IPv4 and IPv6

IP is the main protocol at the network layer of the Internet. Essentially, all data sent by the upper layers, i.e. the transport and application layers, is sent over the Internet as IP datagrams. The IP datagram is the building block of the internetwork communication service that IP provides. IP is meant to be a best-effort protocol for sending data over a network; hence, it is inherently unreliable. An advantage of this design decision is that implementing IP in network interfaces and routers is relatively simple. Moreover, IP is connectionless, meaning that it maintains no state information about the datagrams passing through it; each datagram is handled independently of the others. Datagrams of the same message may be delivered to the destination over many different paths and may arrive out of order [1]. Hence, a protocol like TCP is needed on top of IP to provide the reliable service required by most Internet applications.
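To make the datagram concept concrete, here is a minimal sketch that unpacks the fixed 20-byte IPv4 header (the field layout follows the standard IPv4 header; the sample header bytes below are made up for illustration, with the checksum left as zero):

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte IPv4 header (options not handled)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,            # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built header: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# source 192.0.2.1, destination 198.51.100.7.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                  bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
info = parse_ipv4_header(hdr)
print(info["version"], info["protocol"], info["src"], info["dst"])
# → 4 6 192.0.2.1 198.51.100.7
```

Note how the header itself carries everything a router needs to forward the datagram independently, which is exactly what makes IP connectionless.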

An IP address uniquely identifies each device, i.e. host or router, connected to the Internet. IP uses a rather simple and intuitive mechanism for routing datagrams from a source to a destination: routing is done on a hop-by-hop basis. Hosts and routers maintain a routing table, which they use to forward a datagram to the next-hop router or network interface indicated in the table entry associated with the datagram's destination IP address. Using ICMP, a router can build its routing table from advertisement and solicitation messages exchanged with other routers [1].
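The table lookup described above can be sketched with Python's standard `ipaddress` module. The routes and next-hop addresses below are made up for illustration; the lookup rule is longest-prefix match, with `0.0.0.0/0` acting as the default route:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop).
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.1"),    # default gateway
    (ipaddress.ip_network("10.0.0.0/8"),  "10.0.0.254"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.254"),
]

def next_hop(dst: str) -> str:
    """Return the next hop for dst: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # → 10.1.0.254  (matches 10.1.0.0/16)
print(next_hop("10.9.9.9"))   # → 10.0.0.254  (matches 10.0.0.0/8)
print(next_hop("8.8.8.8"))    # → 192.0.2.1   (only the default route matches)
```

Each hop repeats this lookup on its own table, which is what "hop-by-hop" routing means in practice.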

The currently widely deployed version of IP, IPv4, uses 32-bit addresses, amounting to approximately 4.3 billion addresses. With the rapid growth in the deployment of applications, services, hosts, etc. on the Internet, exhaustion of the available IPv4 addresses seems inevitable. In answer to this likely possibility, the Internet Engineering Task Force developed IPv6, which offers a much larger address space, to succeed IPv4 [3]. IPv6 uses a 128-bit addressing scheme, allowing about 2^128 unique addresses. Aside from the much larger address space and changes to the datagram format, other changes were incorporated into IPv6, including, among others, ICMPv6 for automatic host configuration upon connection to an IPv6 network and network-level security through mandatory IPsec support [2]. Initial deployments have been carried out in countries such as the USA, Canada, Japan, and China, with Japan enjoying full government support and China showcasing the technology at the 2008 Beijing Summer Olympics.
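The difference in address-space size is easy to check directly (the sample addresses below are drawn from the documentation-reserved ranges, purely for illustration):

```python
import ipaddress

# Address-space sizes: 32-bit IPv4 vs 128-bit IPv6.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4: {ipv4_total:,} addresses")    # → IPv4: 4,294,967,296 addresses
print(f"IPv6: {ipv6_total:.3e} addresses")  # roughly 3.4e38

# The standard ipaddress module handles both versions uniformly.
a4 = ipaddress.ip_address("203.0.113.9")
a6 = ipaddress.ip_address("2001:db8::9")
print(a4.version, a6.version)               # → 4 6
```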

References:

[1] Stevens, W. R. (1994). TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley.

[2] Das, K. IPv6 – The Next Generation Internet. IPv6.com. http://ipv6.com/articles/general/ipv6-the-next-generation-internet.htm

[3] IPv6. Wikipedia. http://en.wikipedia.org/wiki/IPv6

Tuesday, July 5, 2011

on "Congestion Avoidance and Control [2]"

The paper describes a congestion avoidance/control algorithm with the following features:

1. A connection starts (or restarts) slowly: the packet transmission rate begins low and gradually increases until the connection reaches its state of 'equilibrium'. This prevents the connection from sending big bursts of packets, which would make it prone to failure through constant packet retransmissions.

2. It has a 'better' round-trip time variance estimate, which allows it to compute a more realistic retransmit timeout interval, rto, for succeeding packets. Because the estimated RTT variance used in the rto computation adapts to the medium of communication, e.g. satellite links, performance improves.

3. When congestion actually happens, it employs an exponential retransmit timer backoff, which allows the system to eventually settle back into its normal state.

4. For congestion avoidance, it uses an increase/decrease algorithm with an additive-increase and a multiplicative-decrease component. Unlike [1], which uses a binary feedback mechanism (a bit of information carried in the packet header) to determine the state of the system, their algorithm depends on assumptions about the inherent properties of lost packets: packets are lost essentially because the network is congested. So if a connection experiences lost packets, the network is congested and the connection should decrease its load. On the other hand, if the connection continuously receives ACKs, it can try increasing its load. To achieve fairness, a gateway would just have to drop packets coming from misbehaving (abusive) hosts, which in turn would 'trick' those hosts into believing that the network is congested, making them decrease their load. Just curious: did this solution work? I still prefer [1] in terms of the feedback mechanism.
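Features 2 and 3 can be sketched together: a smoothed-mean plus mean-deviation RTT estimator sets the base retransmit timeout, and exponential backoff doubles it on each successive loss. This is a rough sketch in the spirit of the paper's estimator; the gains of 1/8 and 1/4 are the commonly cited values, while the initial values and the timeout cap are illustrative assumptions:

```python
class RetransmitTimer:
    """Smoothed RTT plus mean deviation; gains 1/8 and 1/4 assumed."""

    def __init__(self, first_rtt: float):
        self.srtt = first_rtt          # smoothed round-trip time
        self.rttvar = first_rtt / 2    # smoothed mean deviation (assumed init)
        self.rto = self.srtt + 4 * self.rttvar

    def on_rtt_sample(self, measured: float) -> float:
        """Fold a new RTT measurement into the estimate."""
        err = measured - self.srtt
        self.srtt += err / 8
        self.rttvar += (abs(err) - self.rttvar) / 4
        self.rto = self.srtt + 4 * self.rttvar
        return self.rto

    def on_timeout(self, cap: float = 64_000.0) -> float:
        """Exponential backoff: double the timeout on each successive loss."""
        self.rto = min(self.rto * 2, cap)
        return self.rto

timer = RetransmitTimer(first_rtt=100.0)    # milliseconds
for sample in (110.0, 90.0, 300.0):         # a spike inflates the deviation term
    timer.on_rtt_sample(sample)
print(round(timer.rto, 1))                  # rto now sits well above srtt
print(round(timer.on_timeout(), 1))         # doubled after a timeout
```

The variance term is what adapts the timeout to the medium: a noisy satellite link yields a large deviation and hence a generous rto, while a stable LAN yields a tight one.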
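Features 1 and 4 govern the congestion window itself: exponential growth during slow start, then additive increase per round and multiplicative decrease on loss. A minimal sketch, with the window measured in segments and an assumed slow-start threshold:

```python
def next_cwnd(cwnd: float, ssthresh: float, loss: bool) -> float:
    """One round of window evolution: multiplicative decrease on loss,
    exponential growth below ssthresh (slow start), additive increase above."""
    if loss:
        return max(cwnd / 2, 1.0)    # multiplicative decrease
    if cwnd < ssthresh:
        return cwnd * 2              # slow start: double each round
    return cwnd + 1                  # congestion avoidance: add one segment

cwnd, ssthresh = 1.0, 16.0           # ssthresh of 16 is an assumed value
trace = []
for loss in (False, False, False, False, False, False, True, False):
    cwnd = next_cwnd(cwnd, ssthresh, loss)
    trace.append(cwnd)
print(trace)   # → [2.0, 4.0, 8.0, 16.0, 17.0, 18.0, 9.0, 10.0]
```

The actual algorithm also cuts the threshold to half the window on loss; that detail is omitted here for brevity.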

Ref:

[1] D.-M. Chiu and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks", Computer Networks and ISDN Systems, Vol. 17, 1989, pp. 1-14.

[2] V. Jacobson, "Congestion Avoidance and Control", SIGCOMM '88, Sept. 1988, pp. 314-329.

on the "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks[1]"

The paper presented a mathematical analysis of increase/decrease algorithms for congestion avoidance in computer networks. Congestion avoidance algorithms allow a network to operate at an optimal level of low delay and high throughput. The authors evaluated the set of increase/decrease algorithms based on the following criteria:

1. The algorithm should allow the communication system to operate at a level of optimal resource utilization (high efficiency).
2. The algorithm should not only ensure efficient utilization of shared network resources, but also ensure fairness in the allocation of those resources among the users of the system.
3. The algorithm should be distributed, keeping the tasks of the system and its users as simple as possible.
4. The algorithm, starting from an arbitrary initial state, should achieve goals 1 and 2 as fast as possible.


They focused their analysis on the set of increase/decrease algorithms that use linear control functions. A control function is what a user of the system applies to increase or decrease its load.

Their analysis used a graphical vector representation of the different control combinations to identify the configurations of feasible linear controls that would allow the system to reach the goals of optimal resource utilization and fair resource allocation as fast as possible. Using this approach, they found that a simple linear control with an additive-increase and a multiplicative-decrease component is enough for the system to achieve high efficiency and fairness.
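The convergence result can be checked with a toy simulation. Two users start with very unequal loads and apply the same additive-increase/multiplicative-decrease rule; fairness is measured with Jain's index. The link capacity, step sizes, and starting loads below are all illustrative assumptions, not values from the paper:

```python
def jain_fairness(loads) -> float:
    """Jain's fairness index: 1.0 means a perfectly equal allocation."""
    return sum(loads) ** 2 / (len(loads) * sum(x * x for x in loads))

# Two users share a link that can carry 100 units of load.
loads, capacity = [10.0, 80.0], 100.0
start_index = jain_fairness(loads)

for _ in range(200):
    if sum(loads) > capacity:               # congested: multiplicative decrease
        loads = [x * 0.5 for x in loads]
    else:                                   # underloaded: additive increase
        loads = [x + 1.0 for x in loads]

print(round(start_index, 3))                # well below 1: unfair start
print(round(jain_fairness(loads), 3))       # close to 1: AIMD converged
```

The intuition matches the paper's vector argument: additive increase moves both users up by the same amount (toward the fairness line), while multiplicative decrease preserves their ratio, so each congestion cycle shrinks the gap between them.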


Ref:

[1] D.-M. Chiu and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks", Computer Networks and ISDN Systems, Vol. 17, 1989, pp. 1-14.