Tuesday, 20 January 2015

Bufferbloat solutions

Solutions to the bufferbloat problem can be divided into two categories:
1. Active Queue Management (AQM)
2. End-to-End Congestion Control (E2E)

AQM techniques such as RED (Random Early Detection) or DRR (Deficit Round-Robin) selectively schedule, mark, or drop packets in order to keep queue sizes small.

Random early detection, also known as random early discard or random early drop, monitors the average queue size and drops (or marks, when used in conjunction with ECN) packets based on statistical probabilities. If the buffer is almost empty, all incoming packets are accepted. As the queue grows, the probability of dropping an incoming packet grows too; when the buffer is full, the probability reaches 1 and all incoming packets are dropped. In the conventional tail-drop algorithm, a router or other network component buffers as many packets as it can and simply drops the ones it cannot buffer. If buffers are constantly full, the network is congested, and tail drop distributes buffer space unfairly among traffic flows. RED addresses these issues.
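To make the drop behaviour concrete, here is a minimal Python sketch of the RED drop decision. The threshold values and the EWMA weight are illustrative assumptions rather than values from any particular router, and the sketch omits refinements of full RED such as spreading drops by counting packets accepted since the last drop.

```python
import random

# Illustrative RED parameters (assumed values, not from a real deployment).
MIN_TH = 5      # below this average queue size, never drop
MAX_TH = 15     # at or above this average queue size, always drop
MAX_P  = 0.1    # drop probability reached at MAX_TH
WEIGHT = 0.002  # EWMA weight for the average queue size

avg_queue = 0.0

def red_should_drop(current_queue_len):
    """Return True if the incoming packet should be dropped (or ECN-marked)."""
    global avg_queue
    # RED works on an exponentially weighted moving average of the queue length,
    # not on the instantaneous occupancy.
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len

    if avg_queue < MIN_TH:
        return False                      # buffer nearly empty: accept everything
    if avg_queue >= MAX_TH:
        return True                       # buffer (almost) full: drop everything
    # In between, drop probability grows linearly with the average queue size.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```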
Despite the existence of multiple AQM techniques, adoption has so far been slow. E2E solutions instead generally employ a delay-based controller: TCP Vegas, TCP-LP, and LEDBAT. TCP Vegas was one of the first protocols known to have a smaller sending rate than standard TCP when both protocols share a bottleneck. However, while TCP Vegas aims at being more efficient than standard TCP, NICE, TCP-LP, and LEDBAT all aim at lower priority with respect to TCP. The main difference between NICE and Vegas on the one hand and TCP-LP and LEDBAT on the other is that the former two react to the round-trip time (RTT), while the latter two react to the one-way delay (OWD). An OWD measurement is preferred to an RTT measurement because a congested reverse path may otherwise lead to an erroneous assessment of the congestion state of the forward path. Our research topic is LEDBAT, an E2E solution for addressing the urgent issue of bufferbloat delays; a simplified sketch of how such a controller works is given below.
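The sketch below follows the spirit of the LEDBAT window update described in RFC 6817: queueing delay is estimated as the current one-way delay minus the smallest one-way delay observed so far (the base delay), and the congestion window grows or shrinks depending on whether that estimate is below or above a fixed target. The per-sample update and the starting values are simplified assumptions; the RFC's full update works per acknowledged byte and maintains a rolling history of base delays.

```python
TARGET = 0.100   # target queueing delay in seconds (RFC 6817 uses 100 ms)
GAIN   = 1.0     # window gain (assumed value for this sketch)

base_delay = float("inf")
cwnd = 2.0       # congestion window in packets (illustrative starting value)

def on_one_way_delay_sample(owd):
    """Update cwnd from a new one-way-delay measurement (in seconds)."""
    global base_delay, cwnd
    base_delay = min(base_delay, owd)      # base delay = smallest OWD seen so far
    queueing_delay = owd - base_delay      # extra delay added by queues on the path
    off_target = (TARGET - queueing_delay) / TARGET
    # Grow while queueing delay is below the target, shrink once it exceeds it;
    # standard TCP, by contrast, keeps growing until a loss occurs.
    cwnd = max(1.0, cwnd + GAIN * off_target / cwnd)
```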

Saturday, 17 January 2015

BUFFERBLOAT

Problem of Bufferbloat

Bufferbloat is the existence of excessively large and frequently full buffers inside the network. Buffers are needed to provide space for queuing packets waiting for transmission, so as to minimize data loss. However, the availability of cheap memory and a misguided desire to avoid packet loss have led to larger and larger buffers being deployed in the hosts, routers, and switches that make up the Internet. This is exactly a recipe for bufferbloat and for significant delays in the network, unacceptable for delay-sensitive applications.



What a bufferbloat jam might look like if TCP/IP packets were cars


The figure on the left illustrates the relation between throughput and delay with respect to the size of the buffers in a network. System throughput is the fastest rate at which the count of packets delivered to the destination by the network equals the number of packets sent into the network. As the number of packets in flight increases, throughput increases until packets are being sent and received at the bottleneck rate. Beyond this point, more packets in flight do not increase the received rate; but if the network has large buffers along the path, they fill with these extra packets and increase delay, as the back-of-the-envelope example below illustrates.
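A quick worked example, with hypothetical numbers, shows how quickly an oversized buffer translates into delay once it fills:

```python
# Hypothetical numbers: a 10 Mbit/s bottleneck behind a 1 MB buffer.
bottleneck_rate = 10e6 / 8       # bottleneck capacity in bytes per second
buffer_size     = 1_000_000      # buffer in front of the bottleneck, in bytes

# Once the buffer is full, every new packet waits behind the whole backlog.
queueing_delay = buffer_size / bottleneck_rate
print(f"added queueing delay: {queueing_delay:.2f} s")   # ~0.80 s

# Throughput is still capped at the 10 Mbit/s bottleneck rate;
# the extra packets held in the buffer only add delay.
```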