Nagle's algorithm

Nagle's algorithm is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It was defined by John Nagle while working for Ford Aerospace and published in 1984 as a Request for Comments (RFC) titled Congestion Control in IP/TCP Internetworks (RFC 896).

The RFC describes what he called the "small-packet problem", where an application repeatedly emits data in small chunks, frequently only 1 byte in size. Since TCP packets have a 40-byte header (20 bytes for TCP, 20 bytes for IPv4), this results in a 41-byte packet for 1 byte of useful information, a huge overhead. This situation often occurs in Telnet sessions, where most keypresses generate a single byte of data that is transmitted immediately. Worse, over slow links, many such packets can be in transit at the same time, potentially leading to congestion collapse.

Nagle's algorithm works by combining a number of small outgoing messages and sending them all at once. Specifically, as long as there is a sent packet for which the sender has received no acknowledgment, the sender should keep buffering its output until it has a full packet's worth of output, thus allowing output to be sent all at once.

Algorithm

The RFC defines the algorithm as

inhibit the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.

Where MSS is the maximum segment size, the largest segment that can be sent on this connection, and the window size is the currently acceptable window of unacknowledged data, this can be written in pseudocode as

if there is new data to send then
    if the window size ≥ MSS and available data is ≥ MSS then
        send complete MSS segment now
    else
        if there is unconfirmed data still in the pipe then
            enqueue data in the buffer until an acknowledgment is received
        else
            send data immediately
        end if
    end if
end if
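
For illustration, the same decision can be sketched in C. All names here are hypothetical: a real TCP stack keeps this state in the kernel's per-connection control block rather than in an application-visible structure.

#include <stddef.h>

/* Hypothetical per-connection state; real stacks keep this inside
   the kernel's TCP control block. */
struct tcp_conn {
    size_t mss;       /* maximum segment size for this connection */
    size_t window;    /* currently usable send window */
    size_t unacked;   /* bytes sent but not yet acknowledged */
    size_t buffered;  /* bytes queued by the application */
};

/* Returns how many bytes may be sent now; 0 means keep buffering
   until an ACK arrives or a full segment's worth accumulates. */
static size_t nagle_send_now(const struct tcp_conn *c)
{
    if (c->buffered == 0)
        return 0;                /* nothing to send */
    if (c->window >= c->mss && c->buffered >= c->mss)
        return c->mss;           /* send a complete MSS segment now */
    if (c->unacked > 0)
        return 0;                /* data in flight: hold the partial segment */
    return c->buffered;          /* pipe is idle: send the small segment immediately */
}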

This algorithm interacts badly with TCP delayed acknowledgments (delayed ACK), a feature introduced into TCP at roughly the same time in the early 1980s, but by a different group. With both algorithms enabled, applications that do two successive writes to a TCP connection, followed by a read that will not be fulfilled until after the data from the second write has reached the destination, experience a constant delay of up to 500 milliseconds, the "ACK delay". For this reason, TCP implementations usually provide applications with an interface to disable Nagle's algorithm, typically called the TCP_NODELAY option. On Microsoft Windows, the TcpNoDelay registry setting controls the default.
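
On POSIX systems, the option is set per socket with setsockopt(). A minimal example, with error handling reduced to a single check:

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <stdio.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-created TCP socket. */
int disable_nagle(int sockfd)
{
    int flag = 1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                   &flag, sizeof(flag)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}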

A solution recommended by Nagle, which prevents the algorithm from sending premature packets, is to buffer up application writes and then flush the buffer:[1]

The user-level solution is to avoid write–write–read sequences on sockets. Write–read–write–read is fine. Write–write–write is fine. But write–write–read is a killer. So, if you can, buffer up your little writes to TCP and send them all at once. Using the standard UNIX I/O package and flushing write before each read usually works.
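
One way to follow this advice on POSIX systems is to hand the kernel all the pieces of a message in a single call with writev(), so Nagle's algorithm never sees a lone small write followed by a read. The header/body split below is a made-up example, and a production version would also loop on partial writes:

#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>   /* writev */

/* Send a two-part message as a single write on the socket. */
ssize_t send_request(int sockfd, const char *header, const char *body)
{
    struct iovec iov[2];
    iov[0].iov_base = (void *)header;
    iov[0].iov_len  = strlen(header);
    iov[1].iov_base = (void *)body;
    iov[1].iov_len  = strlen(body);

    /* Both pieces leave in one call, avoiding a write-write-read. */
    return writev(sockfd, iov, 2);
}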

Nagle considers delayed ACKs a "bad idea", since the application layer does not usually respond within the time window.[2] On the protocol level, he recommends disabling delayed ACK (for example, with TCP_QUICKACK on Linux) instead of his algorithm, as "quick" ACKs do not incur as much overhead as many small packets do.[3] On Microsoft Windows, setting the undocumented TcpAckFrequency registry value to 1 has the same effect, disabling delayed ACKs entirely.[4]
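
On Linux, TCP_QUICKACK is likewise set with setsockopt(). Note that the kernel may clear the flag again after acting on it, so applications that rely on it typically re-apply it around each receive; this sketch shows only the call itself:

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_QUICKACK (Linux-specific) */
#include <sys/socket.h>

/* Request immediate (non-delayed) ACKs on a TCP socket. The flag is
   not permanent and may need to be re-applied after reads. */
int enable_quickack(int sockfd)
{
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK,
                      &flag, sizeof(flag));
}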

Negative effect on larger writes

The algorithm applies to data of any size. If the data in a single write spans 2n packets, the last, partial packet will be withheld while the sender waits for the ACK of the previous packet, an ACK the receiver's delayed-ACK logic is itself holding back.[5] In any request-response application protocol where request data can be larger than a packet, this can artificially impose a few hundred milliseconds of latency between the requester and the responder, even if the requester has properly buffered the request data. In this case, Nagle's algorithm should be disabled by the requester. If the response data can be larger than a packet, the responder should also disable Nagle's algorithm so that the requester can promptly receive the whole response.

In general, since Nagle's algorithm is only a defense against careless applications, it will not benefit a carefully written application that takes proper care of buffering; for such an application, it has either no effect or a negative effect.

Interactions with real-time systems

Applications that expect real-time responses and low latency can interact poorly with Nagle's algorithm. Applications such as networked multiplayer video games, or mouse movement in a remotely controlled operating system, expect actions to be sent immediately, whereas the algorithm purposefully delays transmission, increasing bandwidth efficiency at the expense of latency. For this reason, applications with low-bandwidth, time-sensitive transmissions typically use TCP_NODELAY to bypass the Nagle delay.[6]

Another option is to use UDP instead.
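
With UDP, each send becomes its own datagram, so there is no Nagle-style coalescing to disable (and also no retransmission or ordering guarantees). A minimal sketch, with a placeholder destination address and port:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one small update as a single UDP datagram. */
int send_update(const char *msg)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9000);                   /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* example address */

    ssize_t n = sendto(fd, msg, strlen(msg), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return n < 0 ? -1 : 0;
}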

Operating systems implementation

Most modern operating systems implement Nagle's algorithm. In AIX,[7] Linux, and Windows,[8] it is enabled by default and can be disabled on a per-socket basis using the TCP_NODELAY option.

References
