Observations on Buffering

None of the following is particularly novel, but the reminder has been useful:

The previous three points suggest that traffic buffers should be measured in seconds, not in bytes, and managed accordingly. Less obviously, buffer management needs to be considerably more sophisticated than the usual "grow buffer when full, up to some predefined maximum size."
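
The "seconds, not bytes" framing can be sketched as a buffer that caps queueing delay rather than item count. Everything here is illustrative: the class name, the fixed `drain_rate` (in practice you'd measure it), and the `offer`/`take` API are assumptions, not a real library.

```python
from collections import deque

class LatencyBoundedBuffer:
    """A buffer bounded by time, not bytes (hypothetical sketch).

    An item is accepted only if, at the assumed drain rate, the
    backlog already queued would clear within max_latency seconds.
    """

    def __init__(self, max_latency, drain_rate):
        self.max_latency = max_latency  # seconds of queueing we tolerate
        self.drain_rate = drain_rate    # bytes/second the consumer sustains
        self.queued_bytes = 0
        self.items = deque()

    def estimated_latency(self):
        # Time for the current backlog to drain at the assumed rate.
        return self.queued_bytes / self.drain_rate

    def offer(self, item):
        # Reject (apply back-pressure) rather than grow without bound.
        if self.estimated_latency() >= self.max_latency:
            return False
        self.items.append(item)
        self.queued_bytes += len(item)
        return True

    def take(self):
        item = self.items.popleft()
        self.queued_bytes -= len(item)
        return item
```

The point of the sketch is that the limit is expressed in seconds of latency; the byte count is only an intermediate quantity used to estimate it.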

Point one also implies a rule that I see honoured more in the breach than in the observance: you can't make a full buffer less full by making it bigger. Size is not a factor in buffer fullness, only in buffer latency, so growing the buffer in response to capacity pressure is worse than useless.

There are only three ways to make a full buffer less full:

  1. Increase the rate at which data exits the buffer.

  2. Slow the rate at which data enters the buffer.

  3. Evict some data from the buffer.
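
Python's standard library happens to contain miniature versions of options two and three (option one is a property of the consumer, not the buffer):

```python
import queue
from collections import deque

# Option 2 in miniature: a bounded queue whose put() blocks (or, with
# put_nowait, fails) when full, pushing back on the producer.
q = queue.Queue(maxsize=4)
for i in range(4):
    q.put_nowait(i)
# q.put_nowait(4) would now raise queue.Full — the caller must slow down.

# Option 3 in miniature: a deque with maxlen silently evicts the oldest
# entry to make room for the newest.
ring = deque(maxlen=4)
for i in range(6):
    ring.append(i)
# ring now holds only the four newest items: 2, 3, 4, 5
```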

In practice, most full buffers are upstream of some process that's already going as fast as it can, either because of other design limits or because of physics. A buffer ahead of disk writing can't drain faster than the disk can accept data, for example. That leaves options two and three.

Slowing the rate of arrival usually implies some variety of back-pressure on the source of the data, to allow upstream processes to match rates with downstream processes. Over-large buffers delay this process by hiding back-pressure, and buffer growth will make this problem worse. Often, back-pressure can happen automatically: failing to read from a socket, for example, will cause the underlying TCP stack to apply back-pressure to the peer writing to the socket by delaying TCP-level message acknowledgement. Too often, I've seen code attempt to suppress these natural forms of back-pressure without replacing them with anything, leading to systems that fail by surprise when some other resource – usually memory – runs out.
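
The rate-matching effect can be seen in miniature with a bounded in-process queue, standing in here for the TCP case: a blocking `put()` stalls the producer until the consumer makes room, so the two automatically converge on the consumer's rate.

```python
import queue
import threading
import time

# A bounded queue propagates back-pressure automatically: put() blocks
# while the buffer is full, so the producer runs at the consumer's pace.
buf = queue.Queue(maxsize=2)
produced = []

def producer():
    for i in range(5):
        buf.put(i)           # blocks whenever the buffer is full
        produced.append(i)

t = threading.Thread(target=producer)
t.start()
time.sleep(0.1)
# Only maxsize items fit before the producer stalls.
stalled_at = len(produced)

# Draining the buffer releases the producer again.
drained = [buf.get() for _ in range(5)]
t.join()
```

Deleting the `maxsize` bound is exactly the "suppress back-pressure without replacing it" failure mode above: the producer never stalls, and the queue grows until memory runs out.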

Eviction relies on the surrounding environment, and must be part of the protocol design. Surprisingly, most modern application protocols get very unhappy when you throw their data away: the network age has not, sadly, brought about protocols and formats particularly well-designed for distribution.
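
One eviction policy a protocol can be designed to tolerate is drop-oldest with an explicit loss count, so the consumer can at least detect the gap. The `EvictingBuffer` below and its `push`/`pop` API are a hypothetical sketch, not a real library:

```python
from collections import deque

class EvictingBuffer:
    """Drop-oldest eviction that the consumer can observe (sketch).

    Each pop() reports how many items were discarded since the last
    delivery — eviction only works if the protocol can detect gaps.
    """

    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.dropped = 0

    def push(self, item):
        if len(self.items) == self.capacity:
            self.items.popleft()   # evict the oldest entry
            self.dropped += 1
        self.items.append(item)

    def pop(self):
        # Deliver the oldest surviving item plus the eviction count.
        dropped, self.dropped = self.dropped, 0
        return self.items.popleft(), dropped
```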

If neither back-pressure nor eviction are available, the remaining option is to fail: either to start dropping data unpredictably, or to cease processing data entirely as a result of some resource or another running out, or to induce so much latency that the data is useless by the time it arrives.

Some uncategorized thoughts: