A quick question about TCP/IP
I was reading about TCP, congestion, etc. because of some presentation slides (which I wish I had the notes for, because I'd like the explanations that go with some of them).

And I realised that there was an inefficiency I was seeing. And the reason may be historical, or (entirely likely) something I'm just overlooking.

Why does _every_ packet require an acknowledgement packet? Chances are that most packets are travelling as part of a set of packets - probably the maximum possible for the window size. So rather than taking up bandwidth sending back a load of them, why not delay a trivial amount of time after the first packet arrives and then acknowledge the highest number for which you have all of the previous packets?

You wouldn't want to delay long - but then you shouldn't need to - you'd generally expect that the next n packets would be arriving immediately behind each other, so acknowledging only once every n milliseconds should work pretty effectively.

So if packets 2-6 arrive, send an ACK saying "*6" (or some equivalent signal), rather than five separate acks? I know ACK packets are tiny, but it still seems wasteful.
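The scheme being asked about can be sketched in a few lines: the receiver tracks the highest packet number N such that everything up to N has arrived, and one ACK of N covers the lot. (This is a toy model with illustrative packet numbers; real TCP acknowledges byte offsets, not packet counts.)

```python
# Toy model of a cumulative acknowledgement, as suggested above.
# Illustrative only: packet numbers stand in for TCP's byte offsets.

def cumulative_ack(received: set[int]) -> int:
    """Highest N such that every packet 1..N has arrived (0 if packet 1 is missing)."""
    n = 0
    while (n + 1) in received:
        n += 1
    return n

# Packets 2-6 arrived but packet 1 hasn't: nothing can be acked yet.
print(cumulative_ack({2, 3, 4, 5, 6}))     # -> 0
# Once packet 1 turns up, a single ACK of 6 covers all six packets.
print(cumulative_ack({1, 2, 3, 4, 5, 6}))  # -> 6
```

Note the catch visible in the first call: a cumulative ACK can never reach past a gap, which is what the later comments about missing packet 3 and about SACK are getting at.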

Anyone who's knowledgeable in the area care to fill me in?

Original post on Dreamwidth - there are comments there too.

It isn't required that every packet have its own separate ACK. In those slides there's mention of delaying your ACK in case you send a reply shortly afterwards (in which case the ACK and your returned data can be combined into one segment), and also on page 159 of that PDF there's an example which does show one ACK every two packets. It's only required that everything get acked eventually.
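The delayed-ACK behaviour this comment describes can be sketched roughly like this. (A hedged toy model, not real kernel code: RFC 1122 says a receiver should ACK at least every second full-sized segment and must not delay an ACK by more than 500 ms; the class name, the 200 ms timer, and the event-driven shape are all illustrative assumptions.)

```python
# Toy sketch of the delayed-ACK heuristic: ACK every second segment
# immediately, otherwise wait a short time in case more data (or an
# outgoing reply to piggyback on) shows up. Constants are illustrative.

DELAY_MS = 200           # typical delayed-ACK timer (assumed value)
ACK_EVERY_N_SEGMENTS = 2

class DelayedAckReceiver:
    def __init__(self):
        self.unacked_segments = 0
        self.timer_deadline = None  # None = no ACK currently pending

    def on_segment(self, now_ms):
        """Called when a data segment arrives; True means 'send an ACK now'."""
        self.unacked_segments += 1
        if self.unacked_segments >= ACK_EVERY_N_SEGMENTS:
            return self._send_ack()
        if self.timer_deadline is None:
            self.timer_deadline = now_ms + DELAY_MS  # start the delay timer
        return False

    def on_tick(self, now_ms):
        """Timer expiry forces an ACK even with only one segment pending."""
        if self.timer_deadline is not None and now_ms >= self.timer_deadline:
            return self._send_ack()
        return False

    def _send_ack(self):
        self.unacked_segments = 0
        self.timer_deadline = None
        return True
```

So in a steady bulk transfer you get one ACK per two segments, and a lone straggler still gets acknowledged once the timer fires.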

I know of no reason you shouldn't be able to do exactly what you suggest.

Aaah, so it is allowed right now. And there's nothing stopping you from stuffing multiple sequence numbers into the same ACK. The diagrams just rarely show it :->


TCP ACKs (without SACK) are cumulative: they ack all data up to byte N. SACK allows you to say to a sender that you have data up to N then a gap then more data, so the sender knows it doesn't need to re-send the later data.
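The distinction in that comment can be sketched as follows: from the set of byte ranges that have actually arrived, the receiver derives one cumulative ACK point plus the out-of-order blocks it can report via SACK. (Illustrative sketch only; real SACK blocks are carried in a TCP option and are limited to about three or four per segment.)

```python
# Sketch of SACK-style reporting: merge the received byte ranges, take
# the cumulative ACK up to the first gap, and report the rest as
# SACK blocks. Half-open (lo, hi) ranges; all names are illustrative.

def sack_report(ranges, start=0):
    """ranges: list of (lo, hi) byte ranges received.
    Returns (cumulative_ack, sack_blocks)."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1]:
            # Overlapping or adjacent: extend the previous range.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    cum = start
    if merged and merged[0][0] <= start:
        cum = merged[0][1]       # contiguous from the start: ack it all
        merged = merged[1:]
    return cum, merged

# Received bytes 0-2000 and 4000-8000: ack up to 2000, and SACK the
# later block so the sender only retransmits the 2000-4000 gap.
print(sack_report([(0, 2000), (4000, 8000)]))  # -> (2000, [(4000, 8000)])
```

Without SACK the sender only sees the cumulative ACK of 2000 and may resend everything from there, which is exactly the unnecessary retransmission mentioned below.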

An ack for every other packet is usual for a bulk data transfer with current TCP algorithms.

It is also worth noting that the presentation does not include modern TCP features such as SACK, which greatly reduces the unnecessary retransmissions that you can see on slide 96. These will still occur even if you have decent congestion control.

A large chunk of the presentation is talking about the state of TCP in the mid 1980s before Van Jacobson fixed it. The people working on the Internet at that time were surprisingly ignorant of control theory. Here's a good summary of the state of the art at that time:

Van Jacobson's pearls of wisdom from the late 1980s / early 1990s are good introductions to how TCP congestion control works:

And his rant on queues is helpful for getting your head around ack clocking:

Historical, mostly.

It was created back in the early 70s, when the speed of transfer was orders of magnitude lower than today.

It's also meant to be massively redundant - when the US military got involved, the idea was for a network that was resilient to nuclear attack.

With today's speed, you run into the potential problem of missing a packet in the middle - in your example, send back 6, but you actually don't have packet 3.

Well yes, but then you wouldn't ACK "*6" - you'd ack "*2", "4", "5", "6".

Apparently, yes: a single ack can be for multiple packets.

"ACK can get complicated. It isn't for every data packet, but for however many have been received so there might be one ACK every 8 packets. The sending side has a window which is how much it will send before it must receive an ACK. Then there is Selective ACK which is used to say "Received bytes 2000-8000, but not 0-2000" "
