TCP delay on Linux?
Aaron Bolyard
Member #7,537
July 2006


I have a problem I'm trying to diagnose. In my project, a number of programs communicate over TCP using a custom message format. A central server handles filtering messages, publishing, connections, etc.

Here's an example:

Client A -> Server -> Client B

The server receives messages from Client A without any problems, but the messages destined for Client B eventually pile up until it falls too far behind.

I've tried it with Client B doing no processing at all on received messages, and it still gets backed up. This behavior doesn't occur on FreeBSD (my primary platform), only on Linux. It only happens when many thousands of messages are sent within roughly 20 ms. The size of each send varies--it could be a few dozen bytes or a few hundred kilobytes.

I'm just wondering whether there's some strange networking configuration that could cause such behavior, because the code is identical on FreeBSD and Linux. If not, the problem is probably elsewhere.


Chris Katko
Member #1,881
January 2002

1) Is there a reason you're not combining tiny packets to reduce overhead?

2) Did you know the TCP stack can combine small packets automatically to reduce overhead? That's Nagle's algorithm. On a POSIX socket you can turn it off with the TCP_NODELAY option.


In any request-response application protocol where the request data can be larger than a packet, Nagle's algorithm can artificially impose a few hundred milliseconds of latency between the requester and the responder, even if the requester has properly buffered the request data.

3) Does the effect show up with UDP?

I'm sure you know that TCP is horrendous for predictability, as any network engineer can attest (a la Google).

“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs

Aaron Bolyard
Member #7,537
July 2006

1) I have discrete messages I send. I can't combine them or latency skyrockets. Clients are supposed to process messages as soon as possible; there mustn't be a long delay.

2) I've already disabled that with 'ip::tcp::no_delay'--I should have mentioned that I use Boost.Asio.

3) I'm not porting my server-client system to UDP, so I don't know.

I'm sure you know that TCP is horrendous for predictability, as any network engineer can attest. (ala Google)

I'm not crossing a physical network boundary, so most of TCP's predictability problems should be less relevant here.

Again, the problem only exists on Linux builds, which is why I was wondering whether it's down to some arcane network configuration option or similar.

edit: The bug was elsewhere. On the Linux build, a certain feature was generating hundreds of times more events than it should have, causing congestion.
