The SO question has the source code for a simple server and client that 
demonstrate and isolate the problem. Basically I'm timing the latency of a 
ping-pong (client-server-client) message. I start by sending one message 
every millisecond. I wait for 200k messages to be sent so that 
HotSpot has a chance to optimize the code. Then I change my pause time from 
1 millisecond to 30 seconds. To my surprise, my write and read operations 
become considerably slower.
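
For context, here is a minimal sketch of the kind of client loop I mean (the 
real code is in the SO question; the port, the 8-byte message size and the 
busySleep helper here are just placeholders):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class PingPongClient {

    // busy-wait so the pause itself stays off the OS scheduler
    static void busySleep(long nanos) {
        long end = System.nanoTime() + nanos;
        while (System.nanoTime() < end) { /* spin */ }
    }

    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9999));
        channel.socket().setTcpNoDelay(true);           // disable Nagle's Algorithm

        ByteBuffer buf = ByteBuffer.allocateDirect(8);

        final int warmup = 200_000;                     // let HotSpot optimize first
        final long shortPause = 1_000_000L;             // 1 millisecond
        final long longPause  = 30_000_000_000L;        // 30 seconds

        for (long i = 0; ; i++) {
            busySleep(i < warmup ? shortPause : longPause);

            long start = System.nanoTime();

            buf.clear();                                // send the ping
            buf.putLong(i);
            buf.flip();
            while (buf.hasRemaining()) channel.write(buf);

            buf.clear();                                // wait for the pong (8 bytes echoed back)
            while (buf.hasRemaining()) {
                if (channel.read(buf) == -1) return;    // server closed the connection
            }

            long rtt = System.nanoTime() - start;
            if (i >= warmup) {
                System.out.println("msg " + i + " round-trip: " + rtt + " ns");
            }
        }
    }
}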

I don't think it is a JIT/HotSpot problem. I was able to pinpoint the 
slowdown to the native JNI calls to write (write0) and read. Even if I 
change the pause from 1 millisecond to 1 second, the problem persists.

I was able to observe this on both macOS and Linux.

Does anyone here have a clue what might be happening?

Note that I'm disabling Nagle's Algorithm with setTcpNoDelay(true).

SO question with code and output: 
http://stackoverflow.com/questions/43377600/socketchannel-why-if-i-write-msgs-quickly-the-latency-of-each-message-is-low-b

Thanks!

-JC
