You certainly raised some interesting questions in this reply. After reading your email I've thought and rethought the problem, and in fact it doesn't make any sense to expect the individual lifetime to be as low as the average in a flood situation.
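To make the queueing effect concrete, here's a minimal sketch (illustrative only, not MINA code - a plain fixed thread pool stands in for the server's 2 CPUs, and the names and sizes are placeholders): with 2 workers and ~1ms of processing per request, the last request in a flood of 100 has a lifetime dominated by queueing, not processing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A fixed pool of 2 workers stands in for a 2-CPU server. Each "request"
// takes ~1ms of actual processing, but queued requests accumulate wait
// time, so their measured lifetime far exceeds the processing time.
public class FloodDemo {
    static long[] run(int workers, int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        long start = System.nanoTime();
        List<Future<Long>> lifetimes = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            lifetimes.add(pool.submit(() -> {
                Thread.sleep(1); // ~1ms of actual processing
                return (System.nanoTime() - start) / 1_000_000; // lifetime in ms
            }));
        }
        long first = lifetimes.get(0).get();
        long last = lifetimes.get(tasks - 1).get();
        pool.shutdown();
        return new long[] { first, last };
    }

    public static void main(String[] args) throws Exception {
        long[] r = run(2, 100);
        System.out.println("first request lifetime: " + r[0]
                + "ms, last: " + r[1] + "ms");
    }
}
```

The first request finishes in roughly its processing time, while the last one's lifetime is dozens of milliseconds - the same shape as the 40~80ms individual averages against a ~1ms global average.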
If I make a single request that takes 5ms from client to server and back, then the average RTT is 5ms, equal to that single packet's RTT. This will always be true, because a single request, assuming the server is not serving other requests at that time, will always have the resources to be handled instantly.

Now if I flood the server with 1000 of those requests in a very short interval, say, 50ms, the resources to handle those requests are - naturally - limited, so a few of them will be selected for processing and the rest will be queued. So while that first handful of requests will have an RTT nearing 5ms, the queued ones will have an ever-increasing RTT - even though the *actual processing time* was something like 1ms. This is why the sequential test (send a packet only after the previous one has been received) was showing the individual RTT average perfectly matching the global average RTT.

This is more than obvious; I don't know what got into me to think that I could simply flood the server with a bazillion requests only to see them all answered at the same time on a computer with 2 CPUs :P I believe what was clouding my judgement was the fact that I was only looking at responsiveness and not throughput - since in this particular scenario, throughput is secondary. If you have any advice on how to improve responsiveness, I'd be very grateful! It's not much of a big deal in the real scenario, as packets come at a 10/20ms interval, so that clogging effect should be minimal.

Thanks for your enlightenment.

Best regards,
  Bruno

On Sat, Jul 25, 2009 at 12:04 AM, Emmanuel Lecharny<[email protected]> wrote:
> Bruno,
>
> I want to give you some results we got on Apache Directory Server, which is
> based on MINA.
>
> Currently on my laptop, a 2GHz Centrino, I'm able to process around 4000
> search requests per second. That means a request is handled in *less than
> 1/4 of a millisecond*. And trust me on that, the processing is quite complex.
>
> Whatever you do with currentMillis on the message level is irrelevant.
> Measuring the execution time for a single message will at best give you a
> consistent duration only when the message is stuck for more than a few tens
> of ms. Otherwise, you will get a random value between 0 and 20 ms, depending
> on your platform.
>
> There is of course no guarantee that every message will be processed in less
> than 200 ms, this is Java... It's just that, statistically, it will be the
> case for 99.999% of the messages.
>
> So the best you can do to measure the time it takes to process a single
> message is to do what you are doing, but only gather times when a message is
> processed in more than, say, 50 ms. Otherwise, simply consider that the
> message has been processed 'fast enough'.
>
> When you get the number of messages processed in more than 50 ms, you
> will also see whether this is acceptable from your PoV.
>
> Last thing, as Joshua Bloch says in his book "Effective Java":
>
> "...Finally, note that System.nanoTime is used to time the activity rather
> than System.currentTimeMillis. For interval timing, always use
> System.nanoTime in preference to System.currentTimeMillis. System.nanoTime
> is both more accurate and more precise, and it is not affected by
> adjustments to the system's real-time clock...." (Item 69, page 276)
>
>
> Bruno de Carvalho wrote:
>>
>> Meanwhile, I've updated the code and implemented another flag,
>> SEQUENTIAL. With this flag, packet N+1 will only be sent to the server
>> after packet N has been received. With this flag enabled you will
>> better grasp what I'm experiencing - sequential mode has equal global
>> and per-packet averages, while in non-sequential mode this doesn't
>> happen.
>>
>> The updated file is at http://bruno.factor45.org/LifetimeIssues.java
>>
>> I'll try to dig into MINA's code and see what I can come up with.
>>
>> Best regards,
>>   Bruno
>>
>> On Fri, Jul 24, 2009 at 3:18 PM, Christopher
>> Popp<[email protected]> wrote:
>>>
>>> Hello,
>>>
>>>>> Speaking in numbers, executing the test multiple times, I get a
>>>>> constant global average of ~1ms lifetime, but individual lifetime
>>>>> measurement averages range 40~80ms.
>>>
>>> I downloaded your code and gave it a run... it printed out the following
>>> with the defaults (no sleep).
>>>
>>> [SERVER] Binding Server to localhost/127.0.0.1:20002 TCP
>>> [SERVER] Bound to localhost/127.0.0.1:20002
>>> [SERVER] Session created: /127.0.0.1:2598
>>> [CLIENT] Session created: localhost/127.0.0.1:20002
>>> [CLIENT] Session opened: localhost/127.0.0.1:20002
>>> [CLIENT] Connected to server, sending message.
>>> [SERVER] Session opened: /127.0.0.1:2598
>>> [SERVER] Test completed, took 63ms to receive 100 packets (63ms
>>> after sleep discounts).
>>> [SERVER] Calculated average lifetime was 00.63ms.
>>> [SERVER] Individual average lifetime was 3.03ms.
>>> [CLIENT] Session closed: localhost/127.0.0.1:20002
>>> [SERVER] Session closed: /127.0.0.1:2598
>>> [SERVER] Received a total of 5300b
>>>
>>> Maybe I am missing something, but it seems pretty reasonable to me. I
>>> ran it under Windows XP on a laptop with a dual-core processor.
>>>
>>> Chris
>
> --
> cordialement, regards,
> Emmanuel Lécharny
> www.iktek.com
> directory.apache.org
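Emmanuel's suggestion above (time each message with System.nanoTime and only record the ones over a ~50ms threshold) could be sketched roughly like this - the class name, threshold, and Runnable-based hook are placeholders, not MINA API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: wrap message handling, time it with the monotonic
// System.nanoTime clock, and count only the "slow" messages. Fast messages
// are considered processed "fast enough" and are not recorded.
public class SlowMessageTracker {
    private static final long THRESHOLD_NANOS = 50_000_000L; // 50 ms

    private final AtomicLong totalCount = new AtomicLong();
    private final AtomicLong slowCount = new AtomicLong();

    public void onMessage(Runnable handler) {
        long start = System.nanoTime(); // monotonic, unlike currentTimeMillis
        handler.run();
        long elapsed = System.nanoTime() - start;
        totalCount.incrementAndGet();
        if (elapsed > THRESHOLD_NANOS) {
            slowCount.incrementAndGet();
            System.out.printf("slow message: %.1f ms%n", elapsed / 1e6);
        }
    }

    public long totalCount() { return totalCount.get(); }
    public long slowCount()  { return slowCount.get(); }
}
```

The ratio slowCount/totalCount then tells you directly whether the number of >50ms messages is acceptable from your point of view, without the 0-20ms noise of per-message currentTimeMillis deltas.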
