On Fri, 2009-07-24 at 10:42 +0200, Emmanuel Lecharny wrote:
> Bruno de Carvalho wrote:
> > Hi,
> >
> >
> > Before I begin, let me just throw out a big 'thank you' to the folks
> > that made MINA what it is. It's truly a remarkable library and besides
> > using and abusing it, I've also been recommending it to everyone I
> > know ;)
> >
> > On to the problem, I'm having an issue regarding message processing
> > times. The test code involves a client and a server, both launched from
> > within the same application. Before the client floods the server with N
> > objects, the test initiation instant is saved with
> > System.currentTimeMillis().
> > When the last packet is received on the server side, the time difference
> > is computed, and the average time it takes for each packet to travel from
> > the client to the server (I'll refer to this as a packet's 'lifetime') is
> > calculated as totalTestTime/numberOfPackets.
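> >
> > For reference, the global measurement boils down to something like this
> > (a simplified sketch; session, packets etc. are placeholders, not the
> > actual test code):
> >
> >     long testStart = System.currentTimeMillis();
> >     for (int i = 0; i < numberOfPackets; i++) {
> >         session.write(packets[i]); // MINA IoSession.write()
> >     }
> >     // server side, when the last (Nth) packet arrives:
> >     long totalTestTime = System.currentTimeMillis() - testStart;
> >     double globalAverage = (double) totalTestTime / numberOfPackets;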
> >
> > If I calculate this using another approach, which keeps each packet's
> > before-send instant in an array and calculates, upon reception, its
> > individual lifetime (the time from client to server), I get average
> > values way above the global average.
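> >
> > Per packet it's roughly (again a simplified sketch, not the exact code):
> >
> >     // client side, just before sending packet i:
> >     sendInstants[i] = System.currentTimeMillis();
> >     session.write(packets[i]);
> >
> >     // server side (same JVM in this test), upon receiving packet i:
> >     long lifetime = System.currentTimeMillis() - sendInstants[i];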
> >
> > Speaking in numbers: executing the test multiple times, I get a
> > consistent global average lifetime of ~1ms, but the individual lifetime
> > averages range from 40 to 80ms.
> >
> > If I introduce a sleep as small as 4-5ms between sending each packet from
> > the client to the server, the results become consistent: the global
> > lifetime average tends to match the individual lifetime average. So it
> > looks as if the server is choking on many simultaneous packets. Is there
> > any way around this?
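> >
> > The workaround that makes both averages agree is essentially (sketch,
> > exception handling omitted):
> >
> >     for (int i = 0; i < numberOfPackets; i++) {
> >         sendInstants[i] = System.currentTimeMillis();
> >         session.write(packets[i]);
> >         Thread.sleep(5); // a 4-5ms pause between packets
> >     }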
> >
> > The code is ultra-simple and available at
> > http://bruno.factor45.org/LifetimeIssues.java in case someone wants to
> > see what I'm talking about (change ".java" to ".zip" for the full
> > project with libs, ready to run).
> >
> > I thought it could be a threads-to-CPU issue, so I've tested with client
> > and server in two different applications (the only difference is that the
> > measurement is also made on the client side, with the server mirroring
> > packets back). The same thing happens. I even tried with client and
> > server on different machines, only to find that it still happens.
> >
> > I'm basically looking for a way to support heavy bursts without that
> > per-packet performance penalty. Is it possible?
> >   
> You have to know one important thing about System.currentTimeMillis(): its 
> accuracy is around 10ms. That means you won't be able to measure 
> something which lasts less than 10ms; you will most of the time get 10ms. 
> If it lasts 11ms in real life, you will get either 10ms or 20ms.
> 
> You should rather use System.nanoTime(), bearing in mind that you may 
> also get negative values...
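>
> Something like this (a minimal sketch; nanoTime's origin is arbitrary, so
> a single reading may even be negative, and only differences between two
> readings are meaningful):
>
>     // uses java.util.concurrent.TimeUnit
>     long start = System.nanoTime();
>     // ... packet travels from client to server ...
>     long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);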

Thanks for the quick feedback.

I am aware of the coarse granularity of
System.currentTimeMillis(), but in this case I don't believe it
explains what's happening here. Still, I've followed your
suggestion and implemented the timing with nanoTime(), and I'm still
getting the same results.


Best regards,
  Bruno


