On 26/07/2009, at 10:19 PM, Emmanuel Lecharny wrote:
> Matthew Phillips wrote:
>> On 26/07/2009, at 6:29 PM, Emmanuel Lecharny wrote:
>>> Matthew Phillips wrote:
>>>> Hello,
>>>> I'm using IoSession.getReadBytesThroughput() and
>>>> getWrittenBytesThroughput() to output stats from a MINA-based
>>>> application, and even under heavy load these are always 0.0. Is
>>>> there something I need to enable?
>>> I'm not sure they are still active. We had some issues with
>>> concurrent updates of those counters.
>> I can see that might be tricky with MINA's highly concurrent
>> architecture. I was planning to implement something like these
>> counters myself as part of a "bad client" throttling/disconnection
>> filter. Is it planned to fix the built-in counters before 2.0
>> final, or should I proceed to do it myself?
> IMO, such counters should be thought of as a filter, not a base
> element of the MINA architecture. Having them hard-coded makes them
> a bottleneck right now and introduces a cost that is not generally
> wanted. So if you are thinking of building them as a filter, I must
> say that's probably the best idea. We will probably do that some
> time in the future anyway...
That sounds reasonable, especially since a filter in the right place
would (mostly) solve the concurrency problem. But it would be
problematic for the filter to simply call IoSession.updateThroughput(),
because (a) the counts are longs, and therefore not atomically updated,
and (b) the AbstractIoSession.updateThroughput() method updates a
number of related counters with no sync block, which means that other
threads reading the throughput numbers could see garbled results
(especially if they read a counter halfway through an update). As an
alternative, I could have the filter keep its own stats in AtomicLongs.
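To make the idea concrete, here's a rough sketch of the counting core such a filter could keep per session, independent of the MINA APIs. The class name and the poll-based rate calculation are my own invention, not anything in MINA; in a real IoFilter the two increment methods would be called from messageReceived()/messageSent(), and the poll would run on a periodic stats thread:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: byte counters held in AtomicLongs so that
// increments from I/O threads are atomic, avoiding the unsynchronized
// long updates in AbstractIoSession.updateThroughput().
public class ThroughputStats {

    private final AtomicLong readBytes = new AtomicLong();
    private final AtomicLong writtenBytes = new AtomicLong();

    // Snapshot from the previous poll, used to derive bytes/second.
    // Guarded by "synchronized" on the poll method; only the stats
    // thread touches these.
    private long lastReadBytes;
    private long lastPollTimeMillis;

    // Start time injected for testability; a real filter would pass
    // System.currentTimeMillis() at session creation.
    public ThroughputStats(long startMillis) {
        this.lastPollTimeMillis = startMillis;
    }

    // Called from the filter's messageReceived()/messageSent().
    public void bytesRead(long n)    { readBytes.addAndGet(n); }
    public void bytesWritten(long n) { writtenBytes.addAndGet(n); }

    public long getReadBytes()    { return readBytes.get(); }
    public long getWrittenBytes() { return writtenBytes.get(); }

    // Read throughput in bytes/second since the previous poll; the
    // written-side calculation would be symmetric. Safe to call from
    // a single periodic stats thread while I/O threads increment.
    public synchronized double pollReadThroughput(long nowMillis) {
        long total = readBytes.get();
        double seconds = (nowMillis - lastPollTimeMillis) / 1000.0;
        double rate = seconds > 0 ? (total - lastReadBytes) / seconds : 0.0;
        lastReadBytes = total;
        lastPollTimeMillis = nowMillis;
        return rate;
    }
}
```

A bad-client filter could then compare the polled rate against a threshold and schedule a throttle or close of the offending session.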
I'm sure I'm not telling you anything new here, but it does seem that
any reasonable attempt at handling misbehaved clients would need such
stats, and handling DoS conditions (accidental or deliberate) is
something nearly every MINA-based app will need to do if it gets
deployed in a production environment.
Cheers,
Matthew.