On 30/01/2009, at 9:56 AM, Paul Davis wrote:

> The way that stats are calculated currently, with the dependent
> variable being time, could cause some issues in implementing more
> statistics. With my extremely limited knowledge of stats, I think
> making them dependent on the number of requests might be better.
> This is something that hopefully someone out there knows more about.
> (This is in terms of "avg for last 5 minutes" vs "avg for last 100
> requests", the latter making stddev-type stats calculable on the fly
> in constant memory.)

The problem with using # of requests is that, depending on your data, each request may take a long time. I have this problem at the moment: 1008 documents in a 3.5G media database. During a compact, the status in _active_tasks updates every 1000 documents, so you can imagine how useful that is :/ I thought it had hung (and neither the beam.smp CPU time nor the IO requests was a good indicator). I spent some time chasing this down as a bug before realising the problem was in the status granularity!
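
If status updates were throttled by wall-clock time rather than by document count, that granularity problem would disappear. A rough sketch of what I mean, again in illustrative Python with all names invented:

    import time

    def compact(docs, process, report, interval=5.0):
        # Report status at most every `interval` seconds of wall-clock
        # time instead of every N documents, so progress stays visible
        # even when individual documents are huge.
        last_report = time.monotonic()
        for i, doc in enumerate(docs, start=1):
            process(doc)
            if time.monotonic() - last_report >= interval:
                report(i, len(docs))
                last_report = time.monotonic()
        report(len(docs), len(docs))  # final status

A byte-based progress measure would arguably be even better for media databases, since documents vary so much in size.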

Antony Blakey
-------------
CTO, Linkuistics Pty Ltd
Ph: 0438 840 787

The ultimate measure of a man is not where he stands in moments of comfort and convenience, but where he stands at times of challenge and controversy.
  -- Martin Luther King

