On Wed, Oct 31 2012, Eoghan Glynn wrote:

> Yep the sum of local maxima is not lossy as long as the requested
> duration completely encapsulates the compute agent outage (and the
> instance doesn't restart during the outage).

Actually, if there's one restart, it still _can_ be safe in certain
circumstances, such as in a case like:

Time | Value
0    | 1000
1    | 3000 (agent down)
2    | 0    (agent down)
3    | 80
4    | 100

In this particular case, where your agent was down at t1 and t2, the
API will detect that the counter was reset while the agent was down.
With the cumulative model, the loss is again smaller than with the
computed-delta model.

OTOH, both models fail to get some data in a case like:

Time | Value
0    | 1000
1    | 3000 (agent down)
2    | 0    (agent down)
3    | 8000
4    | 10000
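Both tables can be reproduced with a minimal sketch of the computation
being discussed (this is illustrative only, not Ceilometer's actual
code): split the series into monotonically increasing segments wherever
the value drops (a detected reset), and sum the (max - min) delta of
each segment.

```python
def usage(samples):
    """Estimate usage from a cumulative counter that may reset.

    Splits the series into monotonic segments at every point where the
    value drops (a detected reset), then sums (max - min) per segment.
    """
    if not samples:
        return 0
    total = 0
    seg_min = seg_max = samples[0]
    for value in samples[1:]:
        if value < seg_max:          # value dropped: counter reset detected
            total += seg_max - seg_min
            seg_min = seg_max = value
        else:
            seg_max = value
    return total + (seg_max - seg_min)

# First example, with the t1/t2 samples lost to the agent outage:
usage([1000, 80, 100])       # -> 20: reset detected, some data recovered
# Second example: 8000 > 1000 hides the reset, so that data is lost:
usage([1000, 8000, 10000])   # -> 9000
```

A fully monotonic series is just the single-segment case, so the same
computation covers both the resetting and non-resetting counters.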

> However I was more thinking of the scenario where the duration
> requested  via the API is say t1..t4 in your example above.
> In any case, do we need a new measurement type, in addition to the
> existing CUMULATIVE type, that captures the non-monotonic nature of
> the measure and alerts the API that special handling is required to
> compute say max-min?
> Something like TRANSIENT_CUMULATIVE, if that's not too much of a
> mouthful.

We already discussed this with Doug, and came to the conclusion that we
don't, because the monotonic case is just a special case of the
non-monotonic one. So applying the computing method for the
non-monotonic case will solve all the problems.

Julien Danjou
-- Free Software hacker & freelance
-- http://julien.danjou.info


Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
