On 06/15/10 01:55 PM, Karen Tung wrote:
On 06/14/10 08:11 AM, Dave Miner wrote:
Generally OK with your responses, but more comments on this one:

On 06/10/10 07:41 PM, Karen Tung wrote:
Hi Dave,

Thank you for reviewing the document.  Please see my responses inline.

On 06/09/10 09:09, Dave Miner wrote:
...

I'm disappointed that the methodology for determining checkpoint
progress weighting is still TBD.  I'd thought this was one of the
things we were trying to sort out in prototyping, but perhaps I
assumed too much.  When can we expect this to be specified?

The prototype confirmed that we can have each of the checkpoints report
its own weight, and that the engine can use
these weights to normalize the progress reported by the checkpoints.  We
also showed in the prototype that we
can use the logger for progress reporting.

I don't have plans to work on the methodology in detail in the short
term.  It would involve
specifying exactly which machine, with what configuration, should be used
as the standard,
and also providing the mapping between a performance number generated on
that
machine and the weight.  To do this accurately, I think it would
take more research
and experimentation to determine what would work for most cases.  If we
have the
code in the engine to accept and interpret weights provided by
checkpoints, then when we
eventually have the methodology in place, we can just change the value
returned by the get_performance_estimate() function in the checkpoints,
which should
have very minimal impact.  In the meantime, we can have the
checkpoints return
the "guessed" weight like we do now.


I don't think we need to define a mapping of a performance number to a
weight.  The weights are relative to each other in a sequence, not to
some absolute standard, so all you should need from the checkpoint is
a number that you can then compute as a ratio against the sum of all
the estimates.  I think you can define a configuration that's fairly
easily available (T2000 LDOM with 1 GB of memory or something) and go.
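To illustrate the point about relative weights, here is a minimal sketch (names are illustrative, not the actual engine API): each checkpoint supplies an estimate, say seconds on a reference machine, and the engine derives each checkpoint's share of total progress as a ratio against the sum, so no absolute mapping to a weight is needed.

```python
# Hypothetical sketch: turn per-checkpoint estimates into relative
# weights by dividing each estimate by the sum of all estimates.

def normalize_weights(estimates):
    """Map per-checkpoint estimates to fractions of total progress."""
    total = sum(estimates)
    return [est / total for est in estimates]

# e.g. three checkpoints estimated at 10s, 30s, and 60s:
# normalize_weights([10, 30, 60]) -> [0.1, 0.3, 0.6]
```

Since the weights are ratios, the estimates only need to be consistent with each other on the reference configuration; the absolute magnitude cancels out.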

Dave

Hi Dave,

You mean, we would ask people to run their checkpoint on a system with a
standard configuration,
and have the get_progress_estimate() function return the number of
minutes it took for the
checkpoint to execute on that system?


I'd imagine seconds would be a more useful unit in general, but yes, that would be one way, at least initially. Another would be to collect installation logs from a wide variety of systems, with the checkpoint execution times logged, then aggregate and average that data.
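The log-aggregation approach could be sketched as follows (a hypothetical illustration, not the actual tooling): given execution times in seconds for one checkpoint, collected from installation logs across many systems, average them to produce the number the checkpoint's estimate function would return.

```python
# Hypothetical sketch: average a checkpoint's logged execution times
# (in seconds) across many installs to derive its progress estimate.

def average_estimate(logged_times):
    """Average the logged execution times, rounded to whole seconds."""
    return round(sum(logged_times) / len(logged_times))

# times for the same checkpoint from five different installs:
# average_estimate([42, 58, 51, 47, 62]) -> 52
```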

Dave
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
