Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> based on lots of customer and internal datacenter activity, in some
> cases spanning nearly a decade ... an initial set of 1000 benchmarks
> were defined for calibrating the resource manager ... selecting a wide
> variety of workload profiles and configuration profiles. these were
> specified and run by the automated benchmarking process.
re:
http://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Expanded Storage
http://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Expanded Storage
http://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Expanded Storage

and minor addenda to actual (implementation) benchmarking results corresponding to theory/model/prediction ... the modified predictor not only specified the workload profile (things like batch, interactive, mixed-mode, etc) and configuration ... but also scheduling priority. so not only did the actual (implementation) overall benchmarking results have to correspond to theory/model/prediction ... each individual virtual machine's measured benchmark resource use (cpu, paging, i/o, etc) also had to correspond to the theory/model/prediction for that virtual machine ... including any variations introduced by changing the individual virtual machine's scheduling priority.

a side issue was when i released the resource manager ... they wanted me to do an updated release on the same schedule as the monthly PLC releases for the base product. my problem was that i was responsible for doing all the documentation, classes, support, changes, maintenance, and (initially) answering all trouble calls ... basically as a sideline hobby ... independent of other stuff I was supposed to be doing at the science center (aka i was part of the development organization ... at the time, occupying the old SBC building in burlington mall). I argued for and won ... only having to put out a new release every three months instead of along with every monthly PLC. part of this was that it was just a sideline hobby ... the other was that i insisted that I repeat at least 100-200 benchmarks before each new minor (3 month) release to validate that nothing had affected the overall infrastructure (and major changes to the underlying system might require repeating several hundred or thousands of benchmarks).
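the per-virtual-machine validation step described above can be sketched roughly like this ... a hypothetical illustration only (the actual automated benchmarking process was nothing like python, and the metric names, sample numbers, and 5% tolerance band are all my assumptions, not from the original system):

```python
# sketch: check each virtual machine's measured resource use (cpu,
# paging, i/o) against the model's prediction; a benchmark passes
# only if every metric of every virtual machine is within tolerance.

TOLERANCE = 0.05  # assumed acceptable relative deviation (5%)

def validate_benchmark(predicted, measured, tolerance=TOLERANCE):
    """Return (vm, metric, deviation) entries exceeding tolerance."""
    failures = []
    for vm, metrics in predicted.items():
        for metric, expect in metrics.items():
            actual = measured[vm][metric]
            deviation = abs(actual - expect) / expect
            if deviation > tolerance:
                failures.append((vm, metric, deviation))
    return failures

# illustrative data: predicted vs measured use for two virtual machines
predicted = {
    "vm1": {"cpu": 100.0, "paging": 50.0, "io": 200.0},
    "vm2": {"cpu": 300.0, "paging": 20.0, "io": 80.0},
}
measured = {
    "vm1": {"cpu": 103.0, "paging": 49.0, "io": 205.0},
    "vm2": {"cpu": 295.0, "paging": 21.0, "io": 78.0},
}

failures = validate_benchmark(predicted, measured)
print("PASS" if not failures else failures)  # → PASS
```

a real run would also have to vary the scheduling priority of individual virtual machines and re-check the per-machine predictions, as mentioned above.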
-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
