Hi Gang:

I think Terje has shown us that profiling yum is easy enough and produces some pretty useful output.

Given that yum's performance is a frequent topic on this and other mailing lists, maybe it's time we set up some infrastructure to regularly check out the source code, run it through some typical operations, and then graph the results over time?

I'm thinking we could set up something like a chroot (or maybe even a VIRTUAL APPLIANCE) for the profiled yum to operate in, against a local set of repos.

So, what would be a good set of packages to use for the repo, and what commands should get run? In honor of Terje, 'yum install xpdf' has to be one of them.
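As a very rough, untested sketch of the harness side (the chroot path, the CSV file name, and the command list below are just placeholders I made up, not anything we've agreed on), it could be as simple as:

#!/usr/bin/env python
# Rough sketch of a timing harness. Assumes a chroot at /srv/yum-bench
# (hypothetical path) already populated with yum and pointed at local
# repos; the command list is just a strawman.
import csv
import subprocess
import time
from datetime import date

CHROOT = "/srv/yum-bench"           # placeholder chroot location
COMMANDS = [                        # candidate operations to benchmark
    ["yum", "-y", "install", "xpdf"],
    ["yum", "check-update"],
    ["yum", "search", "kernel"],
]

def run_timed(cmd):
    """Run one yum command inside the chroot, return wall-clock seconds."""
    start = time.time()
    subprocess.call(["chroot", CHROOT] + cmd)
    return time.time() - start

if __name__ == "__main__":
    with open("yum-bench.csv", "a") as out:
        writer = csv.writer(out)
        for cmd in COMMANDS:
            writer.writerow([date.today().isoformat(), " ".join(cmd),
                             "%.2f" % run_timed(cmd)])

Cron could run that nightly against a fresh checkout and just keep appending to the CSV.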

Output could be a graph per yum command, plotting date against the time taken to run the operation, with each command run against a few repos of different sizes.
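Something along these lines could chew on the CSV from the sketch above and spit out one PNG per command; matplotlib's Agg backend renders straight to files, so no X needed. Again, this assumes the made-up CSV layout from the harness sketch:

# Sketch: one "date vs seconds" PNG per yum command from yum-bench.csv.
import csv
from collections import defaultdict
from datetime import datetime

import matplotlib
matplotlib.use("Agg")               # render to files, no display required
import matplotlib.pyplot as plt

runs = defaultdict(list)            # command -> [(date, seconds), ...]
with open("yum-bench.csv") as f:
    for day, cmd, secs in csv.reader(f):
        runs[cmd].append((datetime.strptime(day, "%Y-%m-%d"), float(secs)))

for cmd, points in runs.items():
    plt.figure()
    plt.plot([p[0] for p in points], [p[1] for p in points], marker="o")
    plt.title(cmd)
    plt.xlabel("date")
    plt.ylabel("seconds to run")
    plt.savefig(cmd.replace(" ", "_") + ".png")
    plt.close()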

I was thinking of just using print_stats() to get the profiling output, then massaging that to draw some graphs, unless the KCachegrind-style output can be manipulated without running X.
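For the print_stats() route, I mean something like the following: profile the run with cProfile, dump the raw stats to disk, and print the top cumulative-time entries as plain text for a graphing script to parse later. The workload() function here is just a dummy standing in for a real yum run, nothing yum-specific:

# Minimal cProfile/pstats example; needs nothing beyond the stdlib, no X.
import cProfile
import pstats

def workload():
    # Placeholder for wrapping yum's entry point in the real harness.
    sum(i * i for i in range(100000))

cProfile.run("workload()", "yum-profile.out")   # raw stats file on disk

stats = pstats.Stats("yum-profile.out")
stats.sort_stats("cumulative")
stats.print_stats(20)                           # top 20 entries as text

Keeping the raw .out files around would also let us re-crunch old runs if we change what we graph.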

What do people think?

-James