Philip J. Mucci wrote:
Hi Will,

Good show... I remember that one test I did that exposed failures was running
multiple copies of the same code simultaneously. FWIW.

Phil

Yes, running separate concurrent measurements and making sure they are kept separate is an important test; it is next on the implementation list. I want to build up a list of tests covering the ways libpfm/pfmon could fail, so that we can just run the testsuite and be reasonably confident we know what is working and what is broken.
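To make the idea concrete, here is a rough sketch of how such a concurrent test could be driven. It is only an illustration, not code from the testsuite; the pfmon binary on the PATH, its -e/--events option, the event name, and the ./workload program are all assumptions that would need to match the real setup.

/* concurrent_test.c -- illustration only: launch several identical pfmon
 * measurements at the same time so their results can be compared.
 * "pfmon", its -e option, the event name, and ./workload are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NCOPIES 4

int main(void)
{
    pid_t pids[NCOPIES];
    int i, status, failed = 0;

    for (i = 0; i < NCOPIES; i++) {
        pids[i] = fork();
        if (pids[i] < 0) {
            perror("fork");
            return 1;
        }
        if (pids[i] == 0) {
            /* each child gets its own pfmon session on the same workload */
            execlp("pfmon", "pfmon", "-e", "RETIRED_INSTRUCTIONS",
                   "./workload", (char *)NULL);
            perror("execlp");
            _exit(127);
        }
    }

    for (i = 0; i < NCOPIES; i++) {
        /* any copy that fails to run or exits nonzero fails the test */
        if (waitpid(pids[i], &status, 0) < 0 ||
            !WIFEXITED(status) || WEXITSTATUS(status) != 0)
            failed = 1;
    }

    /* the real check would then parse each pfmon report and verify that
     * every copy's count matches a single-copy baseline within a tolerance */
    return failed;
}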

Automating the testing is needed because QA engineers at Red Hat (and, I assume, at other places) don't have time to run each of the tests manually. It will also make it easier for people who are not experts in libpfm/pfmon to verify that things are working on their machine.

Phil, did you see how the machine-specific events for the tests are selected? Does that fit with what you suggested in your previous email?

-Will


On Tue, 2006-11-07 at 16:42 -0500, William Cohen wrote:

Hi all,

I have fleshed out the perfmon/libpfm testing a bit more:

-checks to determine which processor is supported
-header file (events.h) with the machine-specific events to trigger
        (currently only coded for AMD64; a sketch follows this list)
-test to check that multiple sequential runs of an event get the expected counts
        (a sample workload is sketched below)
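
As an illustration of what such a header could look like (the macro name and the event name here are placeholders, not the actual contents of the attached events.h):

/* events.h -- hypothetical sketch of the machine-specific event table.
 * Only the AMD64 case is filled in, matching the description above;
 * the macro name and event name are placeholders, not the real file. */
#ifndef TEST_EVENTS_H
#define TEST_EVENTS_H

#if defined(__x86_64__)
/* an event expected to count once per retired instruction on AMD64 */
#define TEST_INST_EVENT "RETIRED_INSTRUCTIONS"
#else
#error "no machine-specific events defined for this processor yet"
#endif

#endif /* TEST_EVENTS_H */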

Comments and feedback on this would be appreciated. The attached tar file has the testing code in it. It is basically run in the same manner as the previous version.
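
For the sequential-run check, the measured program needs to do a predictable amount of work. A minimal sketch of such a workload follows; the file name and the iteration count are arbitrary choices for illustration, not taken from the attached code.

/* workload.c -- hypothetical fixed-size workload for the sequential test.
 * Repeated pfmon runs counting retired instructions over this program
 * should report nearly identical counts, since the loop length is fixed. */
#include <stdio.h>

#define ITERATIONS 100000000UL

int main(void)
{
    volatile unsigned long sum = 0;
    unsigned long i;

    for (i = 0; i < ITERATIONS; i++)
        sum += i;

    printf("sum = %lu\n", (unsigned long)sum);
    return 0;
}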

-Will
_______________________________________________
perfmon mailing list
[email protected]
http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/


