The way we run the performance tests has changed. The affected tests are
the ones that used to load the large 3000 event calendar and then run a
performance measurement. We now create a repository backup once, and run
each test against a restored copy of that backup.

Many tests got slower because the needed items must now be loaded all
the way from disk; previously everything was already in memory because
we had just imported it. Some tests also got faster. The new approach
should be closer to real-world use. This means that test results from
today onward cannot be compared with results from yesterday or earlier.

Running the whole test suite became much faster. For example, on Windows
the fully automated test suite used to take 60+ minutes; it's now down
to 33-34 minutes. That is a big enough time savings that we might be
able to have those same boxes also run the functional tests fully
automated - we'll need to check it out.

Now some nitty gritty details (you can stop reading if you never run the
performance tests).

The tools/do_tests.sh script has been updated to use the new way of
running the performance tests. If you want to manually run just one of
the PerfLargeData*.py performance tests, here's what you need to do (you
can also look at the do_tests.sh script).

Do this once to create the backup repository:

$ rm -fr __repository__*
$ release/RunChandler --create --profileDir=. \
    --scriptFile=tools/QATestScripts/Performance/LargeDataBackupRepository.py

This will create a __repository__.001 repository backup directory that
you can use for running the actual tests. To run a test:

$ release/RunChandler --restore=__repository__.001 --profileDir=. \
    --scriptFile=tools/QATestScripts/Performance/PerfLargeDataStampEvent.py
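If you want to run the whole PerfLargeData*.py set against the same
backup, a simple loop does it. This is just a sketch, not part of
do_tests.sh: it assumes you are in the top-level Chandler directory and
that the test scripts match the PerfLargeData*.py glob. It prints each
command (dry run); drop the leading "echo" to actually launch Chandler.

```shell
# Dry run: show the command that would be issued for each
# PerfLargeData*.py test, restoring the backup each time.
for test in tools/QATestScripts/Performance/PerfLargeData*.py; do
    echo release/RunChandler --restore=__repository__.001 --profileDir=. \
        --scriptFile="$test"
done
```

Restoring the backup for every test keeps each run starting from the
same on-disk state, so results stay comparable across tests.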

-- 
  Heikki Toivonen


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev
