On Mon, Sep 16, 2013 at 2:30 PM, Dave Bouvier <d...@bx.psu.edu> wrote:
> Peter,
> Yes, the functional test suite is a bit on the slow side, and one of my
> long-term goals has been to improve the performance as best I can.

Great - I appreciate there are lots of other high-priority issues
with the test framework (e.g. repeats and conditionals), so
speed alone won't be top of the list.

>> Florent has identified one clear bottleneck: the creation of
>> a fresh SQLite database on each run, upon which the growing
>> number of schema migration calls must be performed. Could
>> we not cache an empty but up-to-date copy of this database?
>> i.e. Something like this:
>> 1. run_functional_tests.sh starts
>> 2. If it exists, copy empty_test_sqlite_database.db,
>>     otherwise create new test SQLite database as now.
>> 3. Check the schema version.
>> 4. If the temporary SQLite database is out of date,
>>     run the migration, and then save a copy of this now
>>     up-to-date database as empty_test_sqlite_database.db.
>> 5. Run the tests.
> And that would be very easy to automate, with the environment
> variable GALAXY_TEST_DBURI set to the path to the empty,
> migrated database, and a few aliases or scripts to replace the
> test database file with the "template" file on each run.

I see that environment variable already exists, so perhaps I
can hack something together... at least enough to show
a worthwhile time saving.
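A minimal sketch of the caching scheme in Python, just to show the shape of it - the file names are the ones from the steps above, but the use of SQLite's PRAGMA user_version as the schema version marker (and the dummy migration loop) are illustrative stand-ins for Galaxy's actual migration machinery, not how Galaxy tracks versions:

```python
import os
import shutil
import sqlite3

LATEST_VERSION = 3  # stand-in for the newest schema migration number


def migrate(path, target=LATEST_VERSION):
    """Apply dummy migrations until the database reaches the target version."""
    conn = sqlite3.connect(path)
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    while version < target:
        version += 1
        # A real migration would ALTER tables here; we only bump the marker.
        conn.execute("PRAGMA user_version = %d" % version)
    conn.commit()
    conn.close()
    return version


def fresh_test_db(workdir):
    """Create the per-run test database, reusing a migrated template if present."""
    template = os.path.join(workdir, "empty_test_sqlite_database.db")
    test_db = os.path.join(workdir, "test.db")
    if os.path.exists(template):
        shutil.copy(template, test_db)    # step 2: reuse the cached copy
    else:
        sqlite3.connect(test_db).close()  # step 2: create from scratch
    migrate(test_db)                      # steps 3-4: bring it up to date
    shutil.copy(test_db, template)        # step 4: save/refresh the template
    return test_db
```

On the first run this pays the full migration cost and saves the template; every later run starts from the cached copy and the migration loop is a no-op unless the schema has moved on.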

>> Perhaps the empty SQLite database could even be
>> cached in BitBucket too, for a faster first test run with
>> a clean checkout?
> I'm not sure about that idea; it would feel a bit like adding a
> non-essential file to the codebase which is generated
> programmatically by existing code.

External caching would be enough (and cleaner).

>> Separately, running the tests themselves seems overly
>> slow - can anything be tweaked here? For example, is
>> there any point executing the external set_meta script
>> in the test environment?
> The switch to the external set_meta script was done
> because setting metadata internally was becoming
> computationally expensive. As I understand it, switching
> to internal metadata would actually lead to poorer
> performance in the functional tests.

Sorry, I was unclear - although that is interesting.

What I meant was: do we even need the metadata
for validating the test results? If not, we could skip generating it.

