Hi all,

I'd like to echo Florent's concern that run_functional_tests.sh
is too slow, and that this discourages tool authors from
adding more tests to their tools:


(And encourage people to vote up that issue ;) )

Florent has identified one clear bottleneck: the creation of
a fresh SQLite database on each run, upon which the growing
number of schema migrations must be performed. Could we not
cache an empty but up-to-date copy of this database?

i.e. Something like this:

1. run_functional_tests.sh starts.
2. If empty_test_sqlite_database.db exists, copy it;
   otherwise create a new test SQLite database as now.
3. Check the schema version.
4. If the temporary SQLite database is out of date,
   run the migrations, and then save a copy of this now
   up-to-date database as empty_test_sqlite_database.db.
5. Run the tests.
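To make the idea concrete, here is a minimal sketch of those
steps in Python. Everything except the cache file name is an
assumption for illustration: the test database path, the
schema-version table, and the migrate() stand-in are all
placeholders for Galaxy's real migration machinery.

```python
import os
import shutil
import sqlite3

CACHE = "empty_test_sqlite_database.db"
TEST_DB = "test.sqlite"  # placeholder for the real test DB path
SCHEMA_VERSION = 2       # pretend "current" schema version


def migrate(path):
    """Stand-in for Galaxy's migration scripts: bring the
    database up to SCHEMA_VERSION, returning True only if
    any migration work was actually needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    version = row[0] if row else 0
    migrated = version < SCHEMA_VERSION
    if migrated:
        # ... each real migration step would run here ...
        conn.execute("DELETE FROM schema_version")
        conn.execute("INSERT INTO schema_version VALUES (?)",
                     (SCHEMA_VERSION,))
        conn.commit()
    conn.close()
    return migrated


def prepare_test_db():
    # Step 2: reuse the cached copy if we have one.
    if os.path.exists(CACHE):
        shutil.copy(CACHE, TEST_DB)
    # Steps 3-4: migrate only if out of date, then refresh
    # the cache from the now up-to-date database.
    if migrate(TEST_DB):
        shutil.copy(TEST_DB, CACHE)
        return True
    return False
```

On the first run this pays the full migration cost and saves
the cache; every later run just copies the cached file and the
migration check becomes a cheap no-op.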

Perhaps the empty SQLite database could even be
cached on Bitbucket too, for a faster first test run
from a clean checkout?

Separately, running the tests themselves seems overly
slow - can anything be tweaked here? For example, is
there any point in executing the external set_meta script
in the test environment?



P.S. On a related note, my achievement last Friday
was to get TravisCI doing continuous integration
testing of a GitHub repository of Galaxy tools:


For those not familiar with this system, the idea is
that, via a special .travis.yml configuration file, each
time new commits are pushed to GitHub the latest
code is checked out and tested on a virtual machine -
i.e. continuous in the sense of running the tests
after each change, rather than just once a night.
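For illustration, a minimal .travis.yml might look something
like the fragment below. This is a hedged sketch only - the
install and script commands are placeholders, not the actual
configuration used for the repository mentioned above:

```yaml
# Hypothetical minimal .travis.yml; the commands below are
# placeholders for however the Galaxy tool tests are invoked.
language: python
python:
  - "2.7"
install:
  - bash setup_test_environment.sh   # placeholder
script:
  - bash run_functional_tests.sh     # placeholder
```

Travis CI reads this file from the repository root and runs
the install and script commands on a fresh virtual machine
for each push.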

Right now the tests for the BLAST+ suite and associated
tool wrappers like Blast2GO take 15 to 20 minutes,
which I feel could be much improved.