Why not separate toolshed updates from dist updates? Tool XML and
other tool code should be robust with respect to the dist version.
One thing at a time - tools get updated less often than dist I'd
wager, and you can subscribe to repository update emails.
After a dist update you want all the tool functional tests green as
evidence that at least the test cases are running!
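For example (commands from memory, so check the script name in your
dist; my_tool_id is a placeholder):

  sh run_functional_tests.sh                  # every tool's functional tests
  sh run_functional_tests.sh -id my_tool_id   # or a single tool while debugging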
As always, YMMV
On Sat, Jun 9, 2012 at 2:38 PM, John Chilton <chil...@msi.umn.edu> wrote:
> On Fri, Jun 8, 2012 at 3:27 PM, Greg Von Kuster <g...@bx.psu.edu> wrote:
>> Hi John,
>> On Jun 8, 2012, at 1:22 PM, John Chilton wrote:
>>> Hello Greg,
>>> Thanks for the prompt and detailed response (though it did make me
>>> sad). I think deploying tested, static components and configurations
>>> to production environments, and having production environments not
>>> depend on outside services (like the tool shed), should be
>>> considered best practices.
>> I'm not sure I understand this issue. What processes are you using to
>> upgrade your test and production servers with new Galaxy distributions? If
>> you are pulling
>> new Galaxy distributions from our Galaxy dist repository in bitbucket, then
>> pulling tools from the Galaxy tool shed is not much different - both are
>> outside services. Updating your test environment, determining it is
>> functionally correct, and then updating your production environment using
>> the same approach would generally follow a best practice approach. This is
>> the approach we are currently using for our public test and main Galaxy
>> instances at Penn State.
> We don't pull down from bitbucket directly to our production
> environment. We pull galaxy-dist changes into our testing repository,
> merge (that can be quite complicated, sometimes a multihour process),
> auto-deploy to a testing server, and then finally push the tested
> changes into a bare production repo. Our sys admins then pull the
> changes from that bare production repo into our production environment.
> We also prebuild eggs in our testing environment, not live on our
> production system. Given the complicated merges we need to do and the
> configuration files that need to be updated with each dist update, it
> would seem unwise to make those changes on a live production system.
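> Concretely, the mercurial plumbing looks roughly like this (paths and
> repo names here are made up, ours are messier):
>
>   $ cd /srv/galaxy-testing
>   $ hg pull https://bitbucket.org/galaxy/galaxy-dist
>   $ hg merge && hg commit -m "merge galaxy-dist"  # the multihour part
>   $ python scripts/fetch_eggs.py        # prebuild eggs here, not in prod
>   $ hg push /repos/galaxy-production    # bare repo the sys admins pull from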
> Even if one were pulling changes directly from bitbucket into a
> production codebase, I think the dependency on bitbucket would be very
> different than a dependency on N toolsheds. If our sys admin is going
> to update Galaxy and bitbucket is down, that is no problem: he or she
> can just bring Galaxy back up and update later. Now let's imagine they
> shut down our Galaxy instance, updated the codebase, did a database
> migration, and then went to do a toolshed migration and that failed.
> In this case, instead of just bringing Galaxy back up, they will need
> to restore the database from backup and back out the mercurial changes.
> Anyway, all of that is a digression, right? I understand that we will
> need to have the deploy-time dependencies on tool sheds and make these
> tool migration script calls part of our workflow. My lingering hope is
> for a way of programmatically importing and updating new tools that
> were never part of Galaxy (Qiime, upload_local_file, etc.) using
> tool sheds. My previous e-mail was proposing a mechanism for doing
> that, but I think you read it as describing a way to script the
> migrations of the existing official Galaxy tools
> (I definitely get that you have done that).
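> To make that concrete, the sort of thing I am imagining being able to
> script is (completely hypothetical API and repository names; nothing
> like this exists today as far as I know):
>
>   # hypothetical endpoint and repo/owner names, for illustration only
>   $ curl -X POST \
>       -d "tool_shed_url=http://toolshed.g2.bx.psu.edu" \
>       -d "name=qiime_wrappers" -d "owner=some_owner" \
>       -d "changeset_revision=tip" \
>       "http://our-galaxy.example.org/api/tool_shed_repositories?key=$API_KEY"
>
> plus a matching call to check for and pull updates to an installed
> repository, so the whole thing could run unattended against our test
> instance.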
> Thanks again for your time and detailed responses,
Ross Lazarus MBBS MPH;
Associate Professor, Harvard Medical School;
Head, Medical Bioinformatics, BakerIDI; Tel: +61 385321444;