Thanks John. I think the options I'm after are "enable_tool_shed_install"
and "tool_shed_install_config_file", which were added in rev 6398 and
dropped in rev 6747 when the tool migration scripts were added. I
don't know why they were dropped, but they would be immensely useful
today.
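
For reference, the dropped settings looked something like this in
universe_wsgi.ini (going from memory of the old sample config, so
treat the values as a sketch):

  [app:main]
  enable_tool_shed_install = True
  tool_shed_install_config_file = tool_shed_install.xml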

This is related to the CLIA pipeline, and to a separate effort to
simplify Galaxy VM provisioning with Puppet. From my perspective,
copying changes into that one file is significantly easier than
grooming a mounted disk image. My end goal is dozens of independent
Galaxy instances running different tool chests.

I might just merge the old code back in and submit a pull request;
alternatively, a few scripted calls to
scripts/api/install_tool_shed_repositories.py should also take care of
it.
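
For example, something along these lines per repository (flag names
from memory; check the script's --help for the exact options):

  python ./scripts/api/install_tool_shed_repositories.py \
      --api <admin api key> --local http://localhost:8080/ \
      --url http://toolshed.g2.bx.psu.edu/ \
      --name <repo name> --owner <repo owner>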

Cheers,

-Evan Bollig
Research Associate | Application Developer | User Support Consultant
Minnesota Supercomputing Institute
599 Walter Library
612 624 1447
e...@msi.umn.edu
boll0...@umn.edu


On Tue, May 20, 2014 at 5:01 PM, John Chilton <jmchil...@gmail.com> wrote:
> Hey Evan,
>
> Out of curiosity, are you trying to update MSI's CLIA pipeline to
> utilize tool shed installed tools? (It sounds kind of like yes.)
>
> I don't think the file you mentioned exists in either place - is it a
> typo, or maybe a configuration file with an overridden location in
> universe_wsgi.ini?
>
> Reading between the lines, it sounds like you want Galaxy cloud images
> with completely fresh Galaxy instances but with preinstalled tools?
> There are many variants of the following that I think could work,
> but one approach is:
>
> Create a persistent directory (...or at least a directory that can be
> recreated when you launch a new VM) like:
>
> /mnt/gx_tools
>
> It should have the following contents:
>
> shed_tools/ (an empty directory)
> shed_tool_conf.xml (contents like:
>     <toolbox tool_path="/mnt/gx_tools/shed_tools"></toolbox>)
> dependencies/ (an empty directory)
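>
> Laying that out is just (a quick sketch):
>
> mkdir -p /mnt/gx_tools/shed_tools /mnt/gx_tools/dependencies
> cat > /mnt/gx_tools/shed_tool_conf.xml <<'EOF'
> <?xml version="1.0"?>
> <toolbox tool_path="/mnt/gx_tools/shed_tools">
> </toolbox>
> EOF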
>
> Now create a throw away Galaxy instance and update its
> universe_wsgi.ini with the following properties:
>
> [app:main]
> tool_dependency_dir = /mnt/gx_tools/dependencies
> install_database_connection = sqlite:////mnt/gx_tools/install_database.sqlite?isolation_level=IMMEDIATE
> tool_config_file = tool_conf.xml,/mnt/gx_tools/shed_tool_conf.xml
>
> Bootstrap that instance and install your tools. Then throw away or
> wipe out the Galaxy instance's own data and take a snapshot.
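>
> Roughly (a sketch - this assumes the throwaway instance keeps its own
> data under database/ as usual):
>
> sh run.sh --daemon        # start the throwaway instance
> # ...install repositories via the admin UI or the API script...
> sh run.sh --stop-daemon
> rm -rf database/          # discard the throwaway Galaxy's data;
>                           # /mnt/gx_tools survives for the snapshot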
>
> Now just ensure that new Galaxy VM instances start with the above
> three properties set the same way.
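>
> A quick sanity check on a new instance (sketch):
>
> grep -E '^(tool_dependency_dir|install_database_connection|tool_config_file)' universe_wsgi.ini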
>
> Hope this helps.
>
> -John
>
>
> On Tue, May 20, 2014 at 3:40 PM, Evan Bollig <boll0...@umn.edu> wrote:
>> I want to set up a tool_shed_install.xml to roll out tools at first
>> boot for Galaxy.
>>
>> 1) Is it possible to get Galaxy to generate this based on the tools
>> currently installed from the tool shed?
>>
>> 2) Is it required to list every tool provided by each repository, or
>> can I simply add the repository to the file and have Galaxy assume
>> all of its tools are included?
>>
>> 3) The tool_shed_install.xml.sample file exists only in the
>> galaxy-central#default branch. It should probably be included in
>> galaxy-dist#stable too.
>>
>> Cheers,
>>
>> -Evan Bollig
>> Research Associate | Application Developer | User Support Consultant
>> Minnesota Supercomputing Institute
>> 599 Walter Library
>> 612 624 1447
>> e...@msi.umn.edu
>> boll0...@umn.edu
