The problem here is that the Galaxy update failed to merge a change to run.sh
because of the minor customizations it has. We'll have a long-term fix out for
this soon, but for now you can SSH into your instance and update run.sh
yourself before restarting Galaxy. All you need to do is add
'migrated_tools_conf.xml.sample' to the SAMPLES list in
/mnt/galaxyTools/galaxy-central/run.sh, then execute `sh run.sh --run-daemon`
(or restart Galaxy again from the admin page) and you should be good to go.
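The edit above can be sketched as a small shell snippet. This is a hedged sketch: the exact layout of the SAMPLES list and the sample-file names shown are assumptions about your run.sh (check the real file first), and it works on a scratch copy (run.sh.demo) so you can try it anywhere before touching /mnt/galaxyTools/galaxy-central/run.sh.

```shell
# Work on a scratch copy first; swap in the real path once it looks right.
RUN_SH=run.sh.demo

# Stand-in for the SAMPLES block in run.sh (file names here are illustrative).
cat > "$RUN_SH" <<'EOF'
SAMPLES="
    universe_wsgi.ini.sample
    datatypes_conf.xml.sample
"
EOF

# Add migrated_tools_conf.xml.sample to SAMPLES only if it's not already there.
# (GNU sed: \n in the replacement inserts a newline.)
grep -q 'migrated_tools_conf.xml.sample' "$RUN_SH" || \
    sed -i 's/^SAMPLES="$/SAMPLES="\n    migrated_tools_conf.xml.sample/' "$RUN_SH"

# Show the result; the new entry should now be first in the list.
grep -n 'migrated_tools_conf' "$RUN_SH"
```

Once the equivalent edit is in the real run.sh, restart with `sh run.sh --run-daemon` or from the admin page as described above.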
That new AMI you're seeing is not owned by the Galaxy team, and we don't
actually know who made it. Keep using the same galaxy-cloudman-2011-03-22 for
now (we'll always have the most up-to-date AMI listed at usegalaxy.org/cloud).
Because of CloudMan's modular volume design, almost nothing resides on the AMI
itself, so we can (and do) update the tools and index volumes without having to
touch it. So while the AMI's date is almost a year old, the galaxyTools volume
(and thus the actual Galaxy instance you're running) has been updated much more
recently.
One last note: if you're updating and copying your tools in every time, you may
want to try the 'Persist changes' functionality available in the CloudMan
admin panel. Once you've set your instance up how you want, clicking
'Persist changes to galaxyTools' creates a custom snapshot of your tools
volume that will be used from that point forward with this instance.
Let me know if you have any more issues,
On Mar 9, 2012, at 2:53 AM, Greg Edwards wrote:
> I'm trying to restart my Galaxy Cloudman service, using the same approach
> that has been successful over the last couple of months ..
> - launch AMI 861460482541/galaxy-cloudman-2011-03-22 as m1.large
> - update from Cloudman console
> - copy in my tools etc etc
> - restart
> - away we go, all works
> However today the update fails, the log says ...
> RuntimeWarning: __builtin__.file size changed, may indicate binary
> from csamtools import *
> python path is:
> /mnt/galaxyTools/galaxy-central/eggs/Mako-0.4.1-py2.6.egg, /mnt/g!
> Traceback (most recent call last):
> File "/mnt/galaxyTools/galaxy-central/lib/galaxy/web/buildapp.py", line 82,
> in app_factory
> app = UniverseApplication( global_conf = global_conf, **kwargs )
> File "/mnt/galaxyTools/galaxy-central/lib/galaxy/app.py", line 24, in
> File "/mnt/galaxyTools/galaxy-central/lib/galaxy/config.py", line 243, in
> tree = parse_xml( config_filename )
> File "/mnt/galaxyTools/galaxy-central/lib/galaxy/util/__init__.py", line
> 105, in parse_xml
> tree = ElementTree.parse(fname)
> line 859, in parse
> tree.parse(source, parser)
> line 576, in parse
> source = open(source, "rb")
> IOError: [Errno 2] No such file or directory: './migrated_tools_conf.xml'
> Removing PID file paster.pid
> While I'm here, I see a new Galaxy Cloudman AMI
> 072133624695/galaxy-cloudman-2012-02-26. I can't manage to start that, I get
> an error as below, with all types of instance, (tiny/small/medium/large). Is
> that a recommended AMI now ? It would be good to have a new updated AMI.
> Thanks !
> Greg Edwards,
> Port Jackson Bioinformatics
> The Galaxy User list should be used for the discussion of
> Galaxy analysis and other features on the public server
> at usegalaxy.org. Please keep all replies on the list by
> using "reply all" in your mail client. For discussion of
> local Galaxy instances and the Galaxy source code, please
> use the Galaxy Development list:
> To manage your subscriptions to this and other Galaxy lists,
> please use the interface at: