Re: [galaxy-dev] Installation issue on EC2

2012-12-11 Thread Fabiano Lucchese
Hi, Dannon.

Thanks for the tip. No, I was not reusing the cluster names, precisely to 
avoid data from previous deployments messing up my fresh ones. There were indeed 
about 8 buckets referring to clusters that don't exist anymore. Right now I 
can't remove my current cluster via the web interface because it never comes up.
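
For what it's worth, the manual cleanup those orphaned buckets would need looks 
something like the sketch below (assumptions: boto3, which post-dates this 
thread; a placeholder bucket name; and that the cluster is definitely gone, 
since the deletion is irreversible):

import boto3

# Placeholder name for one of the orphaned CloudMan cluster buckets;
# substitute the real bucket name from the S3 console.
STALE_BUCKET = "cm-0123456789abcdef0123456789abcdef"

s3 = boto3.resource("s3")
bucket = s3.Bucket(STALE_BUCKET)

# S3 requires a bucket to be empty before it can be deleted, so remove
# every object first, then the bucket itself. This permanently discards
# the cluster's saved configuration.
bucket.objects.all().delete()
bucket.delete()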

What would be the safe way to allow you to see my instance?

Cheers,

F.

 

-----Original Message-----
From: Dannon Baker [mailto:dannonba...@me.com] 
Sent: Monday, December 10, 2012 3:36 PM
To: Fabiano Lucchese
Cc: Brad Chapman; galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Installation issue on EC2

On Dec 10, 2012, at 6:17 PM, Fabiano Lucchese <fabiano.lucch...@hds.com> wrote:

> I appreciate your effort to help me, but it looks like my AWS account
> has some serious hidden issues going on. I completely wiped out
> CloudMan/Galaxy instances from my EC2 environment as well as their volumes,
> and waited a couple of hours for the instances to disappear from the
> instances list. After that, I repeated the whole process twice, trying to
> create a Galaxy cluster with 10 and then 15 GB of storage space, but the
> result was equally frustrating with some minor differences.
>
> PS: Dannon, one thing that intrigues me is how the web form manages to find
> out the names of the previous clusters that I tried to instantiate. Where
> does it get this information from if all the respective instances have been
> terminated and wiped out?

Did you reuse the same cluster name for either of these?  This would explain 
conflicting settings -- there's more to a cluster than just the running 
instances and persistent volumes.

That form retrieves those listings from the S3 buckets in your account.  Each 
cluster has its own S3 bucket -- you can identify them by the 
yourCluster.clusterName files in the listing.  These buckets contain lots of 
information about your Galaxy clusters (references to volumes, universe 
settings, etc.), so if you're trying to eliminate a cluster completely 
(you never want to restart it and don't want *anything* preserved), you should 
delete the buckets referring to it as well.  When you ask CloudMan to 
terminate and remove a cluster permanently, it removes all of this for you; 
I'd recommend always using the interface for this rather than doing it manually.
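
If you want to see exactly what that form sees, something like the sketch below 
would enumerate the candidate buckets (a rough sketch using boto3, which is 
newer than the tooling from this thread; it only lists, never deletes, and it 
keys off the ".clusterName" marker files mentioned above):

import boto3

s3 = boto3.client("s3")

# Walk every bucket in the account and flag the ones that contain a
# "<something>.clusterName" marker file, which is how CloudMan cluster
# buckets can be recognized. (Pagination is omitted for brevity; buckets
# with more than 1000 keys would need a paginator.)
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    listing = s3.list_objects_v2(Bucket=name)
    for obj in listing.get("Contents", []):
        if obj["Key"].endswith(".clusterName"):
            print("%s looks like a cluster bucket (%s)" % (name, obj["Key"]))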

Definitely let me know what else I can do to help.  If you have a running 
instance you'd like for me to look at directly I'd be happy to do so -- maybe 
this is indeed some weird issue that we can work around better.



Re: [galaxy-dev] Installation issue on EC2

2012-12-10 Thread Fabiano Lucchese
Guys,

I appreciate your effort to help me, but it looks like my AWS account 
has some serious hidden issues going on. I completely wiped out CloudMan/Galaxy 
instances from my EC2 environment as well as their volumes, and waited a couple 
of hours for the instances to disappear from the instances list. After that, I 
repeated the whole process twice, trying to create a Galaxy cluster with 10 and 
then 15 GB of storage space, but the result was equally frustrating with some 
minor differences.

In the less unsuccessful scenario, the Galaxy service was the only one 
down, apparently for the following reason:

Traceback (most recent call last):
  File "/mnt/galaxyTools/galaxy-central/lib/galaxy/webapps/galaxy/buildapp.py", line 36, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/mnt/galaxyTools/galaxy-central/lib/galaxy/app.py", line 85, in __init__
    from_shed_config=True )
  File "/mnt/galaxyTools/galaxy-central/lib/galaxy/tools/data/__init__.py", line 41, in load_from_config_file
    tree = util.parse_xml( config_filename )
  File "/mnt/galaxyTools/galaxy-central/lib/galaxy/util/__init__.py", line 143, in parse_xml
    tree = ElementTree.parse(fname)
  File "/mnt/galaxyTools/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py", line 859, in parse
    tree.parse(source, parser)
  File "/mnt/galaxyTools/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py", line 576, in parse
    source = open(source, "rb")
IOError: [Errno 2] No such file or directory: './shed_tool_data_table_conf.xml'
Removing PID file paster.pid
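
In case it helps anyone hitting the same error: assuming the file is simply 
missing (rather than pointed at by a wrong path in the config), a minimal stub 
with the empty <tables> root that Galaxy's tool data table configs use should 
let the parse succeed. A sketch, with the path taken from the traceback above:

import os

# Galaxy's working directory, inferred from the traceback paths above.
GALAXY_ROOT = "/mnt/galaxyTools/galaxy-central"
conf = os.path.join(GALAXY_ROOT, "shed_tool_data_table_conf.xml")

# Write an empty tool data table config only if it is actually absent.
if not os.path.exists(conf):
    with open(conf, "w") as f:
        f.write('<?xml version="1.0"?>\n<tables>\n</tables>\n')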

I restarted it a few times, rebooted the machine and even tried to 
update it, but nothing could magically fix the problem. I'm giving up on the 
high-level approach and starting from a fresh installation of Galaxy on one of 
my instances. It's going to be less productive, but at least I'll have some 
control over what's going on and can try to diagnose problems as they occur.

Cheers,

F.

PS: Dannon, one thing that intrigues me is how the web form manages to find out 
the names of the previous clusters that I tried to instantiate. Where does it 
get this information from if all the respective instances have been terminated 
and wiped out?

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/