[galaxy-dev] Missing tool previews in tool shed, e.g. hmmer.xml

2012-03-19 Thread Peter Cock
Hello all,

I was looking at Edward's updated HMMER wrapper on the toolshed,
http://toolshed.g2.bx.psu.edu/

There is a preview offered for the (simple) hmmpress.xml only, giving
the impression that Edward's repository isn't very useful.

Why isn't anything shown for the more complex hmmer.xml? If there
is a problem rendering the preview, it would still be useful to list the
tool in the table with its description, version and requirements.

Edward - I have a query (which might be what the Tool Shed preview
is unhappy about): where is the hmmer file format referenced in
hmmer.xml defined? In hmmpress.xml you use hmm (as an input)
and hmmpressed (as an output), which are both defined in hmmer.py,
so is this an accidental inconsistency?

Thanks,

Peter

P.S. Edward, hmmpress.xml is missing hmmpress as a requirement.
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Same error in multiple tools

2012-03-19 Thread Alban Lermine
Hi,

I have the same error message for multiple tools (GFFtoBED,
BEDtoBIGBED, BAMtoSAM, ...):

from galaxy import eggs
ImportError: No module named galaxy

It has only been happening since the last upgrade; do you know what is going wrong?

Thanks,

Alban

-- 
Alban Lermine 
Unité 900 : Inserm - Mines ParisTech - Institut Curie
« Bioinformatics and Computational Systems Biology of Cancer »
11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France
Tel : +33 (0) 1 56 24 69 84

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Missing tool previews in tool shed, e.g. hmmer.xml

2012-03-19 Thread Greg Von Kuster
Hi Peter,

Here is the error produced by attempting to load the hmmer.xml tool config into 
the tool shed.  With regard to tool validity, the definition of a valid tool in 
the tool shed has always been restricted to the tool properly loading in a 
Galaxy instance.  If a tool is not valid, it will not be returned in a search 
and it cannot be automatically installed (unless it belongs to a repository 
containing other valid tools).  I'll consider ways to list invalid tools 
contained in repositories in the tool shed, but the preference is for tool 
developers to share only valid tools, or the value of the tool shed will be 
significantly diminished over time.  Filtering out files that actually are not 
tools from a list of invalid tools could become a bit messy.

Thanks,

Greg Von Kuster



Repository Actions
Metadata was defined for some items in revision '66f8262e1686'. Correct the 
following problems if necessary and reset metadata.
hmmer.xml - This file refers to a file named hmmdb.loc. Upload a file named 
hmmdb.loc.sample to the repository to correct this error.
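
For reference, a .loc.sample file is normally just a commented, tab-separated
table listing the reference data a tool can use. A hypothetical hmmdb.loc.sample
(the exact columns here are an assumption, since they depend on how hmmer.xml
reads the file) might look something like:

#This sample lists HMM databases available to the hmmer tools.
#Columns must be separated by real TAB characters:
#<unique_id>    <display_name>    <path_to_hmm_database>
#pfam           Pfam-A            /data/hmmer/Pfam-A.hmm

Uploading the commented .sample file is enough to clear the error; local admins
then copy it to hmmdb.loc and fill in their own paths.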


On Mar 19, 2012, at 6:15 AM, Peter Cock wrote:

 Hello all,
 
 I was looking at Edward's updated HMMER wrapper on the toolshed,
 http://toolshed.g2.bx.psu.edu/
 
 There is a preview offered for the (simple) hmmpress.xml only, giving
 the impression that Edward's repository isn't very useful.
 
 Why isn't anything shown for the more complex hmmer.xml? If there
 is a problem rendering the preview, it would still be useful to list the
 tool in the table with its description, version and requirements.
 
 Edward - I have a query (which might be what the Tool Shed preview
 is unhappy about), where is the hmmer file format referenced in
 hmmer.xml defined? In hmmpress.xml  you use hmm (as an input)
 and hmmpressed (as an output) which are both defined in hmmer.py,
 so is this an accidental inconsistency?
 
 Thanks,
 
 Peter
 
 P.S. Edward, hmmpress.xml is missing hmmpress as a requirement.
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Missing tool previews in tool shed, e.g. hmmer.xml

2012-03-19 Thread Peter Cock
On Mon, Mar 19, 2012 at 1:28 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi Peter,

 Here is the error produced by attempting to load the hmmer.xml tool config
 into the tool shed.  With regard to tool validity, the definition of a valid
 tool in the tool shed has always been restricted to the tool properly
 loading in a Galaxy instance.  If a tool is not valid, it will not be
 returned in a search and it cannot be automatically installed (unless it
 belongs to a repository containing other valid tools).  I'll consider ways
 to list invalid tools contained in repositories in the tool shed, but the
 preference is for tool developers to share only valid tools, or the value of
 the tool shed will be significantly diminished over time.  Filtering out
 files that actually are not tools from a list of invalid tools could become
 a bit messy.

I see - I hope you can make some improvements to browsing the tool
shed in this kind of situation.

Thanks for confirming there was something the Tool Shed didn't like
in hmmer.xml (a missing hmmdb.loc.sample file). For the tool
uploader (in this case Edward) that kind of error message is very helpful.

Peter

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Hard-links and sshfs

2012-03-19 Thread Daniel Sobral
Hello,

My instance of Galaxy is running on an sshfs mount.
The problem comes when I set the new_file_path in universe_wsgi.ini
to a folder within that sshfs mount.
When I upload something, the hard link in the mkstemp_ln function fails
because of apparent limitations of sshfs with hard links. It works if I
replace the hard link with a file copy, but I would appreciate an
alternative that does not require modifying the internals of the code.
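
A rough sketch of the kind of link-or-copy fallback I mean (just an
illustration, not the stock mkstemp_ln implementation):

import os, shutil, tempfile

def link_or_copy(src, prefix, dir):
    # Create a unique temporary name, then try to hard-link src to it;
    # fall back to copying on filesystems (such as sshfs) where hard
    # links are not supported.
    fd, path = tempfile.mkstemp(prefix=prefix, dir=dir)
    os.close(fd)
    os.unlink(path)   # free the name so os.link can reuse it (small race window)
    try:
        os.link(src, path)
    except OSError:
        shutil.copy(src, path)
    return path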

Thanks,
Daniel
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Possible bug: Tags in a workflow gets duplicated hundreds of times

2012-03-19 Thread Anthonius deBoer
Hi,

I have run into an issue with a tag being duplicated hundreds of times each
time I open, edit or clone a workflow. I have tried to remove them manually,
but each time the tag gets re-inserted somehow and duplicated again whenever
I edit the workflow. By now there are hundreds of copies of the tags and it
makes loading and running the workflow very slow.

Is there a way to remove a tag from the system somehow? I am willing to dig
into the database with some SQL statements, but I am not sure where to start...

Thanks

Thon
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Trackster cannot display INTERVAL or GATK-INTERVAL files???

2012-03-19 Thread Anthonius deBoer
It seems that Trackster does not know how to display INTERVAL files? Is that
true? Surely there is an easy way to support those kinds of simple files
without having to convert them? Am I missing something?

I also don't seem to be able to convert an interval file into a BED file?

Thanks,

Thon
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Trackster cannot display INTERVAL or GATK-INTERVAL files???

2012-03-19 Thread Anthonius deBoer
Oh wait... it's in the info section... Weird place for a conversion tool, but
it is there :)

On Mar 19, 2012, at 12:02 PM, Anthonius deBoer thondeb...@me.com wrote:

 It seems that trackster does not know how to display INTERVAL files? Is that
 true? Surely there is an easy way to support those kinds of simple files
 without having to convert them? Am I missing something? I also don't seem to
 be able to convert an interval file into a BED file?

 Thanks,
 Thon
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Trackster cannot display INTERVAL or GATK-INTERVAL files???

2012-03-19 Thread Jeremy Goecks
Thon,

 It seems that trackster does not know how to display INTERVAL files?
 Is that true?

Yes, this is currently a limitation. However, most of the work necessary to
make this happen is already done, so it should be available soon, definitely
within the next month or two.

 Surely there is an easy way to support those kinds of simple files without 
 having to convert them?

As you discovered, conversions can be done by clicking on the pencil icon and 
then performing the conversion. We're aware that isn't ideal for usability and 
plan to improve it in the future.

J.
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Help writing a tool

2012-03-19 Thread Mark Johnson

  
  
I'm writing some tools to integrate NCBI data resources with Galaxy. I have
two questions.
  
The first is simple. I want to write a tool for a long-running process
that is handled by some other scheduler, and that produces its own job
ids. Some web services, like BLAST, for example, receive a request and
take a while to complete processing. The job id can be used to fetch
either job status or results from the server, depending on whether it
has completed. How do you make a Galaxy tool that polls the server, and
produces an output set only when the process is complete?
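
One pattern (just a sketch of the idea, not an official Galaxy mechanism; the
endpoint URLs and argument handling below are made up) is to let the tool's
wrapper script do the submit/poll/fetch cycle itself, so that Galaxy simply
sees one long-running job that writes its output when the remote work is done:

import sys
import time
import urllib2   # Python 2, matching Galaxy at the time

SERVICE = "http://example.org/api"   # hypothetical remote scheduler

def submit(query_path):
    # POST the query and get back a job id.
    return urllib2.urlopen(SERVICE + "/submit", open(query_path).read()).read().strip()

def finished(job_id):
    return urllib2.urlopen("%s/status/%s" % (SERVICE, job_id)).read().strip() == "FINISHED"

if __name__ == "__main__":
    query, output = sys.argv[1], sys.argv[2]   # filled in by the tool's command line
    job_id = submit(query)
    while not finished(job_id):
        time.sleep(60)                          # poll once a minute
    open(output, "w").write(urllib2.urlopen("%s/results/%s" % (SERVICE, job_id)).read())

The drawback is that the wrapper occupies a Galaxy job slot while it polls;
whether that is acceptable depends on how many such jobs run at once.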
  
The second question is: besides this mailing list and the Galaxy wiki,
is there a good online video or text resource that explains the Galaxy
architecture and how to use it? The docs are good as far as they go,
but most of what's in the command scripts in the tool files isn't
documented.
  
Thanks

Mark Johnson
Staff Scientist, NCBI
mjohn...@ncbi.nlm.nih.gov
  

  

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Reducing costs in Cloud Galaxy

2012-03-19 Thread Enis Afgan
Greg,
Regarding the performance of different types of instances, I came across
this and thought you might potentially find it useful:
http://cloudharmony.com/benchmarks

Enis

On Mon, Mar 19, 2012 at 7:49 PM, Greg Edwards gedwar...@gmail.com wrote:

 Enis,

 Thanks. Will try that re the storage.

 Greg E


 On Mon, Mar 19, 2012 at 4:49 PM, Enis Afgan eaf...@emory.edu wrote:

 Hi Greg,

 On Mon, Mar 19, 2012 at 11:01 AM, Greg Edwards gedwar...@gmail.com wrote:

 Hi,

 I've got an implementation of some proteomics tools going well in Galaxy
 on AWS EC2 under Cloudman. Thanks for the help along the way.

 I need to drive the costs down a bit. I'm using an m1.large AMI and it's
 costing about $180 - $200 / month. This is about 55% storage and 45%
 instance costs. That's peanuts in some senses but for now we need to get it
 down so that it comes out of petty cash for the department, while the case
 is proven for its use.

 I have a few questions and would appreciate any insights ..


 1. AWS has just released m1.medium and m1.small instance types, which
 are 1/2 and 1/4 the cost of m1.large.

 http://aws.amazon.com/ec2/instance-types/
 http://aws.amazon.com/ec2/pricing/

 I tried the m1.small and m1.medium with the latest Cloudman AMI,
 galaxy-cloudman-2011-03-22 (ami-da58aab3).
 All seemed to install ok, but the Tools took up to 30 minutes to start
 execution on m1.medium, and never started on m1.small.

 m1.medium only added about 15% to run times compared with m1.large,
 can't say for m1.small. t1.micro does run (and for free in my Free Tier
 first year) but blows execution times out by a factor of about 3 which is
 too much.

 Has anyone tried these new Instance Types ? (m1.small/medium)

 I have no real experience with these instance types yet either so maybe
 someone else can chime in on this?



 2. The vast majority of the storage costs are for the genome databases
 in the 700GB /mnt/galaxyIndices, which I don't need. Can this be reduced to
 the bare essentials?


 You can do this manually:
 1. Start a new Galaxy cluster (ie, one you can easily delete later)
 2. ssh into the master instance and delete whatever genomes you don't
 need/want (these are all located under /mnt/galaxyIndices)
 3. Create a new EBS volume of size that'll fit whatever's left on the
 original volume, attach it and mount it
 4. Copy over the data from the original volume to the new one while
 keeping the directory structure the same (rsync is probably the best tool
 for this)
 5. Unmount and detach the new volume; create a snapshot from it
 6. For the cluster you want to keep around (while it is terminated), edit
 persistent_data.yaml in its bucket on S3 and replace the existing snap ID
 for the galaxyIndices with the snapshot ID you got in the previous step
 7. Start that cluster and you should have a file system from the new
 snapshot mounted.
 8. Terminate and delete the cluster you created in step 1

 If you don't want to have to do this the first time around on your custom
 cluster, you can first try it with another temporary cluster and make sure
 it all works as expected and then move on to the real cluster.
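
 For what it's worth, steps 3, 5 and 6 could also be scripted with boto rather
 than done through the AWS console; a rough sketch (the region, volume size,
 instance ID and device name are placeholders, and default AWS credentials are
 assumed):

 import boto.ec2

 conn = boto.ec2.connect_to_region("us-east-1")        # placeholder region
 vol = conn.create_volume(100, "us-east-1a")           # new, smaller volume (step 3)
 conn.attach_volume(vol.id, "i-xxxxxxxx", "/dev/sdg")  # attach to the master instance
 # ... mount it, rsync /mnt/galaxyIndices across, then unmount (step 4) ...
 conn.detach_volume(vol.id)                            # step 5
 snap = conn.create_snapshot(vol.id, "Trimmed galaxyIndices")
 print snap.id   # this snapshot ID goes into persistent_data.yaml (step 6)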

 Best,
 Enis


 Using m1.small/medium and getting rid of the 700GB would bring my costs
 down to say $50 / month which is ok.


 Thanks !
 Greg E


 --
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com


 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/





 --
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Reducing costs in Cloud Galaxy

2012-03-19 Thread Dave Clements
Hi Enis, Greg,

I've taken material from this email thread and previous conversations with Enis
and put it in the wiki:

  http://wiki.g2.bx.psu.edu/Admin/Cloud/CapacityPlanning

Please feel free to update/correct/enhance.

Dave C.

On Mon, Mar 19, 2012 at 2:58 PM, Enis Afgan eaf...@emory.edu wrote:

 Greg,
 Regarding the performance of different types of instances, I came across
 this and thought you might potentially find it useful:
 http://cloudharmony.com/benchmarks

 Enis

 On Mon, Mar 19, 2012 at 7:49 PM, Greg Edwards gedwar...@gmail.com wrote:

 Enis,

 Thanks. Will try that re the storage.

 Greg E


 On Mon, Mar 19, 2012 at 4:49 PM, Enis Afgan eaf...@emory.edu wrote:

 Hi Greg,

 On Mon, Mar 19, 2012 at 11:01 AM, Greg Edwards gedwar...@gmail.comwrote:

 Hi,

 I've got an implementation of some proteomics tools going well in
 Galaxy on AWS EC2 under Cloudman. Thanks for the help along the way.

 I need to drive the costs down a bit. I'm using an m1.large AMI and
 it's costing about $180 - $200 / month. This is about 55% storage and 45%
 instance costs. That's peanuts in some senses but for now we need to get it
 down so that it comes out of petty cash for the department, while the case
 is proven for it's use.

 I have a few questions and would appreciate ny insights ..


 1. AWS has just released an m1.medium and m1.small instance type, which
 are 1/2 and 1/4 the cost of m1.large.

 http://aws.amazon.com/ec2/instance-types/
 http://aws.amazon.com/ec2/pricing/

 I tried the m1.small and m1.medium with the latest Cloudman AMI *  
 *galaxy-cloudman-2011-03-22
 (ami-da58aab3)
 All seemed to install ok, but the Tools took up tp 30 minutes to start
 execution on m1.medium, and never started on m1.small.

 m1.medium only added about 15% to run times compared with m1.large,
 can't say for m1.small. t1.micro does run (and for free in my Free Tier
 first year) but blows execution times out by a factor of about 3 which is
 too much.

 Has anyone tried these new Instance Types ? (m1.small/medium)

 I have no real experience with these instance types yet either so maybe
 someone else can chime in on this?



 2. The vast majority of the storage costs are fro the Gemome databases
 in the 700GB /mnt/galaxyIndices, which I don't need. Can this be reduced to
 the bare essentials ?


 You can do this manually:
 1. Start a new Galaxy cluster (ie, one you can easily delete later)
 2. ssh into the master instance and delete whatever genomes you don't
 need/want (these are all located under /mnt/galaxyIndices)
 3. Create a new EBS volume of size that'll fit whatever's left on the
 original volume, attach it and mount it
 4. Copy over the data from the original volume to the new one while
 keeping the directory structure the same (rsync is probably the best tool
 for this)
 5. Unmount  detach the new volume; create a snapshot from it
 6. For the cluster you want to keep around (while it is terminated),
 edit persistent_data.yaml in it's bucket on S3 and replace the existing
 snap ID for the galaxyIndices with the snapshot ID you got in the previous
 step
 7. Start that cluster and you should have a file system from the new
 snapshot mounted.
 8. Terminate  delete the cluster you created in step 1

 If you don't want to have to do this the first time around on your
 custom cluster, you can first try it with another temporary cluster and
 make sure it all works as expected and then move on to the real cluster.

 Best,
 Enis


 Using m1.small/medium and getting rid of the 700GB would being my costs
 down to say $50 / month which is ok.


 Thanks !
 Greg E


 --
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com


 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/





 --
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com



 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/




-- 
http://galaxyproject.org/GCC2012 http://galaxyproject.org/wiki/GCC2012
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://galaxyproject.org/wiki/
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Reducing costs in Cloud Galaxy

2012-03-19 Thread Dannon Baker
Just one extra thought on this-- If you leave your instance up all the time it 
may be worth looking into having a reserved micro instance up as the front end 
(cheap, or free, with your intro tier) with SGE submission disabled.  Then, 
enable autoscaling(max 1) of m1.large/xlarge instances.

-Dannon


On Mar 19, 2012, at 7:20 PM, Dave Clements wrote:

 Hi Enis, Greg,
 
 I've taken stuff from my this email, and previous conversations with Enis and 
 put it in the wiki:
 
   http://wiki.g2.bx.psu.edu/Admin/Cloud/CapacityPlanning
 
 Please feel free to update/correct/enhance.
 
 Dave C.
 
 On Mon, Mar 19, 2012 at 2:58 PM, Enis Afgan eaf...@emory.edu wrote:
 Greg,
 Regarding the performance of different types of instances, I came across this 
 and thought you might potentially find it useful: 
 http://cloudharmony.com/benchmarks
 
 Enis
 
 On Mon, Mar 19, 2012 at 7:49 PM, Greg Edwards gedwar...@gmail.com wrote:
 Enis,
 
 Thanks. Will try that re the storage.
 
 Greg E
 
 
 On Mon, Mar 19, 2012 at 4:49 PM, Enis Afgan eaf...@emory.edu wrote:
 Hi Greg,
 
 On Mon, Mar 19, 2012 at 11:01 AM, Greg Edwards gedwar...@gmail.com wrote:
 Hi,
 
 I've got an implementation of some proteomics tools going well in Galaxy on 
 AWS EC2 under Cloudman. Thanks for the help along the way.
 
 I need to drive the costs down a bit. I'm using an m1.large AMI and it's 
 costing about $180 - $200 / month. This is about 55% storage and 45% instance 
 costs. That's peanuts in some senses but for now we need to get it down so 
 that it comes out of petty cash for the department, while the case is proven 
 for it's use.
 
 I have a few questions and would appreciate ny insights ..
 
 
 1. AWS has just released an m1.medium and m1.small instance type, which are 
 1/2 and 1/4 the cost of m1.large.   
 
 http://aws.amazon.com/ec2/instance-types/ 
 http://aws.amazon.com/ec2/pricing/
 
 I tried the m1.small and m1.medium with the latest Cloudman AMI   
 galaxy-cloudman-2011-03-22 (ami-da58aab3)
 All seemed to install ok, but the Tools took up tp 30 minutes to start 
 execution on m1.medium, and never started on m1.small.
 
 m1.medium only added about 15% to run times compared with m1.large, can't say 
 for m1.small. t1.micro does run (and for free in my Free Tier first year) but 
 blows execution times out by a factor of about 3 which is too much.
 
 Has anyone tried these new Instance Types ? (m1.small/medium)
 I have no real experience with these instance types yet either so maybe 
 someone else can chime in on this?  
 
 
 2. The vast majority of the storage costs are fro the Gemome databases in the 
 700GB /mnt/galaxyIndices, which I don't need. Can this be reduced to the bare 
 essentials ?
 
 You can do this manually: 
 1. Start a new Galaxy cluster (ie, one you can easily delete later)
 2. ssh into the master instance and delete whatever genomes you don't 
 need/want (these are all located under /mnt/galaxyIndices)
 3. Create a new EBS volume of size that'll fit whatever's left on the 
 original volume, attach it and mount it
 4. Copy over the data from the original volume to the new one while keeping 
 the directory structure the same (rsync is probably the best tool for this)
 5. Unmount  detach the new volume; create a snapshot from it
 6. For the cluster you want to keep around (while it is terminated), edit 
 persistent_data.yaml in it's bucket on S3 and replace the existing snap ID 
 for the galaxyIndices with the snapshot ID you got in the previous step
 7. Start that cluster and you should have a file system from the new snapshot 
 mounted.
 8. Terminate  delete the cluster you created in step 1
 
 If you don't want to have to do this the first time around on your custom 
 cluster, you can first try it with another temporary cluster and make sure it 
 all works as expected and then move on to the real cluster.
 
 Best,
 Enis
 
 Using m1.small/medium and getting rid of the 700GB would being my costs down 
 to say $50 / month which is ok.
 
 
 Thanks !
 Greg E
 
 
 -- 
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com
 
 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/
 
 
 
 
 -- 
 Greg Edwards,
 Port Jackson Bioinformatics
 gedwar...@gmail.com
 
 
 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/
 
 
 
 -- 
 http://galaxyproject.org/GCC2012
 http://galaxyproject.org/
 http://getgalaxy.org/
 http://usegalaxy.org/
 http://galaxyproject.org/wiki/
 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your 

[galaxy-dev] Getting the input name of the dataset

2012-03-19 Thread diana michelle magbanua
Hi there,

I am new to Galaxy and I've just recently learned how to integrate a Perl
script to it. Now, my code uses the input file's name as a header for a
column in the output. When I ran it in Galaxy, I did get the filename, but
it's the one ending in .dat (actually, I got the entire path of the file).
I was wondering if it's possible to retain the original name of the file
(or retrieve the name of the input dataset) and use it in the output file.
I can't think of a Perl script for this yet, for my scripting's a bit rusty
(I just started learning Perl last month). I've already checked the FAQs
page, the wiki and the mailing list, but I did not get any useful hints.

I hope my writing made sense. Thank you for your time!

- Diana
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Import error when running galaxy on Debian Lenny 64bit

2012-03-19 Thread Jemma Wu
Dear galaxy,



I'm trying to install Galaxy on a 64bit Debian (5.0) Lenny server with
python 2.5.2. When running sh run.sh, the fetch_eggs step completed
successfully. However, I have a problem running the Galaxy instance. I've
posted the error messages below.





Traceback (most recent call last):
  File "./scripts/paster.py", line 34, in <module>
    command.run()
  File "/home/galaxy/galaxy-dist/eggs/PasteScript-1.7.3-py2.5.egg/paste/script/command.py", line 84, in run
    invoke(command, command_name, options, args[1:])
  File "/home/galaxy/galaxy-dist/eggs/PasteScript-1.7.3-py2.5.egg/paste/script/command.py", line 123, in invoke
    exit_code = runner.run(args)
  File "/home/galaxy/galaxy-dist/eggs/PasteScript-1.7.3-py2.5.egg/paste/script/command.py", line 218, in run
    result = self.command()
  File "/home/galaxy/galaxy-dist/eggs/PasteScript-1.7.3-py2.5.egg/paste/script/serve.py", line 276, in command
    relative_to=base, global_conf=vars)
  File "/home/galaxy/galaxy-dist/eggs/PasteScript-1.7.3-py2.5.egg/paste/script/serve.py", line 313, in loadapp
    **kw)
  File "/home/galaxy/galaxy-dist/eggs/PasteDeploy-1.3.3-py2.5.egg/paste/deploy/loadwsgi.py", line 204, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/home/galaxy/galaxy-dist/eggs/PasteDeploy-1.3.3-py2.5.egg/paste/deploy/loadwsgi.py", line 225, in loadobj
    return context.create()
  File "/home/galaxy/galaxy-dist/eggs/PasteDeploy-1.3.3-py2.5.egg/paste/deploy/loadwsgi.py", line 625, in create
    return self.object_type.invoke(self)
  File "/home/galaxy/galaxy-dist/eggs/PasteDeploy-1.3.3-py2.5.egg/paste/deploy/loadwsgi.py", line 110, in invoke
    return fix_call(context.object, context.global_conf, **context.local_conf)
  File "/home/galaxy/galaxy-dist/eggs/PasteDeploy-1.3.3-py2.5.egg/paste/deploy/util/fixtypeerror.py", line 57, in fix_call
    val = callable(*args, **kw)
  File "/home/galaxy/galaxy-dist/lib/galaxy/web/buildapp.py", line 90, in app_factory
    add_ui_controllers( webapp, app )
  File "/home/galaxy/galaxy-dist/lib/galaxy/web/buildapp.py", line 39, in add_ui_controllers
    module = __import__( module_name )
  File "/home/galaxy/galaxy-dist/lib/galaxy/web/controllers/cloud.py", line 9, in <module>
    import boto
ImportError: No module named boto

We have another Debian (6.0) Squeeze 32 bit server with python 2.6.6,
and I did the same installation of Galaxy. Everything went smoothly and
the Galaxy server instance runs successfully on it.



Do you know why this ImportError happens on our 64bit Debian 5.0 server
and how I could fix it?



Many thanks.



Jemma Wu



Australian Proteome Analysis Facility (APAF)

Level 4, Building F7B, Research Park Drive

Macquarie University  Sydney  NSW  2109  Australia

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/