> On Fri, Nov 18, 2011 at 12:56 AM, Jeremy Goecks <jeremy.goe...@emory.edu> wrote:
>> Scalability issues are more likely to arise on the back end than the front
>> end, so you'll want to ensure that you have enough compute nodes. BWA uses
>> four nodes by default--Enis, does the cloud config change this
>> parameter?--so you'll want 4x50 or 200 total nodes if you want everyone to
>> be able to run a BWA job simultaneously.
> Actually, one other question - this paragraph makes me realise that I
> don't really understand how Galaxy is distributing jobs. I had thought
> that each job would only use one node, and in some cases take
> advantage of multiple cores within that node. I'm taking a "node" to
> be a set of cores with their own shared memory, so in this case a VM
> instance, is this right? If some types of jobs can be distributed over
> multiple nodes, can I configure, in Galaxy, how many nodes they should use?
You're right -- my word choices were poor. Replace 'node' with 'core' in my
paragraph to get an accurate suggestion for resources.
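With that correction, the capacity arithmetic from my earlier paragraph can be sketched as follows (the 4-core BWA default and the 50-user count come from this thread; substitute your own figures):

```python
# Rough capacity estimate: total cores needed so every user can run
# a BWA job at the same time. Figures are from this thread and are
# assumptions for your own deployment.
cores_per_bwa_job = 4   # BWA's default core count per job
concurrent_users = 50   # number of users expected to run jobs at once

total_cores = cores_per_bwa_job * concurrent_users
print(total_cores)  # 200
```

Note this counts cores, not VM instances: on 8-core instances, 200 cores works out to 25 instances.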
Galaxy uses a job scheduler--SGE on the cloud--to distribute jobs to different
cluster nodes. Jobs that require multiple cores typically run on a single node.
Enis can chime in on whether CloudMan supports job submission over multiple
nodes; this would require setup of an appropriate parallel environment and a
tool that can make use of this environment.
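To illustrate the distinction, here is a minimal sketch of how such jobs look at the SGE level. The parallel environment names ('smp', 'mpi') and script names are illustrative assumptions, not something CloudMan is confirmed to configure; check what actually exists on your cluster with `qconf -spl`:

```shell
# Single-node, multi-core job: request 4 slots in an 'smp'-style
# parallel environment, where all slots land on one host. This is
# the common case for multi-threaded tools like BWA.
qsub -pe smp 4 -cwd run_bwa.sh

# Multi-node job: an 'mpi'-style parallel environment can spread
# slots across hosts, but the tool itself must be written to use
# a distributed runtime (e.g. MPI) -- most Galaxy tools are not.
qsub -pe mpi 16 -cwd run_mpi_tool.sh
```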
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org. Please keep all replies on the list by
using "reply all" in your mail client. For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:
To manage your subscriptions to this and other Galaxy lists,
please use the interface at: