On Fri, Nov 18, 2011 at 12:56 AM, Jeremy Goecks <jeremy.goe...@emory.edu> wrote:

> Scalability issues are more likely to arise on the back end than the front 
> end, so you'll want to ensure that you have enough compute nodes. BWA uses 
> four nodes by default--Enis, does the cloud config change this parameter?--so 
> you'll want 4x50 or 200 total nodes if you want everyone to be able to run a 
> BWA job simultaneously.
>

Actually, one other question: this paragraph makes me realise that I
don't really understand how Galaxy distributes jobs. I had thought
that each job would run on a single node, in some cases using
multiple cores within that node. I'm taking a "node" to be a set of
cores with their own shared memory, so in this case a VM instance;
is that right? If some types of jobs can be spread across multiple
nodes, can I configure in Galaxy how many nodes they should use?
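
For context, my current guess (which may well be wrong) is that this
is controlled by the job runner settings in universe_wsgi.ini, along
the lines of the sketch below. The section names, the bwa_wrapper
tool id, and the PBS runner URL syntax are assumptions on my part
taken from the wiki, not something I have verified on the cloud
image:

  # universe_wsgi.ini (a sketch; option names unverified by me)
  [app:main]
  # default: dispatch each job to the cluster via the PBS runner
  default_cluster_job_runner = pbs:///

  [galaxy:tool_runners]
  # hypothetical per-tool override for the BWA wrapper:
  # ask PBS for one node with four cores on that node
  bwa_wrapper = pbs:///-l nodes=1:ppn=4/

If that is roughly how it works, does the "4x" in the 4x50 figure
above refer to cores on a single node rather than separate
instances? I'd welcome a correction if I've misread this.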

Thanks again,
Clare

-- 
E: s...@unimelb.edu.au
P: 03 903 53357
M: 0414 854 759

