Hello John,

This is totally excellent!
This gave me a great idea for my little torque installation.

I hope you don't mind, but I have an unrelated issue with torque that I do not 
seem to grasp.
My issue is that it takes a ridiculous amount of time to upload a 2 GB file.
I have tried many different Apache configurations to upload that 2 GB file, but 
Galaxy keeps showing that arrow going up constantly (over 12 hours).
And this is an upload from a file located on the server running galaxy.

Here is some information about my installation.
1. Server: redhat6, with galaxy behind an Apache HTTP Server proxy, as described 
on your website.
2. I have Apache HTTP Server running as the galaxy user with:
        a. Serving Galaxy at a subdirectory
        b. Compression and caching
        c. Sending files using Apache

Attached you will find my Apache galaxy, deflate, expires, and xsendfile 
configuration files.
Furthermore, I've changed the following in galaxy.ini:
        apache_xsendfile = True
        upstream_gzip = False
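
To give a concrete picture, the xsendfile part of my Apache configuration looks 
roughly like this (the path and the timeout/body-size values below are 
placeholders, not my exact settings):

        XSendFile On
        XSendFilePath /path/to/galaxy/database/files

        # Large uploads can also hit proxy timeouts and request body limits;
        # these two values are illustrative only.
        ProxyTimeout 600
        LimitRequestBody 0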

Any suggestions?
Any information I missed?

Cordialement / Regards, 
Edgar Fernandez

-----Original Message-----
From: John Chilton [mailto:jmchil...@gmail.com] 
Sent: January-21-15 9:34 PM
To: Fernandez Edgar
Cc: galaxy-...@bx.psu.edu
Subject: Re: [galaxy-dev] galaxy and torque - resource allocation

Was hoping someone with an actual torque cluster would respond. I think the way 
you would configure this might be, for instance, setting native_specification 
to something like "-l nodes=1:ppn=n". Depending on how things are configured - 
or simply because I am ignorant - this may be wrong - but I think what you 
really want should look a lot like your arguments to qsub. So let's say you 
have defined a job runner plugin called pbs_drmaa at the top of your 
job_conf.xml file. Then you could default everything to a single core and send 
"big" jobs to a destination with 8 cores, with destinations and tool sections 
that look something like this...


  <destinations default="singlecore">
        <destination id="singlecore" runner="pbs_drmaa">
          <param id="native_specification">-l nodes=1:ppn=1</param>
        </destination>
        <destination id="multicore" runner="pbs_drmaa">
          <param id="native_specification">-l nodes=1:ppn=8</param>
        </destination>
  </destinations>

and ...

  <tools>
    <tool id="bowtie2" destination="multicore" />
    <tool id="bowtie" destination="multicore" />
    <tool id="deeptools" destination="multicore" />
    <tool id="cufflinks" destination="multicore" />
    <tool id="cuffdiff" destination="multicore" />
    <tool id="cuffmerge" destination="multicore" />
    <tool id="tophat2" destination="multicore" />
  </tools>

Again - that native specification could be wrong - probably best to test it out 
locally (or maybe Nate or JJ will step in and correct me).

Hope this helps,

On Mon, Jan 19, 2015 at 8:05 AM, Fernandez Edgar <edgar.fernan...@umontreal.ca> 
wrote:
> Hello gents,
> Once again I would like to convey my most sincere appreciation for your 
> help!!!
> And yes I would like to hear your elaboration on my DRMAA runner which is 
> what I'm using.
> So my installation looks like: galaxy --> pbs_drmaa --> torque
> Cordialement / Regards,
> Edgar Fernandez
> -----Original Message-----
> From: John Chilton [mailto:jmchil...@gmail.com] Sent: January-18-15 
> 8:59 PM To: Fernandez Edgar Cc: galaxy-...@bx.psu.edu Subject: Re: 
> [galaxy-dev] galaxy and torque - resource allocation
> Galaxy generally defers to the DRM (torque in your case) for dealing with 
> these things. In your job_conf.xml you can specify limits for memory or CPUs 
> and Galaxy will pass these along to the DRM at which point it is up to the 
> DRM to enforce these - details depend on if you are using the PBS runner or 
> the DRMAA runner (let me know which and I can try to elaborate if that would 
> be useful).
> In your particular case - I don't believe torque "schedules" RAM so 
> things will generally only be... throttled... by CPU counts. This is 
> what I was told by the admins at MSI when I worked there anyway. If 
> you want to place hard limits on RAM I think you need to "upgrade" to 
> the MOAB scheduler or switch over to a different DRM entirely like 
> SLURM. Even for DRMs that deal more directly with memory - Galaxy 
> doesn't provide a general mechanism for passing this along to the tool
> (https://trello.com/c/3RkTDnIn) - so it would be up to the tool to interact 
> with the environment variables your DRM sets.
> This sounds really terrible in the abstract - but in reality it usually isn't 
> an issue - most of Galaxy's multi-core mappers, say, have relatively low memory 
> usage per CPU - and for things like assemblers, where this is more important, 
> one can usually just assign them to their own CPU or something like that to 
> ensure they get all the memory available.
> Unlike memory - Galaxy will attempt to pass the number of slots allocated to 
> a job to the tools by setting the GALAXY_SLOTS environment variable. All of 
> the multi-core devteam Galaxy tools at this point use this and so should 
> work - as should probably most multi-core tools from the tool shed - at least 
> the most popular ones.
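> For example, a tool wrapper's command line can pick this up with a shell-style 
> fallback - something like the following (the bowtie2 arguments here are just 
> an illustration):
>
>     <command>bowtie2 -p \${GALAXY_SLOTS:-4} -x genome -U input.fq</command>
>
> where ":-4" means the tool falls back to 4 threads if the variable is unset.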
> Hope this helps,
> -John
> On Tue, Jan 13, 2015 at 11:52 AM, Fernandez Edgar 
> <edgar.fernan...@umontreal.ca> wrote:
>> Hello gents,
>> Hope everyone had a great holiday break!
>> Wish you guys all the best for 2015!
>> I have a couple of questions about how resources (CPU and memory) are 
>> allocated in a galaxy and torque installation.
>> So I’ve set up torque with some default and maximum CPU and memory 
>> allocations.
>> However, I have some worries when it comes to applications (like 
>> tophat, for example).
>> By default, it takes half the CPUs of a server unless specified otherwise.
>> How is CPU allocation specified to applications like tophat 
>> through galaxy?
>> Also, how does galaxy react if a job needs more memory than the limit 
>> set by torque?
>> Any information would help me a lot!
>> My sincere salutations to you all!!!
>> Cordialement / Regards,
>> Edgar Fernandez
>> System Administrator (Linux)
>> Direction Générale des Technologies de l'Information et de la 
>> Communication
>> Tel. (office): 1-514-343-6111, ext. 16568
>> Université de Montréal
>> ___________________________________________________________
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this and other 
>> Galaxy lists, please use the interface at:
>>   https://lists.galaxyproject.org/
>> To search Galaxy mailing lists use the unified search at:
>>   http://galaxyproject.org/search/mailinglists/