[galaxy-dev] Picard MarkDups

2012-01-12 Thread Ryan Golhar
I'm trying to run Picard MarkDups through Galaxy. Picard is using the standard 4g for the Java max heap size, and I need to increase this. Is it possible to offer this as an option to the user? If not, where do I change it? I see the entry in picard_wrapper.py. Do I change it here or in the XML file for MarkDups? Ryan
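For illustration, a minimal sketch of the kind of change involved. The --maxjheap option name, the jar handling, and the command construction below are assumptions for the example, not the actual picard_wrapper.py code:

# Sketch only: shows where a heap-size option could be plumbed through a Picard wrapper.
import optparse
import subprocess

def build_picard_command(jar_path, input_bam, output_bam, max_heap='4g'):
    # -Xmx sets the Java max heap; this is where a hard-coded 4g would be replaced.
    return ['java', '-Xmx%s' % max_heap, '-jar', jar_path,
            'INPUT=%s' % input_bam,
            'OUTPUT=%s' % output_bam,
            'METRICS_FILE=%s.metrics' % output_bam]

if __name__ == '__main__':
    parser = optparse.OptionParser()
    parser.add_option('--maxjheap', default='4g', help='Java max heap size passed to -Xmx')
    parser.add_option('--jar', help='path to the Picard MarkDuplicates jar')
    parser.add_option('--input', help='input BAM')
    parser.add_option('--output', help='output BAM with duplicates marked')
    opts, args = parser.parse_args()
    subprocess.check_call(build_picard_command(opts.jar, opts.input, opts.output, opts.maxjheap))

Exposing this to the user would additionally require a matching param in the MarkDups tool XML and passing its value through on the tool's command line.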

Re: [galaxy-dev] Picard MarkDups

2012-01-12 Thread Ann Black
in picard_wrapper.py. Do I change it here or in the XML file for MarkDups? Ryan

Re: [galaxy-dev] Picard MarkDups

2012-01-12 Thread Ryan Golhar
If not, where do I change this? I see the entry in picard_wrapper.py. Do I change it here or in the XML file for MarkDups? Ryan

[galaxy-dev] download error - duplicate headers received from server

2012-01-12 Thread Jeremy Coate
I'm trying to download a fastq sanger file from my Galaxy (Main) account and have been getting the error message below ("Duplicate headers received from server") since 2:15pm, Wed, 1/11/12. I have a concatenated fastq file and I get this message when clicking the download (floppy disk) icon. Any help would be appreciated. Thanks, Jeremy

Re: [galaxy-dev] Galaxy Hang after DrmCommunicationException

2012-01-12 Thread Edward Kirton
Sometimes the scheduler can't keep up with all the work in its 15-second cycle, so it doesn't respond to some messages. Here's a fix I've been trying that seems to work, in lib/galaxy/jobs/runners/drmaa.py, in check_watched_items( self ), the method called by the monitor thread to look at each watched job.
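For reference, a minimal sketch of that kind of guard (not the exact patch from this thread). It assumes the drmaa-python binding's DrmCommunicationException, and that the runner keeps its DRMAA session on self.ds and its job list on self.watched; attribute names vary between Galaxy versions:

# Sketch only: skip a job this cycle if the scheduler does not answer, instead of
# failing it or stalling the monitor thread.
import logging
from drmaa.errors import DrmCommunicationException

log = logging.getLogger(__name__)

def check_watched_items(self):
    """Called by the monitor thread to look at each watched job and deal with state changes."""
    new_watched = []
    for drm_job_state in self.watched:
        job_id = drm_job_state.job_id
        try:
            state = self.ds.jobStatus(job_id)
        except DrmCommunicationException as e:
            # Scheduler did not respond within this cycle; keep the job watched
            # and retry on the next pass.
            log.warning("(%s) DRMAA communication problem, will retry: %s" % (job_id, e))
            new_watched.append(drm_job_state)
            continue
        # ... handle state transitions as before, based on `state` ...
        new_watched.append(drm_job_state)
    self.watched = new_watched

The idea is simply that a transient "no answer this cycle" from the scheduler leaves the job watched for the next pass rather than erroring it out or hanging the monitor.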

Re: [galaxy-dev] Status on importing BAM file into Library does not update

2012-01-12 Thread Ryan Golhar
Any ideas as to how to fix this? We are interested in using Galaxy to host all our NGS data. If indexing on the head node is going to happen, then this is going to be an extremely slow process.