[galaxy-user] Error out of memory when trying to retrieve output

2013-08-29 Thread Delong, Zhou
Hello,
I wanted to download the accepted junctions .bam file from the TopHat output of my 
local instance, and I get an out-of-memory error. When I examined the server 
via the command line, I found that a Python process used by Galaxy occupied more 
than 80% of total memory (on a virtual machine with 10 GB of RAM). I tried the 
curl command to retrieve the data file after rebooting the virtual machine, and 
Python was activated again and used up all the memory.
The BAM is around 20 GB in size, but I have never had this kind of problem with other 
TopHat analyses on my local instance, although they are of the same size. 
The description on the web mentioned some .dat files, which I managed to find on 
the disk, but not the BAM.
Can anyone explain what Python is doing and how I can solve this, please?
Thanks,
Delong

___
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using reply all in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

  http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:

  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-user] Error out of memory when trying to retrieve output

2013-08-29 Thread Dannon Baker
Do you have debug enabled in your universe_wsgi.ini? IIRC, this causes the
entire response to be loaded into memory (which is a bad thing when the
response is 20 GB).
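
For reference, a minimal sketch of the setting in question, assuming a default Galaxy layout where universe_wsgi.ini sits in the Galaxy root directory and the option lives in the [app:main] section:

```ini
# universe_wsgi.ini
[app:main]
# With debug = True, Galaxy's middleware buffers the entire response
# in memory before sending it -- harmless for small pages, but it will
# exhaust RAM when the response is a 20 GB BAM file.
debug = False
```

Restart the Galaxy server after changing this so the setting takes effect.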


On Thu, Aug 29, 2013 at 3:50 PM, Delong, Zhou delong.z...@usherbrooke.ca wrote:

