OK, it seems to be a proxy error.

When the proxy does not receive data from the server within the configured time, it times out and closes the connection. My guess is that the process that packs the datasets takes too long, so the connection is closed before the packaging is finished. Just a guess, though.

From the httpd logs:
=====
[Thu Mar 28 15:14:46 2013] [error] [client 157.193.10.52] (70007)The timeout specified has expired: proxy: error reading status line from remote server localhost, referer: http://galaxy.bits.vib.be/library_common/browse_library?sort=name&f-description=All&f-name=All&id=142184b92db50a63&cntrller=library&async=false&show_item_checkboxes=false&operation=browse&page=1
[Thu Mar 28 15:14:46 2013] [error] [client 157.193.10.52] proxy: Error reading from remote server returned by /library_common/act_on_multiple_datasets, referer: http://galaxy.bits.vib.be/library_common/browse_library?sort=name&f-description=All&f-name=All&id=142184b92db50a63&cntrller=library&async=false&show_item_checkboxes=false&operation=browse&page=1
=====

I will see if changing the proxy timeout settings fixes this issue.
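
If it is indeed the proxy, the relevant knobs on a stock Apache setup would be the global Timeout and the mod_proxy ProxyTimeout directives in the Galaxy virtual host, something like the lines below (the 1800 seconds is only a guess on my part; it just has to be longer than the slowest packaging job):

     # Apache httpd configuration for the Galaxy proxy -- example values only
     Timeout 1800
     ProxyTimeout 1800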


Cheers,
Joachim

Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

On 03/28/2013 02:58 PM, Joachim Jacob | VIB | wrote:
Hi Assaf,


It turns out that the problem appears not to be the total size of the history, but the size of the individual datasets.

Now, histories which contain big datasets (>1 GB) imported from Data Libraries cause the exporting process to crash. Can somebody confirm whether this is a bug? I uploaded the datasets to a directory on the server, and they are then imported from that directory into a Data Library.

Downloading datasets of >1 GB directly from a Data Library (as tar.gz) also crashes.

Note: I have re-enabled abrt, but I am waiting for some jobs to finish before restarting.


Cheers,
Joachim.


Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib



On Tue 26 Mar 2013 03:45:43 PM CET, Assaf Gordon wrote:
Hello Joachim,

Joachim Jacob | VIB | wrote, On 03/26/2013 10:01 AM:

abrt was indeed filling up the root directory, so I disabled it.

I have done some exporting tests, and the behaviour is not consistent.

1. *Size*: in general, exporting worked for smaller datasets and usually crashed on bigger ones (starting from 3 GB). So size is key?
2. But now I have found several histories of 4.5 GB that I was able to export... So much for the size hypothesis.

Another observation: when the export crashes, the corresponding webhandler process dies.


A crashing python process crosses the fine boundary between the Galaxy code and Python internals... perhaps the Galaxy developers can help with this problem.

It would be helpful to find a reproducible case with a specific history or a specific sequence of events, then someone can help you with the debugging.

Once you find a history that causes a crash (every time or only sometimes, but in a reproducible way), try to pinpoint when exactly it happens: is it when you start preparing the export (while "export_history.py" is running as a job), or when you start downloading the exported file? (I'm a bit behind on the export mechanism, so perhaps there are other steps involved.)

Couple of things to try:

1. set "cleanup_job=never" in your universe_wsgi.ini - this will keep the temporary files, and will help you re-produce jobs later.

2. Enable "abrt" again - it is not the problem (just the symptom).
You can clean up the "/var/spool/abrt/XXX" directory from previous crash logs, then reproduce a new crash and look at the collected files (assuming you have enough space to store at least one crash). In particular, look at the file called "coredump" - it will tell you which script crashed.
Try running:
     $ file /var/spool/abrt/XXXX/coredump
     /var/spool/abrt/XXXX/coredump: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'python XXXXXX.py'

Instead of "XXXX.py" it would show the python script that crashed (hopefully with full command-line parameters).

It won't show which python statement caused the crash, but it will point in the right direction.
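
If you want to dig one step deeper than "file" (entirely optional), gdb can read the same core file and print a native backtrace, which at least shows whether the crash happened inside a C extension (e.g. zlib) or somewhere else. The paths below are placeholders - use the abrt directory of the fresh crash and the python interpreter that Galaxy actually runs under:

     $ gdb /usr/bin/python /var/spool/abrt/XXXX/coredump
     (gdb) bt
     (gdb) quit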

So now I suspect something is wrong with the datasets, but I am not able to find anything meaningful in the logs. I am not yet confident about turning on more verbose logging in Python, but apparently the logging is set up with the "logging" module, initialised like logging.getLogger( __name__ ).


It could be a bad dataset (file on disk), or a problem in the database, or something completely different (a bug in the python archive module).
No point guessing until there are more details.
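
As for turning on more logging: the logging.getLogger( __name__ ) pattern just gives each module its own named logger, and those loggers fall back to the level of the root logger. A minimal, generic Python sketch of the mechanism (this is plain standard-library behaviour, shown only as an illustration - not Galaxy's actual logging setup):

     import logging

     # Raising the root logger to DEBUG makes every module-level logger
     # created via logging.getLogger(__name__) verbose as well.
     logging.basicConfig(
         level=logging.DEBUG,
         format="%(asctime)s %(name)s %(levelname)s %(message)s",
     )

     log = logging.getLogger(__name__)
     log.debug("debug messages from this module are now visible")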

-gordon



