Hi,

We've just re-run a TopHat job with debug = True and this is the email I got back
from our HPC guys:



For the instance that just crashed, I ran it interactively (i.e. not in daemon
mode) and with debug = True.

The only error I'm seeing on the screen is:

./run.sh: line 48:  8705 Segmentation fault      python ./scripts/paster.py serve universe_wsgi.ini $@



So we are not much wiser here. As for the question about what path we are loading
- sorry for being a bit slow, but what exactly do you mean by that?
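
One thing that might help in the meantime is watching how much memory the paster process itself uses while a large TopHat job runs, in case the web process is being squeezed. A minimal sketch, assuming a Linux host where /proc/<pid>/status is readable (the PID argument and the 5-second poll are illustrative only):

import sys
import time

def rss_kib(pid):
    """Return the resident set size of process `pid` in KiB, or None if it is gone."""
    try:
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except IOError:
        return None
    return None

if __name__ == "__main__":
    pid = int(sys.argv[1])  # e.g. the PID of the paster.py process (8705 in the message above)
    while True:
        rss = rss_kib(pid)
        if rss is None:
            print("process %d has gone away" % pid)
            break
        print("%s  VmRSS = %d KiB" % (time.strftime("%H:%M:%S"), rss))
        time.sleep(5)

If the resident size climbs steadily towards the node's limit before the crash, that would fit with the MemoryError further down this thread.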


Sorry we are eating up so much of your time...

Best Wishes,
David.

__________________________________
Dr David A. Matthews

Senior Lecturer in Virology
Room E49
Department of Cellular and Molecular Medicine,
School of Medical Sciences
University Walk,
University of Bristol
Bristol.
BS8 1TD
U.K.

Tel. +44 117 3312058
Fax. +44 117 3312091

d.a.matth...@bristol.ac.uk






On 12 Dec 2011, at 21:28, Nate Coraor wrote:

> On Dec 12, 2011, at 5:25 AM, David Matthews wrote:
> 
>> Hi, 
>> 
>> Many thanks for the feedback. Sorry about missing some of the traceback;
>> here it is in full:
>> 
>> Exception happened during processing of request from ('xxx.xxx.xxx.xxx', 32781)
>> Traceback (most recent call last):
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1053, in process_request_in_thread
>>     self.finish_request(request, client_address)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 322, in finish_request
>>     self.RequestHandlerClass(request, client_address, self)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 617, in __init__
>>     self.handle()
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 432, in handle
>>     BaseHTTPRequestHandler.handle(self)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/BaseHTTPServer.py", line 329, in handle
>>     self.handle_one_request()
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 421, in handle_one_request
>>     self.raw_requestline = self.rfile.readline()
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/socket.py", line 444, in readline
>> Unexpected exception in worker <function <lambda> at 0x2aab12fa9f50>
>> Traceback (most recent call last):
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 863, in worker_thread_callback
>>     runnable()
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1037, in <lambda>
>>     lambda: self.process_request_in_thread(request, client_address))
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1056, in process_request_in_thread
>>     self.handle_error(request, client_address)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1044, in handle_error
>>     return super(ThreadPoolMixIn, self).handle_error(request, client_address)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 338, in handle_error
>>     traceback.print_exc() # XXX But this goes to stderr!
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/traceback.py", line 233, in print_exc
>>     print_exception(etype, value, tb, limit, file)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/traceback.py", line 125, in print_exception
>>     print_tb(tb, limit, file)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/traceback.py", line 69, in print_tb
>>     line = linecache.getline(filename, lineno, f.f_globals)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/linecache.py", line 14, in getline
>>     lines = getlines(filename, module_globals)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/linecache.py", line 40, in getlines
>>     return updatecache(filename, module_globals)
>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/linecache.py", line 131, in updatecache
>>     lines = fp.readlines()
>> MemoryError
> 
> Hi David,
> 
> It looks like something is trying to read way too much data into something 
> else, probably the response.
> 
> 1. is debug = True?
> 2. what path are you loading that yields this result?
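> 
> For what it's worth, the MemoryError itself is raised while traceback formatting re-reads a source file: linecache pulls the whole file in with fp.readlines() (the last frame above), so failing there usually means the process was already at its memory ceiling. A minimal sketch of that effect, using an arbitrary address-space cap:
> 
> import resource
> 
> # Cap this process's address space (Linux/Unix only); the 300 MB figure is arbitrary.
> cap = 300 * 1024 * 1024
> resource.setrlimit(resource.RLIMIT_AS, (cap, cap))
> 
> try:
>     data = ["x" * 1024 for _ in range(10 ** 6)]  # try to hold roughly 1 GB of strings
> except MemoryError:
>     print("allocation failed once the cap was reached, just as fp.readlines() did")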
> 
> --nate
> 
>> 
>> 
>> We'll look again at the webpage for more clues. As for trying to reproduce
>> it elsewhere, the data we are using here are all from Galaxy main, which has
>> run the jobs several times without a problem. So we're confident we are
>> missing something in the setup at our end - presumably to do with memory
>> usage by TopHat - and any thoughts would be most welcome.
>> 
>> Best Wishes,
>> David.
>> 
>> 
>> On 10 Dec 2011, at 14:18, Jeremy Goecks wrote:
>> 
>>> David,
>>> 
>>> What is the full stack trace that you're seeing? The stack trace is the 
>>> text below "Traceback" and identifies the exact location where the problem 
>>> occurs. 
>>> 
>>> Are you following these guidelines:
>>> 
>>> http://wiki.g2.bx.psu.edu/Admin/NGS%20Local%20Setup
>>> 
>>> for setting up large genomes in Galaxy?
>>> 
>>> Also, it would be ideal if you could upload the problematic data--genome + 
>>> reads--to our public instance (main.g2.bx.psu.edu) and see if you can 
>>> reproduce the problem that you're seeing.
>>> 
>>> Thanks,
>>> J.
>>> 
>>> On Dec 9, 2011, at 12:20 PM, David Matthews wrote:
>>> 
>>>> Hi,
>>>> 
>>>> We seem to have sorted out the problem of TopHat failing to run, but now we
>>>> have a new problem. When TopHat runs with a large genome (but not with
>>>> small genomes), it finishes the run fine and all the data is there, but the
>>>> web interface falls over about 8 hours into the run, and when we try to
>>>> access the web-based interface we get a Proxy Error. When we restart it,
>>>> all looks fine. These are the sort of errors our HPC people find:
>>>> 
>>>> Exception happened during processing of request from ('XXX.XXX.XXX.XXX', 32960)
>>>> Traceback (most recent call last):
>>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1053, in process_request_in_thread
>>>> Unexpected exception in worker <function <lambda> at 0x201caed8>
>>>> Traceback (most recent call last):
>>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 863, in worker_thread_callback
>>>> Unhandled exception in thread started by <bound method Thread.__bootstrap of <Thread(worker 118, stopped 1127647552)>>
>>>> Traceback (most recent call last):
>>>> 
>>>> Do you have any suggestions for what is happening?
>>>> 
>>>> 
>>>> Best Wishes,
>>>> David.
>>>> 
>>> 
>> 
> 

___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/
