Re: [galaxy-dev] BED files not recognized correctly

2013-01-22 Thread Hans-Rudolf Hotz

Hi Thon


We see the same with our BED files, and I can reproduce it with your 
example. I tend to ignore it since (so far) it has no influence on the 
functionality when I use the BED file in any other tool.


I only run into trouble when I demonstrate working with BED files in our 
introductory courses.


Sorry, no solution.


Regards, Hans-Rudolf



On 01/22/2013 10:35 PM, Anthonius deBoer wrote:

Hi,

I have noticed for a while now that BED files are not recognized
correctly, or at least not parsed out correctly.
I notice that, invariably, the metadata for a (9-column) BED file states there
is 1 region and X comments, where X + 1 is the actual number of regions
in the file.

[Attachment: Capture.JPG]

Here's a few lines from the file
1	38076950	38077349	utr3:RSPO1	1	-	38077349	38077349	0,0,255
1	38077420	38078426	utr3:RSPO1	1	-	38078426	38078426	0,0,255
1	38078426	38078593	cds:RSPO1	1	-	38078426	38078593	255,0,0
1	38079375	38079564	cds:RSPO1	1	-	38079375	38079564	255,0,0
1	38079855	38080005	cds:RSPO1	1	-	38079855	38080005	255,0,0
1	38082155	38082347	cds:RSPO1	1	-	38082155	38082347	255,0,0
1	38095239	38095333	cds:RSPO1	1	-	38095239	38095333	255,0,0
1	38095333	38095621	utr5:RSPO1	1	-	38095621	38095621	0,0,255


Any ideas why it thinks there are comments in the file and why only one
region?

The file is a regular text file with Unix (LF) line endings; it is not DOS
format or anything...

It also does not parse out the name, score, and strand info, but once I
correct that manually, it works. It is a pain to have to do that
every time, though...
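
A quick way to sanity-check such a file is to count the lines that parse as
9-column regions versus the lines a sniffer might treat as comments. The
sketch below is a generic diagnostic under that assumption; it is not
Galaxy's actual metadata code:

# Hedged diagnostic sketch (not Galaxy's actual sniffer): count lines that
# parse as BED regions versus lines that look like comments/headers.
import sys

def classify_bed(path):
    regions = comments = 0
    with open(path) as handle:
        for line in handle:
            line = line.rstrip('\n')
            if not line or line.startswith(('#', 'track', 'browser')):
                comments += 1
                continue
            fields = line.split('\t')
            try:
                int(fields[1]); int(fields[2])  # start/end must parse as ints
                regions += 1
            except (IndexError, ValueError):
                comments += 1  # unparseable rows may be miscounted as comments
    return regions, comments

if __name__ == '__main__':
    print('%d regions, %d comment-like lines' % classify_bed(sys.argv[1]))

If the fields are separated by spaces rather than real tabs (as in the
garbled paste above), every row fails the integer check, which would produce
exactly the "1 region, X comments" pattern described.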

Thanks,

Thon


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/




[galaxy-dev] Upgrade Python to a newer version.

2013-01-22 Thread Luobin Yang
Hi, Galaxy developers,

My Python version is 2.6.3 and I would like to upgrade it to 2.7.3. Is there
anything that we need to do on Galaxy after Python is upgraded to a new
version?
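
One thing that is known to matter on 2013-era Galaxy: the framework's Python
eggs are fetched per interpreter version, so after an upgrade they generally
need re-fetching. A minimal sketch, assuming a standard galaxy-dist checkout
with scripts/fetch_eggs.py:

# Hedged sketch: re-fetch Galaxy's version-specific eggs after upgrading
# Python. Assumes a 2013-era galaxy-dist checkout (scripts/fetch_eggs.py);
# run this from the galaxy-dist root with the *new* interpreter.
import subprocess
import sys

subprocess.check_call([sys.executable, 'scripts/fetch_eggs.py'])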

Thanks,
Luobin
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Job handler keeps crashing

2013-01-22 Thread Derrick Lin
Hey Ido,

I actually copied & pasted those committed changes into drmaa.py manually
and it serves well as a temporary solution.

D


On Tue, Jan 22, 2013 at 10:51 PM, Ido Tamir  wrote:

> Is it possible to backport this onto the latest distribution?
> Yes, I'm lazy, but there are also others that are still updating
> within the next weeks and will have problems without them being
> aware of this fix.
>
> best,
> ido
>
>
> On Jan 21, 2013, at 9:50 PM, Nate Coraor wrote:
>
> > Hi all,
> >
> > The commit[1] that fixes this is not in the January 11 distribution.
>  It'll be part of the next distribution.
> >
> > --nate
> >
> > [1]
> https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0
> >
> > On Jan 21, 2013, at 3:10 PM, Anthonius deBoer wrote:
> >
> >> I have seen this same issue exactly. Python just dies without any
> errors in the log. Using the latest galaxy-dist
> >>
> >> Sent from my iPhone
> >>
> >> On Jan 20, 2013, at 8:35 PM, Derrick Lin  wrote:
> >>
> >>> Update to the 11 Jan 2013 dist does not help with this issue. :(
> >>>
> >>> I checked the database and had a look at the job entries that
> >>> handler0 tried to stop before it shut down:
> >>>
> >>> | 3088 | 2013-01-03 14:25:38 | 2013-01-03 14:27:05 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1659 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
> >>> | 3091 | 2013-01-04 10:52:19 | 2013-01-07 09:14:34 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1659 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
> >>> | 3093 | 2013-01-07 22:02:21 | 2013-01-07 22:16:27 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_pos2bed/1.0.0 | 1.0.0 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1749 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
> >>>
> >>> So basically the job table has several of these entries that are assigned
> >>> to handler0 and marked as "deleted_new". When handler0 comes up, it starts
> >>> stopping these jobs; after the first job has been "stopped", handler0
> >>> crashed and died. But that job was then marked as "deleted".
> >>>
> >>> I think if I manually change the job state from "deleted_new" to
> >>> "deleted" in the db, handler0 will be fine. I am just concerned
> >>> about how these jobs were created (assigned to a handler but marked as
> >>> "deleted_new").
> >>>
> >>> Cheers,
> >>> D
> >>>
> >>>
> >>> On Mon, Jan 21, 2013 at 1:49 PM, Derrick Lin 
> wrote:
> >>> I had a close look at the code in
> >>>
> >>> galaxy-dist / lib / galaxy / jobs / handler.py
> >>> galaxy-dist / lib / galaxy / jobs / runners / drmaa.py
> >>>
> >>> and found that stopping "deleted" and "deleted_new" jobs seems to be a
> >>> normal routine for the job handler. I could not find any exception that
> >>> caused the shutdown.
> >>>
> >>> I do notice that in galaxy-dist on Bitbucket there is one commit with the
> >>> comment "Fix shutdown on python >= 2.6.2 by calling setDaemon when creating
> >>> threads (these are still..."; it seems relevant?
> >>>
> >>> I will update to the 11 Jan release and see if it fixes the issue.
> >>>
> >>> D
> >>>
> >>>
> >>> On Fri, Jan 18, 2013 at 4:03 PM, Derrick Lin 
> wrote:
> >>> Hi guys,
> >>>
> >>> We have updated our Galaxy to the 20 Dec 2012 release. Recently we found
> >>> that some submitted jobs could not start (they stay gray forever).
> >>>
> >>> We found that it was caused by the job manager sending jobs to a handler
> >>> (handler0) whose Python process had crashed and died.
> >>>
> >>> From the handler log we found the last messages right before the crash:
> >>>
> >>> galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 Stopping job 3032:
> >>> galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 stopping job 3032 in
> drmaa runner
> >>>
> >>> We restarted Galaxy; handler0 was up for a few seconds, then died again
> >>> with the same error messages, except the job number moved to the next one.
> >>>
> >>> We observed that the jobs it was trying to stop are all previous jobs
> whose status is either "deleted" or "deleted_new".
> >>>
> >>> We have never seen this in the past, so we are wondering if there are
> >>> bugs in the new release?
> >>>
> >>> Cheers,
> >>> Derrick
> >>>
> >>>
> >>> ___

[galaxy-dev] Questions about tool wrapping

2013-01-22 Thread Jean-Frédéric Berthelot
Hi list, 

I am currently writing a Galaxy wrapper (for SortMeRNA [1]) and I have 
several questions. 


== Pre-processing an input == 

SortMeRNA filters rRNA reads against an indexed rRNA database. This database 
can either be one bundled together with SortMeRNA and defined in a relevant 
.loc file, or one from the history. Either way, it must have been processed 
beforehand with another binary (just like makeblastdb/blastp). 

I am not sure of the best design here: 
* Providing another wrapper for this other binary, with users explicitly using 
it beforehand in their workflow? 
* Assuming the databases have been processed outside of Galaxy beforehand? (As 
the tool relies on the file name, this would work for databases in the .loc 
file but not for ones from the history.) 
* Providing an extra-smart wrapper which would check whether the database has 
been indexed and, if not, invisibly index it before running SortMeRNA? (A 
rough sketch of this option follows below.) 
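
A minimal sketch of that third option, as a driver script the wrapper could
call. The index-file suffix and both command lines are illustrative
assumptions, not SortMeRNA's documented interface:

# Hedged sketch of the "extra-smart wrapper" (option 3): index the database
# on the fly when the index is missing. 'buildtrie' is the preprocessing
# binary named in this thread; the '.bursttrie' suffix and all flags are
# hypothetical placeholders.
import os
import subprocess
import sys

db, reads, output = sys.argv[1:4]

if not os.path.exists(db + '.bursttrie'):     # hypothetical index artifact
    subprocess.check_call(['buildtrie', db])  # hypothetical invocation
subprocess.check_call(['sortmerna', '--db', db,          # hypothetical flags
                       '--reads', reads, '--out', output])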


== Error handling == 

I have been trying to use <stdio> to catch an error (the one described in the 
section above, actually) and inform the user with a meaningful message. From 
the console debug, it seems the error is correctly detected, since it displays 
the custom description: 

galaxy.jobs INFO 2013-01-22 19:02:04,198 Job 202: Fatal error: The database 
${databases} has not been preprocessed using buildtrie before using SortMeRNA. 

But Galaxy only displays a generic message in the failed history: 

« tool error An error occurred running this job: Unable to finish job » 

Am I missing something, or is <stdio> not meant to be used this way? 


== Using a previously set choice to set a default value later == 

Main option Alpha (a binary choice A/B) influences the default value of advanced 
option Beta (a float). Beta is buried in the "advanced options" section; I'd 
rather not have it just next to Alpha. 

Mirroring constructs seen in the <conditional> tag, I was hoping to do something 
like this (markup reconstructed; the original XML was stripped from the archive): 

<param name="alpha" type="select" label="Alpha"> 
  <option value="A">Option A</option> 
  <option value="B">Option B</option> 
</param> 
[…] 
<when value="A"> 
  <param name="beta" type="float" value="…" label="Beta" /> 
</when> 
<when value="B"> 
  <param name="beta" type="float" value="…" label="Beta" /> 
</when> 

But that does not seem to work. Am I missing something, or is it just not 
possible? 

Alternatively, I looked into setting some hidden variable $default_value in the 
Alpha select, which would then be used as the default value of Beta, 
but that does not seem to work either. 


Thanks for your help! 

[1]  

-- 
Jean-Frédéric 
Bonsai Bioinformatics group 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Blank history panel / Error in history API at listing contents

2013-01-22 Thread Peter Cock
On Tue, Jan 22, 2013 at 4:43 PM, Carl Eberhard  wrote:
> Hello, Peter
>
> In revision 8646:50c65739cd1a, I've changed the error handling in
> history_contents.py:index and the client-side code that fetches dataset
> info.
>
> Long form:
> Now when an error (like yours) occurs while fetching dataset information for
> one or more datasets, the method will record the error for that dataset as
> part of the returned list and continue trying to fetch the other requested
> datasets. In other words, one specific dataset in error will not cause the
> entire API request to fail (and break the history panel).
>
> Client-side code has been added to handle these types of errors better,
> both through the API and during the initial mako page build. Now a dataset's
> server error will be shown on the client side as a dataset in the 'error' state.
>
> Short form:
> Both the API and the client side should handle a single dataset erroring more
> gracefully than they did, and the history panel should be more resilient and
> useful during and after a server error (at least of this kind).
>
> Please let me know if you see more problems,
> C

Will do, thanks,

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Blank history panel / Error in history API at listing contents

2013-01-22 Thread Carl Eberhard
Hello, Peter

In revision 8646:50c65739cd1a, I've changed the error handling in
history_contents.py:index and the client-side code that fetches dataset
info.

Long form:
Now when an error (like yours) occurs while fetching dataset information for
one or more datasets, the method will record the error for that dataset as
part of the returned list and continue trying to fetch the other requested
datasets. In other words, one specific dataset in error will not cause the
entire API request to fail (and break the history panel).

Client-side code has been added to handle these types of errors better,
both through the API and during the initial mako page build. Now a dataset's
server error will be shown on the client side as a dataset in the 'error' state.

Short form:
Both the API and the client side should handle a single dataset erroring more
gracefully than they did, and the history panel should be more resilient and
useful during and after a server error (at least of this kind).
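
A minimal sketch of the per-dataset pattern described above; the function and
field names are illustrative, not Galaxy's actual history_contents.py code:

# Hedged sketch: one failing dataset becomes an 'error' entry in the result
# list instead of failing the whole API response. Names are illustrative.
def to_dict(dataset):
    # Illustrative serializer; fails for datasets with missing metadata.
    if dataset.get('name') is None:
        raise ValueError('missing metadata')
    return {'id': dataset['id'], 'name': dataset['name'], 'state': 'ok'}

def index(datasets):
    results = []
    for dataset in datasets:
        try:
            results.append(to_dict(dataset))
        except Exception as exc:
            # Record the error for this dataset and keep going.
            results.append({'id': dataset.get('id'), 'state': 'error',
                            'error': str(exc)})
    return results

if __name__ == '__main__':
    print(index([{'id': 1, 'name': 'reads.bed'}, {'id': 2, 'name': None}]))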

Please let me know if you see more problems,
C




On Thu, Jan 17, 2013 at 5:54 PM, Peter Cock wrote:

>
>
> On Thursday, January 17, 2013, Carl Eberhard wrote:
>
>> Hello, Peter
>>
>> The blank panel should definitely be handled more gracefully in this
>> situation - I'll work on that.
>>
>>
> Great :)
>
>
>> Have you noticed though, since your patch, any particular pattern to
>> which metadata names are turning out to equal None (some obviously missing
>> metadata field)? Is there a particular datatype?
>>
>
> No, and thus far I've only had it on my development Galaxy install which
> has (compared to a production system) been exposed to plenty of cluster
> oddities and other corner cases. It is also running on SQLite (easy to
> reset and it is just me running jobs so contention is not normally an
> issue).
>
> Note that without adding more debugging or looking directly in the
> database there is no easy way to tell what the datasets causing this
> problem were, or what file type.
>
>
>>
>>
> Have you seen the assertion fail?
>> C
>>
>>
> Not yet, no. The fact that the two fields were both None suggests to me that
> sometimes an entry was not recorded properly...
>
> Regards,
>
>  Peter
>
>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] ASCII and utf-8

2013-01-22 Thread Anatoliy Pandorin
Hi,
Thanks for your help!

I have a problem: on some pages (in particular the registration page, in a
.mako file) I need to add comments (explanatory information in Cyrillic,
which requires UTF-8), but the template files are in ASCII.
Is there a traditional method of solving this encoding problem?
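
A minimal sketch of one standard approach, using Mako's documented encoding
controls: declare the template's encoding on its first line with a magic
comment, or pass input_encoding when loading it. The file name below is
illustrative:

# Hedged sketch: render a Mako template that contains Cyrillic comments.
# Mako honors a '## -*- coding: utf-8 -*-' magic comment on the template's
# first line; input_encoding below is the programmatic equivalent.
from mako.template import Template

tmpl = Template(filename='registration.mako', input_encoding='utf-8')
print(tmpl.render(username=u'Анатолий'))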

Best regards,
 Anatoly
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] JAVAscript error after initiating tool

2013-01-22 Thread Joachim Jacob |VIB|

Hi Carl,

Sorry for the long delay!

It did not happen in a published or shared history. But it did 
definitely happen in a history where some datasets were copied from 
another history, since I've been copying a lot lately.



Thanks,
J

Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

On 01/17/2013 05:32 PM, Carl Eberhard wrote:

Joachim,

That certainly helps. Thanks.

Do you recall if this error happened (or happens more often) with a 
published or shared history? Or if it was imported or copied from a 
published or shared history?


Thanks for the info,
C



On Thu, Jan 17, 2013 at 9:58 AM, Joachim Jacob wrote:


Hi,

In the server logs I've found:

web0.log: galaxy.webapps.galaxy.api.histories ERROR 2013-01-11
19:56:21,071 Error in history API at showing history detail:
History is not accessible to the current user
web1.log: galaxy.webapps.galaxy.api.histories ERROR 2013-01-11
19:55:14,341 Error in history API at showing history detail:
History is not accessible to the current user
web5.log: galaxy.webapps.galaxy.api.histories ERROR 2013-01-11
20:29:11,138 Error in history API at showing history detail:
History is not accessible to the current user

The specific situation here is that I am almost at 100% of my storage
quota... I don't know if this gives some hint. The problem appears
sporadically; it is not reliably reproducible :-)


Cheers,
Joachim




Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34 
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

On 01/16/2013 05:33 PM, Carl Eberhard wrote:

Hi, Joachim

If you have access to your server logs, do you see any log
messages containing 'Error in history API' around the time
those happen?

Is there a situation where this happens more often (or a way
to reliably reproduce)?

I'm unable to reproduce this locally so far.

The error definitely can be handled better on the javascript
side, but I'd like to track down the API error as well before
I change the javascript.

Thanks for the help,
C




On Wed, Jan 16, 2013 at 7:57 AM, Joachim Jacob wrote:

Hi all,

For who is interested. Occasionally I get this strange
Javascript
error, just after clicking 'run' on a tool.

ERROR updating hdas from api history contents:e47699a32b93ce7f

The tool starts running, but the history panel is not updated. I have to
click 'analyse data' to see the updated history.
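
One way to see the underlying failure (a sketch, assuming the hex string in
the JavaScript error is the encoded history id and that an API key is
configured) is to call the history-contents API directly:

# Hedged sketch: query the history contents API directly to surface the
# server-side error behind the opaque client message. The base URL and API
# key are placeholders; the id comes from the JavaScript error above.
import requests

base = 'http://localhost:8080'  # hypothetical Galaxy URL
history_id = 'e47699a32b93ce7f'
r = requests.get('%s/api/histories/%s/contents' % (base, history_id),
                 params={'key': 'YOUR_API_KEY'})
print('%s %s' % (r.status_code, r.text))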


Cheers,
Joachim

-- Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34 


Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

http://lists.bx.psu.edu/






___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


[galaxy-dev] Jobs status not being updated

2013-01-22 Thread Joachim Jacob |VIB|

Hi all,


After updating to the latest Galaxy dist release (and moving our server 
to a new location, though I don't think that can be the reason), the status 
of jobs is not being updated. The box in the history stays grey. 
After restarting Galaxy or the server, the correct status of the job is 
displayed.


Any advice on how to proceed is appreciated.


Thanks,
Joachim




--
Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] Job handler keeps crashing

2013-01-22 Thread Ido Tamir
Is it possible to backport this onto the latest distribution?
Yes, I'm lazy, but there are also others that are still updating
within the next weeks and will have problems without them being
aware of this fix.

best,
ido


On Jan 21, 2013, at 9:50 PM, Nate Coraor wrote:

> Hi all,
> 
> The commit[1] that fixes this is not in the January 11 distribution.  It'll 
> be part of the next distribution.
> 
> --nate
> 
> [1] 
> https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0
> 
> On Jan 21, 2013, at 3:10 PM, Anthonius deBoer wrote:
> 
>> I have seen this same issue exactly. Python just dies without any errors in 
>> the log. Using the latest galaxy-dist
>> 
>> Sent from my iPhone
>> 
>> On Jan 20, 2013, at 8:35 PM, Derrick Lin  wrote:
>> 
>>> Update to the 11 Jan 2013 dist does not help with this issue. :(
>>> 
>>> I checked the database and had a look at the job entries that handler0 
>>> tried to stop before it shut down:
>>> 
>>> | 3088 | 2013-01-03 14:25:38 | 2013-01-03 14:27:05 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1659 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
>>> | 3091 | 2013-01-04 10:52:19 | 2013-01-07 09:14:34 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1659 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
>>> | 3093 | 2013-01-07 22:02:21 | 2013-01-07 22:16:27 | 531 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_pos2bed/1.0.0 | 1.0.0 | deleted_new | Job output deleted by user before job completed. | NULL | NULL | NULL | NULL | NULL | NULL | 1749 | drmaa://-V -j n -R y -q intel.q/ | NULL | NULL | 76 | 0 | NULL | NULL | handler0 | NULL |
>>> 
>>> So basically the job table has several of these entries that are assigned to 
>>> handler0 and marked as "deleted_new". When handler0 comes up, it starts 
>>> stopping these jobs; after the first job has been "stopped", handler0 crashed 
>>> and died. But that job was then marked as "deleted".
>>> 
>>> I think if I manually change the job state from "deleted_new" to "deleted" 
>>> in the db, handler0 will be fine. I am just concerned about how these jobs 
>>> were created (assigned to a handler but marked as "deleted_new"). 
>>> 
>>> Cheers,
>>> D
>>> 
>>> 
>>> On Mon, Jan 21, 2013 at 1:49 PM, Derrick Lin  wrote:
>>> I had a close look at the code in 
>>> 
>>> galaxy-dist / lib / galaxy / jobs / handler.py
>>> galaxy-dist / lib / galaxy / jobs / runners / drmaa.py
>>> 
>>> and found that stopping "deleted" and "deleted_new" jobs seems to be a 
>>> normal routine for the job handler. I could not find any exception that 
>>> caused the shutdown.
>>> 
>>> I do notice that in galaxy-dist on Bitbucket there is one commit with the 
>>> comment "Fix shutdown on python >= 2.6.2 by calling setDaemon when creating 
>>> threads (these are still..."; it seems relevant?
>>> 
>>> I will update to the 11 Jan release and see if it fixes the issue.
>>> 
>>> D
>>> 
>>> 
>>> On Fri, Jan 18, 2013 at 4:03 PM, Derrick Lin  wrote:
>>> Hi guys,
>>> 
>>> We have updated our Galaxy to the 20 Dec 2012 release. Recently we found that 
>>> some submitted jobs could not start (they stay gray forever).
>>> 
>>> We found that it was caused by the job manager sending jobs to a handler 
>>> (handler0) whose Python process had crashed and died.
>>> 
>>> From the handler log we found the last messages right before the crash:
>>> 
>>> galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 Stopping job 3032:
>>> galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 stopping job 3032 in 
>>> drmaa runner
>>> 
>>> We restarted Galaxy; handler0 was up for a few seconds, then died again 
>>> with the same error messages, except the job number moved to the next one.
>>> 
>>> We observed that the jobs it was trying to stop are all previous jobs whose 
>>> status is either "deleted" or "deleted_new".
>>> 
>>> We have never seen this in the past, so we are wondering if there are bugs 
>>> in the new release?
>>> 
>>> Cheers,
>>> Derrick
>>> 
>>> 
>>> ___
>>> Please keep all replies on the list by using "reply all"
>>> in your mail client.  To manage your subscriptions to this
>>> and other Galaxy lists, please use the interface at:
>>> 
>>>   http://lists.bx.psu.edu/

Re: [galaxy-dev] deleted_new kills handlers

2013-01-22 Thread Ido Tamir
No, the problem is still there, but we have not updated since the beginning of 
December.

maybe this is related:
http://gmod.827538.n3.nabble.com/segfault-in-libdrmaa-gt-galaxy-front-end-failure-td4027349.html

best,
ido

On Jan 21, 2013, at 5:45 AM, Derrick Lin wrote:

> Hi Ido,
> 
> I just found your post, and I think we are having a similar issue to yours. 
> (I posted mine to the mailing list yesterday; you can find it there for the 
> details of my problem.)
> 
> I am wondering whether you were able to fix your problem?
> 
> Cheers,
> Derrick
> 
> 
> 
> 
> On Tue, Dec 11, 2012 at 12:13 AM, Ido Tamir  wrote:
> Dear galaxy maintainers,
> 
> we have the problem that killing drmaa jobs before they run (which leads to
> a deleted_new state) kills our handlers (sometimes?). The job gets sent to
> the handler, but the handler never acknowledges the jobs (local and drmaa),
> and all jobs submitted afterwards stay in the submitted state.
> 
> Normally submission works fine, running also.
> 
> Is such a problem known to anybody else?
> 
> in my universe_wsgi.ini:
> enable_job_recovery = False
> I don't know which other settings could affect this.
> 
> Could it be an old drmaa library, maybe because SGE was updated
> after the Galaxy installation? Any other ideas?
> 
> 
> thank you very much,
> ido
> 
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>   http://lists.bx.psu.edu/
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/