Re: [galaxy-dev] deleted_new kills handlers

2013-01-22 Thread Ido Tamir
No, the problem is still there, but we have not updated since the beginning of
December.

Maybe this is related:
http://gmod.827538.n3.nabble.com/segfault-in-libdrmaa-gt-galaxy-front-end-failure-td4027349.html

best,
ido

On Jan 21, 2013, at 5:45 AM, Derrick Lin wrote:

 Hi Ido,
 
 I just found your post, and I think we are having a similar issue to yours. 
 (I posted it to the mailing list yesterday; you can find the details of my 
 problem there.)
 
 I am wondering if you were able to fix your problem?
 
 Cheers,
 Derrick
 
 
 
 
 On Tue, Dec 11, 2012 at 12:13 AM, Ido Tamir ta...@imp.ac.at wrote:
 Dear galaxy maintainers,
 
 we have the problem that killing DRMAA jobs before they run (which leads to a 
 deleted_new state) kills our handlers (sometimes?). The job gets sent to the 
 handler, but the handler never acknowledges jobs (local and drmaa), and all 
 jobs submitted afterwards stay in the submitted state.
 
 Normally submission works fine, and so does running.
 
 Has anybody else seen such a problem?
 
 In my universe_wsgi.ini:
 enable_job_recovery = False
 I don't know which other settings could affect this.
 
 Could it be an old DRMAA library, maybe because SGE was updated after the 
 Galaxy installation? Any other ideas?
 
 
 thank you very much,
 ido
 
 
 


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Job handler keeps crashing

2013-01-22 Thread Ido Tamir
Is it possible to backport this onto the latest distribution?
Yes, I'm lazy, but there are also others who will still be updating
within the next few weeks and will run into this problem without
being aware of this fix.

best,
ido


On Jan 21, 2013, at 9:50 PM, Nate Coraor wrote:

 Hi all,
 
 The commit[1] that fixes this is not in the January 11 distribution.  It'll 
 be part of the next distribution.
 
 --nate
 
 [1] 
 https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0
 
 On Jan 21, 2013, at 3:10 PM, Anthonius deBoer wrote:
 
 I have seen this exact same issue. Python just dies without any errors in 
 the log. Using the latest galaxy-dist.
 
 Sent from my iPhone
 
 On Jan 20, 2013, at 8:35 PM, Derrick Lin klin...@gmail.com wrote:
 
 Updating to the 11 Jan 2013 dist does not help with this issue. :(
 
 I checked the database and had a look at the job entries (relevant columns 
 shown below) that handler0 tried to stop before shutting down:
 
 | id   | created             | updated             | tool_id                                                         | version | state       | info                                             | job runner                       | handler  |
 | 3088 | 2013-01-03 14:25:38 | 2013-01-03 14:27:05 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2   | deleted_new | Job output deleted by user before job completed. | drmaa://-V -j n -R y -q intel.q/ | handler0 |
 | 3091 | 2013-01-04 10:52:19 | 2013-01-07 09:14:34 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_findPeaks/0.1.2 | 0.1.2   | deleted_new | Job output deleted by user before job completed. | drmaa://-V -j n -R y -q intel.q/ | handler0 |
 | 3093 | 2013-01-07 22:02:21 | 2013-01-07 22:16:27 | toolshed.g2.bx.psu.edu/repos/kevyin/homer/homer_pos2bed/1.0.0   | 1.0.0   | deleted_new | Job output deleted by user before job completed. | drmaa://-V -j n -R y -q intel.q/ | handler0 |
 
 So basically the job table has several of these entries that are assigned to 
 handler0 and marked as deleted_new. When handler0 comes up, it starts 
 stopping these jobs; after the first job has been stopped, handler0 crashes 
 and dies. That job is then marked as deleted, though.
 
 I think if I manually change the job state from deleted_new to deleted 
 in the db, handler0 will be fine. I am just concerned about how 
 these jobs were created (i.e. assigned to a handler but marked as 
 deleted_new). 
 
 Cheers,
 D
 
 
 On Mon, Jan 21, 2013 at 1:49 PM, Derrick Lin klin...@gmail.com wrote:
 I had a close look at the code in 
 
 galaxy-dist / lib / galaxy / jobs / handler.py
 galaxy-dist / lib / galaxy / jobs / runners / drmaa.py
 
 and found that stopping deleted and deleted_new jobs seems to be a normal 
 routine for the job handler. I could not find any exception that caused the 
 shutdown.
 
 I do notice that in galaxy-dist on bitbucket there is one commit with the 
 comment "Fix shutdown on python = 2.6.2 by calling setDaemon when creating 
 threads (these are still..."; it seems to be relevant?
 
 I will do the update to the 11 Jan release and see if it fixes the issue.
 
 D
 
 
 On Fri, Jan 18, 2013 at 4:03 PM, Derrick Lin klin...@gmail.com wrote:
 Hi guys,
 
 We have updated our Galaxy to the 20 Dec 2012 release. Recently we found that 
 some submitted jobs could not start (they stay gray forever).
 
 We found that this was caused by the job manager sending jobs to a handler 
 (handler0) whose Python process had crashed and died.
 
 From the handler log we found the last messages right before the crash:
 
 galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 Stopping job 3032:
 galaxy.jobs.handler DEBUG 2013-01-18 15:00:34,481 stopping job 3032 in 
 drmaa runner
 
 We restarted Galaxy; handler0 was up for a few seconds and then died again 
 with the same messages, except the job number had moved on to the next one.
 
 We observed that the jobs it was trying to stop are all previous jobs whose 
 status is either deleted or deleted_new.
 
 We have never seen this in the past, so we are wondering if there is a bug in 
 the new release?
 
 Cheers,
 Derrick
 
 

[galaxy-dev] Jobs status not being updated

2013-01-22 Thread Joachim Jacob |VIB|

Hi all,


After updating to the latest Galaxy dist release (and moving our server 
to a new location, but I don't think that can be the reason), the status 
of jobs is not being updated. The box in the history stays grey. 
After restarting Galaxy or the server, the correct status of the job is 
displayed.


Any advice on how to proceed is appreciated.


Thanks,
Joachim




--
Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib



[galaxy-dev] Questions about tool wrapping

2013-01-22 Thread Jean-Frédéric Berthelot
Hi list, 

I am currently writing a Galaxy wrapper (for SortMeRNA [1]) and I have 
several questions. 


== Pre-processing an input == 

SortMeRNA filters rRNA reads against an indexed rRNA database. This database 
can either be one bundled together with SortMeRNA and defined in a relevant 
.loc file, or one from the history. Either way, it must have been processed 
beforehand with another binary (just like makeblastdb/blastp). 

I am not sure of the best design here: 
* Providing another wrapper for this other binary, with users explicitly using 
it beforehand in their workflow? 
* Assuming the databases have been processed outside of Galaxy beforehand? (As 
the tool relies on the file name, this would work for databases in the .loc 
file but not for ones from the history.) 
* Providing an extra-smart wrapper which would check whether the database has 
been indexed and, if not, invisibly index it before running SortMeRNA? (A 
rough sketch of this option follows below.) 
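
For the third option, here is a very rough sketch of what the wrapper's inputs 
and command template could look like. It is purely illustrative: the 
conditional name "database_source" is made up, and the command-line flags are 
placeholders rather than SortMeRNA's real interface.

<!-- Hypothetical sketch: let the user pick a pre-indexed database (.loc file)
     or a FASTA dataset from the history; if it comes from the history, run
     buildtrie on it first. The flags after buildtrie and sortmerna are
     placeholders, not the actual CLI. -->
<inputs>
  <conditional name="database_source">
    <param name="source" type="select" label="rRNA database">
      <option value="indexed">Pre-indexed database (.loc file)</option>
      <option value="history">Dataset from history</option>
    </param>
    <when value="indexed">
      <!-- a select populated from the .loc file would go here -->
    </when>
    <when value="history">
      <param name="db" type="data" format="fasta" label="rRNA database (FASTA)"/>
    </when>
  </conditional>
</inputs>
<command>
  #if $database_source.source == "history"
    buildtrie ... "$database_source.db" &amp;&amp;
  #end if
  sortmerna ...
</command>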


== Error handling == 

I have been trying to use <stdio>/<regex> to catch an error (the one described 
in the section above, actually) and inform the user with a meaningful message. 
From the console debug output, it seems the error is correctly detected, since 
it displays the custom <regex> description: 
galaxy.jobs INFO 2013-01-22 19:02:04,198 Job 202: Fatal error: The database 
${databases} has not been preprocessed using buildtrie before using SortMeRNA. 
But Galaxy only displays a generic message in the failed history item: 

« tool error An error occurred running this job: Unable to finish job » 

Am I missing something, or is <stdio>/<regex> not meant to be used this way? 
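
For reference, a minimal <stdio> block of the sort described here could look 
like the following; the match pattern and description are illustrative 
placeholders rather than the wrapper's actual values:

<stdio>
  <!-- flag the run as failed and attach a custom message when the pattern
       appears on stdout or stderr -->
  <regex match="has not been preprocessed using buildtrie"
         source="both"
         level="fatal"
         description="The database has not been preprocessed with buildtrie before running SortMeRNA."/>
</stdio>

The description does show up in the handler log (as above); the open question 
is why it does not replace the generic "Unable to finish job" message in the 
history panel.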


== Using a previously set choice to set a default value later == 

Main option Alpha (a binary choice A/B) influences the default value of the 
advanced option Beta (a float). Beta is buried in the advanced options 
section; I'd rather not have it right next to Alpha. 

Mirroring constructs seen in the <actions> tag, I was hoping to do something 
like this: 

<conditional name="Alpha">
  <param name="Alpha_selector" type="select" format="text">
    <option value="A">Option A</option>
    <option value="B">Option B</option>
  </param>
</conditional>
[…]
<conditional name="Alpha.Alpha_selector">
  <when value="A">
    <param name="Beta" type="float" value="0.15"/>
  </when>
  <when value="B">
    <param name="Beta" type="float" value="0.25"/>
  </when>
</conditional>
But that does not seem to work. Am I missing something, or is it just not 
possible? 

Alternatively, I looked into setting some hidden variable $default_value in the 
Alpha select, which would be used as <param name="Beta" value="$default_value"/>, 
but that does not seem to work either. 
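
For comparison, the nesting that Galaxy conditionals do support is to place 
the dependent parameter inside each <when> of the same conditional. Below is a 
minimal sketch reusing the names above (the labels are added for 
illustration), although it does mean Beta sits next to Alpha rather than in 
the advanced section:

<conditional name="Alpha">
  <param name="Alpha_selector" type="select" label="Alpha">
    <option value="A">Option A</option>
    <option value="B">Option B</option>
  </param>
  <!-- each branch carries its own copy of Beta with the desired default -->
  <when value="A">
    <param name="Beta" type="float" value="0.15" label="Beta"/>
  </when>
  <when value="B">
    <param name="Beta" type="float" value="0.25" label="Beta"/>
  </when>
</conditional>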


Thanks for your help! 

[1] http://bioinfo.lifl.fr/RNA/sortmerna/ 

-- 
Jean-Frédéric 
Bonsai Bioinformatics group 

Re: [galaxy-dev] Job handler keeps crashing

2013-01-22 Thread Derrick Lin
Hey Ido,

I actually copied & pasted those committed changes into drmaa.py manually
and it serves well as a temporary solution.

D


On Tue, Jan 22, 2013 at 10:51 PM, Ido Tamir ta...@imp.ac.at wrote:

 Is it possible to backport this onto the latest distribution?
 Yes, I'm lazy, but there are also others who will still be updating
 within the next few weeks and will run into this problem without
 being aware of this fix.

 best,
 ido


 On Jan 21, 2013, at 9:50 PM, Nate Coraor wrote:

  Hi all,
 
  The commit[1] that fixes this is not in the January 11 distribution.
  It'll be part of the next distribution.
 
  --nate
 
  [1]
 https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0
 

[galaxy-dev] Upgrade Python to a newer version.

2013-01-22 Thread Luobin Yang
Hi, Galaxy developers,

My Python version is 2.6.3 and I would like to upgrade it to 2.7.3. Is there 
anything that we need to do on the Galaxy side after Python is upgraded to a 
new version?

Thanks,
Luobin

Re: [galaxy-dev] BED files not recognized correctly

2013-01-22 Thread Hans-Rudolf Hotz

Hi Thon


We see the same with our BED files, and I can reproduce it with your 
example. I tend to ignore it, since (so far) it has no influence on the 
functionality when I use the BED file in any other tool.


I only run into trouble when I present working with BED files in our 
introductory courses.


Sorry, no solution.


Regards, Hans-Rudolf



On 01/22/2013 10:35 PM, Anthonius deBoer wrote:

Hi,

I have noticed for a while now that BED files are not recognized
correctly, or at least not parsed out correctly.
I notice that invariably, for a (9-column) BED file the dataset info states
there is 1 region and X comments, where X + 1 is the actual number of regions
in the file.

[screenshot attachment: Capture.JPG]

Here are a few lines from the file:
1   38076950   38077349   utr3:RSPO1   1   -   38077349   38077349   0,0,255
1   38077420   38078426   utr3:RSPO1   1   -   38078426   38078426   0,0,255
1   38078426   38078593   cds:RSPO1    1   -   38078426   38078593   255,0,0
1   38079375   38079564   cds:RSPO1    1   -   38079375   38079564   255,0,0
1   38079855   38080005   cds:RSPO1    1   -   38079855   38080005   255,0,0
1   38082155   38082347   cds:RSPO1    1   -   38082155   38082347   255,0,0
1   38095239   38095333   cds:RSPO1    1   -   38095239   38095333   255,0,0
1   38095333   38095621   utr5:RSPO1   1   -   38095621   38095621   0,0,255


Any ideas why it thinks there are comments in the file and why only one
region?

The file is a regular txt file without DOS (CR/LF) line endings or anything
like that...

It also does not parse out the name, score and strand info, but once I
correct that manually it works; it is just a pain to have to do that
every time...

Thanks,

Thon



