Re: [galaxy-dev] Manage API keys in admin panel

2013-02-28 Thread Rémy Dernat
Ok, so I answer my own question. I created 2 templates and 1 controller.

You need to put the controller in
[galaxy_install_dir]/lib/galaxy/webapps/galaxy/controllers/userskeys.py



It contains the user interface (the User class below):


import glob
import logging
import os
import socket
import string
import random
import pprint

from galaxy import web
from galaxy import util, model
from galaxy.model.orm import and_
from galaxy.security.validate_user_input import validate_email, validate_publicname, validate_password, transform_publicname
from galaxy.util.json import from_json_string, to_json_string
from galaxy.web import url_for
from galaxy.web.base.controller import BaseUIController, UsesFormDefinitionsMixin
from galaxy.web.form_builder import CheckboxField, build_select_field
from galaxy.web.framework.helpers import time_ago, grids

from inspect import getmembers

log = logging.getLogger( __name__ )

require_login_template = """
<p>
    This %s has been configured such that only users who are logged in may
    use it.%s
</p>
<p/>
"""


class UserOpenIDGrid( grids.Grid ):
    use_panels = False
    title = "OpenIDs linked to your account"
    model_class = model.UserOpenID
    template = '/user/openid_manage.mako'
    default_filter = { "openid": "All" }
    default_sort_key = "-create_time"
    columns = [
        grids.TextColumn( "OpenID URL", key="openid",
                          link=( lambda x: dict( action='openid_auth', login_button="Login",
                                                 openid_url=x.openid if not x.provider else '',
                                                 openid_provider=x.provider, auto_associate=True ) ) ),
        grids.GridColumn( "Created", key="create_time", format=time_ago ),
    ]
    operations = [
        grids.GridOperation( "Delete", async_compatible=True ),
    ]
    def build_initial_query( self, trans, **kwd ):
        return trans.sa_session.query( self.model_class ).filter( self.model_class.user_id == trans.user.id )

class User( BaseUIController, UsesFormDefinitionsMixin ):
    user_openid_grid = UserOpenIDGrid()
    installed_len_files = None

    @web.expose
    @web.require_login()
    def api_keys( self, trans, cntrller, uid, **kwd ):
        params = util.Params( kwd )
        message = util.restore_text( params.get( 'message', '' ) )
        status = params.get( 'status', 'done' )
        uid = params.get( 'uid', uid )
        pprint.pprint( uid )
        if params.get( 'new_api_key_button', False ):
            new_key = trans.app.model.APIKeys()
            new_key.user_id = uid
            new_key.key = trans.app.security.get_new_guid()
            trans.sa_session.add( new_key )
            trans.sa_session.flush()
            message = "Generated a new web API key"
            status = "done"
        return trans.fill_template( 'webapps/galaxy/user/ok_admin_api_keys.mako',
                                    cntrller=cntrller,
                                    message=message,
                                    status=status )


    @web.expose
    @web.require_login()
    def all_users( self, trans, cntrller, **kwd ):
        params = util.Params( kwd )
        message = util.restore_text( params.get( 'message', '' ) )
        status = params.get( 'status', 'done' )
        users = []
        for user in trans.sa_session.query( trans.app.model.User ) \
                                    .filter( trans.app.model.User.table.c.deleted == False ) \
                                    .order_by( trans.app.model.User.table.c.email ):
            uid = int( user.id )
            userkey = ""
            for api_user in trans.sa_session.query( trans.app.model.APIKeys ) \
                                            .filter( trans.app.model.APIKeys.user_id == uid ):
                userkey = api_user.key
            users.append( { 'uid': uid, 'email': user.email, 'key': userkey } )
        return trans.fill_template( 'webapps/galaxy/user/list_users.mako',
                                    cntrller=cntrller,
                                    users=users,
                                    message=message,
                                    status=status )


Then put the 2 templates in [galaxy_install_dir]/templates/webapps/galaxy/user/

The first one lists all the users with their keys:
cat templates/webapps/galaxy/user/list_users.mako

<%inherit file="/base.mako"/>

%if message:
    ${render_msg( message, status )}
%endif

%if users:
<div class="toolForm">
    <div class="toolFormTitle">Users informations</div>
    <table>
        <thead><th>UID</th><th>email</th></thead>
        <tbody>
        %for user in users:
            <tr>
                <td>${user['uid']}</td>
                <td>${user['email']}</td>
                <td>${user['key']}</td>
                <td>
                    <form action="${h.url_for( controller='userskeys', action='api_keys', cntrller=cntrller )}" method="POST">
                        <input type="hidden" name="uid" value="${user['uid']}" />
                        <input type="submit" name="new_api_key_button" value="Generate a new key now" />

Re: [galaxy-dev] all the data files in .loc files needed to download ?

2013-02-28 Thread shenwiyn
Hi everyone,
Thank you very much for your help.
I want to download the complete directory of Galaxy data through rsync -avzP
rsync://datacache.g2.bx.psu.edu/indexes/phiX, but it doesn't work.
And then I tried http://dan.g2.bx.psu.edu/ via
http://datacache.g2.bx.psu.edu/ , and I get 502 Bad Gateway.
So what happened? How can I get the data? Is there another way?
Help me please.




shenwiyn

From: Carl Eberhard
Date: 2013-02-27 22:28
To: Ross
CC: shenwiyn; galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] all the data files in .loc files needed to download ?
Hello,


Ross makes an important point: if this installation is for more than one user, 
you should install a more complete database system. This page may help with 
your decisions there:
http://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer?action=show&redirect=Admin%2FConfig%2FPerformance


To your first question, "Is all the data in datacache.g2.bx.psu.edu/indexes
needed?", I would say you should only need the data specific to the subject of
your analyses and the tools you plan to use.


To your second question, "can all of these needed data be downloaded (via
rsync)?" - perhaps you can clarify, please? We don't have any restriction on
the amount you rsync from there, and you're free to download all the data. If a
problem or topic isn't covered in that wiki page, this mailing list is a good
place to ask questions related to data set up.


Thanks, 
Carl



On Wed, Feb 27, 2013 at 5:44 AM, Ross ross.laza...@gmail.com wrote:

Hi, shenwiyn
I'd also add:
"Fourth, replace the distributed default SQLite with PostgreSQL"
to your list if it's for any serious use. SQLite is OK for testing.
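For reference, the switch is mostly a one-line change in universe_wsgi.ini (a sketch; the database name and credentials below are placeholders, not a recommended setup):

database_connection = postgresql://galaxy:secret@localhost:5432/galaxy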





On Wed, Feb 27, 2013 at 9:37 PM, shenwiyn shenw...@gmail.com wrote:

Hello Carl,
Thank you very much for your help. I have some thoughts about installing my
local Galaxy:

First, download the latest source of Galaxy from
https://bitbucket.org/galaxy/galaxy-dist/ , then install it on my local computer.
Second, install the dependency tools mentioned in
http://wiki.galaxyproject.org/Admin/Data%20Integration successfully.
Third, install the needed data and configure the .loc files following the
wiki page http://wiki.galaxyproject.org/Admin/NGS%20Local%20Setup.
Finally, our installation of a local Galaxy is done and it works, on the whole.
Is that right?

And I also have some confusions:
First, how large is the total needed data? Is all the data in
datacache.g2.bx.psu.edu/indexes needed?
Second, can all of these needed data be downloaded (via rsync) with the help of
http://wiki.galaxyproject.org/Admin/Data%20Integration ?
These are the most important questions I am concerned about now. I am very
thankful for your advice.


Thank you very much.
  

 



shenwiyn

From: Carl Eberhard
Date: 2013-02-23 02:42
To: shenwiyn
CC: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] all the data files in .loc files needed to download ?
Hello shenwiyn, 


You may want to have a look at
http://wiki.galaxyproject.org/Admin/Data%20Integration which provides more
information specific to the indices/data you'll need and how to install them.


Along with data management, that page has instructions on how to download (via 
rsync) the same data we use on our main server. It may be a good starting point.


Thanks,
Carl





On Thu, Feb 21, 2013 at 8:58 PM, shenwiyn shenw...@foxmail.com wrote:

Hi Carl Eberhard,
Thank you very much for your help. I have some more questions:
First,
we need to install the needed data, for example the sam_fa_indices.loc file of
SAM Tools:
index   AaegL1  /afs/bx.psu.edu/depot/data/genome/AaegL1/sam_index/AaegL1.fa
index   AgamP3  /afs/bx.psu.edu/depot/data/genome/AgamP3/sam_index/AgamP3.fa
index   ailMel1 /afs/bx.psu.edu/depot/data/genome/ailMel1/sam_index/ailMel1.fa
index   anoCar2 /afs/bx.psu.edu/depot/data/genome/anoCar2/sam_index/anoCar2.fa
index   apiMel2 /afs/bx.psu.edu/depot/data/genome/apiMel2/sam_index/apiMel2.fa
index   apiMel3 /afs/bx.psu.edu/depot/data/genome/apiMel3/sam_index/apiMel3.fa
index   aplCal1 /afs/bx.psu.edu/depot/data/genome/aplCal1/sam_index/aplCal1.fa

so we need to download
AaegL1.fa, AgamP3.fa, ailMel1.fa, anoCar2.fa, apiMel2.fa, apiMel3.fa, and so on, then
install all of this needed data to
/afs/bx.psu.edu/depot/data/genome/aplCal1/sam_index/ , is that right?
Second,
given the needed data mentioned in the .loc files in
http://wiki.galaxyproject.org/Admin/NGS%20Local%20Setup , we need to download
a lot of data; can we get all of it from one website instead of searching for
it on the Internet one file at a time?


Thanks,
shenwiyn



   



From: Carl Eberhard
Date: 2013-02-22 01:20
To: shenwiyn
Subject: Re: [galaxy-dev] all the data files in .loc files needed to download ?
Hello, 
Yes, a local Galaxy requires a few steps in order to install the needed data.


The following wiki page should help get you started:
http://wiki.galaxyproject.org/Admin/NGS%20Local%20Setup



Let me know if you need more information,
Carl





On Tue, Feb 

Re: [galaxy-dev] Macs14 - Invalid Literal for int error

2013-02-28 Thread Peter Briggs
Hi Greg - the issue is that when the wrapper processes the MACS output,
it attempts to turn the 2nd column of every non-comment line (the
"start" field) into an integer (as an aside, it then also subtracts 1
from this value).


Unfortunately the line starting "chr start end ..." isn't commented,
so the integer conversion fails, causing the error you saw. The patch
just traps the integer conversion error.
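As a standalone illustration of that logic (a minimal sketch, not the actual macs14_wrapper.py):

import sys

def xls_to_interval(xls_path, interval_path):
    # Shift the 'start' column to 0-based; comment out any line whose
    # second field is not an integer (e.g. the "chr start end ..." header).
    with open(xls_path) as xls, open(interval_path, "w") as out:
        for line in xls:
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 1:
                try:
                    fields[1] = str(int(fields[1]) - 1)
                except ValueError:
                    fields[0] = "#%s" % fields[0]
            out.write("\t".join(fields) + "\n")

if __name__ == "__main__":
    xls_to_interval(sys.argv[1], sys.argv[2])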


HTH, best wishes, Peter

On 27/02/13 19:16, greg wrote:

Thanks Peter.  I'm running it now after applying your fix.

Any idea what the problem was?

-Greg

On Wed, Feb 27, 2013 at 4:21 AM, Peter Briggs
peter.bri...@manchester.ac.uk wrote:

Hello Greg

It looks like you're running the version of the MACS14 tool from the
toolshed? I think we also ran into this here and I patched the
macs14_wrapper.py thusly to work around it:

diff --git a/macs14/macs142_wrapper.py b/macs14/macs142_wrapper.py
index ccefb10..c0cf099 100644
--- a/macs14/macs142_wrapper.py
+++ b/macs14/macs142_wrapper.py
@@ -37,7 +37,13 @@ def xls_to_interval( xls_file, interval_file, header = None ):
         else:
             fields = line.split( '\t' )
             if len( fields ) > 1:
-                fields[1] = str( int( fields[1] ) - 1 )
+                try:
+                    # Try to convert 'start' to int and shift
+                    fields[1] = str( int( fields[1] ) - 1 )
+                except ValueError:
+                    # Integer conversion failed so comment out
+                    # bad line instead
+                    fields[0] = "#%s" % fields[0]
         out.write( '\t'.join( fields ) )
     out.close()

I'm intending to feed this back to the tool authors once things get a bit
quieter here.

HTH, best wishes

Peter


On 26/02/13 15:52, greg wrote:


Hi guys,

(Sorry for showing up on this list so much, hopefully I'll get
everything running soon!)


On our local galaxy install when I try to run MACS14 like this:
http://snag.gy/RYBBN.jpg

we get this error:

Dataset generation errors

Dataset 74: MACS14 on data 29 and data 24 (peaks: bed)

Tool execution generated the following error message:

Traceback (most recent call last):
  File "/misc/local/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/ryo-tas/macs14/cdd9791c0afa/macs14/macs14_wrapper.py", line 132, in <module>
    if __name__ == "__main__": main()
  File "/misc/local/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/ryo-tas/macs14/cdd9791c0afa/macs14/macs14_wrapper.py", line 94, in main
    xls_to_interval( create_peak_xls_file, options['xls_to_interval']['peaks_file'], header = 'peaks file' )
  File "/misc/local/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/ryo-tas/macs14/cdd9791c0afa/macs14/macs14_wrapper.py", line 40, in xls_to_interval
    fields[1] = str( int( fields[1] ) - 1 )
ValueError: invalid literal for int() with base 10: 'start'



--
Peter Briggs peter.bri...@manchester.ac.uk
Bioinformatics Core Facility University of Manchester
B.1083 Michael Smith Bldg Tel: (0161) 2751482




--
Peter Briggs peter.bri...@manchester.ac.uk
Bioinformatics Core Facility University of Manchester
B.1083 Michael Smith Bldg Tel: (0161) 2751482




[galaxy-dev] purging datasets doesn't free up disk space

2013-02-28 Thread Sarah Diehl
Hello,

our Galaxy server is running low on disk space, so I asked all users to 
(permanently) delete old histories and datasets. One specific user alone 
managed to reduce his space usage by 2 TB. That's at least what the Galaxy 
server says. Afterwards I additionally ran the following scripts:

delete_userless_histories.sh -d 1 -r -f
purge_histories.sh -d 1 -r -f
purge_libraries.sh -d 1 -r -f
purge_folders.sh -d 1 -r -f
python cleanup_datasets.py universe_wsgi.ini -d 200 -6 -r -f
purge_datasets.sh -d 1 -r -f

Purging the histories took very long (about a day) and the log of deleted 
histories is huge. Purging the datasets also took some time. However, my disk 
usage is still the same as before. It wasn't reduced at all.

Did I miss some important step or some waiting time? Any help would be 
appreciated.

Thanks,
Sarah


Re: [galaxy-dev] bowtie-wrapper

2013-02-28 Thread Jeremy Goecks
Hmm, I don't see a patch for your changes to the wrapper script. Can you please 
bundle up and send along your tool definition + wrapper script changes and I'll 
add it to the card?

Thanks,
J.

On Feb 28, 2013, at 4:47 AM, Peter Briggs wrote:

 Hi Jeremy - that's awesome, thanks!
 
 One thing: not sure if I read this right (I'm a Trello newbie), however it
 looks like only the patch to the tool XML was attached to the card. This
 change also requires a patch to the python wrapper script (it should have
 been attached to my earlier message).
 
 Apologies if I missed something, thanks again
 
 Best wishes, Peter
 
 On 27/02/13 18:33, Jeremy Goecks wrote:
 Thanks Peter. I've added your patch to the Trello card tracking 
 incorporation of community contributions to the Bowtie wrapper:
 
 https://trello.com/c/TdYxdbkm
 
 Best,
 J.
 
 On Feb 27, 2013, at 4:35 AM, Peter Briggs wrote:
 
 Hello Alexander
 
 I've made some changes to our local copy of the bowtie tool files 
 (bowtie_wrapper.xml and bowtie_wrapper.py) to give the option of capturing 
 bowtie's stderr output and adding it as an additional history item (here 
 the statistics in this output is used as input to a local tool).
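 A generic sketch of the stderr-capturing idea (the command string and output filename below are placeholders taken from this thread; this is not Peter's actual patch):

 import subprocess

 bowtie_cmd = "bowtie ~/dm3 -v 2 -k 5 --best --strata -S -t reads.fastq reads.sam"
 with open("bowtie_stats.txt", "w") as stats:
     # bowtie writes its alignment statistics to stderr; keep them in a
     # file so a wrapper can hand them back as an extra history item
     subprocess.check_call(bowtie_cmd, shell=True, stderr=stats)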
 
 I've attached patches to the tool in case they're any use to you.
 
 (As an aside and more generally: I'm not sure how to manage these local 
 customisations in the longer term - what do other people on this list do?)
 
 HTH, best wishes, Peter
 
 On 25/02/13 11:29, Alexander Kurze wrote:
 Hello,
 
 I am using the bowtie-wrapper on my locally installed Galaxy server to
 align reads. However, I am missing the stats read-out. Is there any
 possibility to include statistics about unaligned reads?
 
 If I use bowtie via the command line I get the following output:
 
 
 bowtie ~/dm3 -v 2 -k 5 --best --strata -S -t reads.fastq reads.sam
 End-to-end 2/3-mismatch full-index search: 01:00:21
 # reads processed: 12084153
 # reads with at least one reported alignment: 9391748 (77.72%)
 # reads that failed to align: 2692405 (22.28%)
 Reported 30293838 alignments to 1 output stream(s)
 
 
 The output should normally be saved in stderr, but unfortunately
 stderr is somehow deleted after the alignment job is done when bowtie runs
 under Galaxy.
 
 Any idea how I can still access the stats?
 
 Thanks,
 
 Alex
 
 --
 Alexander Kurze, DPhil
 University of Oxford
 Department of Biochemistry
 South Parks Road
 Oxford, OX1 3QU
 United Kingdom
 
 Tel: +44 1865 613 230
 Fax:+44 1865 613 341
 
 
 
 
 --
 Peter Briggs peter.bri...@manchester.ac.uk
 Bioinformatics Core Facility University of Manchester
 B.1083 Michael Smith Bldg Tel: (0161) 2751482
 
 
 bowtie_wrapper.xml.patch
 
 
 -- 
 Peter Briggs peter.bri...@manchester.ac.uk
 Bioinformatics Core Facility University of Manchester
 B.1083 Michael Smith Bldg Tel: (0161) 2751482
 
 




Re: [galaxy-dev] purging datasets doesn't free up disk space

2013-02-28 Thread Hans-Rudolf Hotz

Hi Sarah


Maybe the histories and/or datasets are shared with users who did not
delete them?


Also, have you looked at the log files? What is written in the log files 
for the purge_datasets.sh step? Do you have lines like:


Removing disk, file  ***/database/files/008/dataset_8271.dat

and at the very end, something like:

Purged 82 datasets
Freed disk space:  647719608
Elapsed time:  1.51890397072


And I am confused by the way you call the scripts. This might be
explained by different Galaxy versions; however:

delete_userless_histories.sh is in my case a wrapper executing
'cleanup_datasets.py' with the options '-d 90 -1'. I do not provide the
options when I call it, and the same is true for all the other scripts.
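For reference, such a wrapper is essentially a one-liner around cleanup_datasets.py (a sketch based on the description above; exact paths and options vary per install):

python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 90 -1 $@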




Regards, Hans-Rudolf

On 02/28/2013 01:51 PM, Sarah Diehl wrote:

Hello,

our Galaxy server is running low on disk space, so I asked all users to 
(permanently) delete old histories and datasets. One specific user alone 
managed to reduce his space usage by 2 TB. That's at least what the Galaxy 
server says. Afterwards I additionally ran the following scripts:

delete_userless_histories.sh -d 1 -r -f
purge_histories.sh -d 1 -r -f
purge_libraries.sh -d 1 -r -f
purge_folders.sh -d 1 -r -f
python cleanup_datasets.py universe_wsgi.ini -d 200 -6 -r -f
purge_datasets.sh -d 1 -r -f

Purging the histories took very long (about a day) and the log of deleted 
histories is huge. Purging the datasets also took some time. However, my disk 
usage is still the same as before. It wasn't reduced at all.

Did I miss some important step or some waiting time? Any help would be 
appreciated.

Thanks,
Sarah




Re: [galaxy-dev] access to api when using apache ldap auth

2013-02-28 Thread Nate Coraor
Hi Brad,

I think the method that Thon deBoer is using here is probably the best way to 
do it:

http://dev.list.galaxyproject.org/Setting-up-a-proxy-with-authentication-while-waiving-this-for-API-calls-td4658497.html

--nate

On Feb 27, 2013, at 5:34 PM, Langhorst, Brad wrote:

 
 I need to access the api using an api key… 
 Does anybody have a working apache configuration for that? A search at 
 galaxyproject.org/search didn't turn anything up.
 
 I tried this, but it causes the history to fail to update:
 
<Location "/">
    Order allow,deny
    allow from all

    AuthType Basic
    AuthName "NEB Credentials"
    AuthBasicProvider ldap
    AuthzLDAPAuthoritative off
    AuthLDAPBindDN @
    AuthLDAPBindPassword x
    AuthLDAPURL ldap://host…?sAMAccountName
    require valid-user

    RequestHeader set REMOTE_USER %{AUTHENTICATE_sAMAccountName}e

    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    SetEnvIfNoCase Request_URI \.(?:t?gz|zip|bz2)$ no-gzip dont-vary
    SetEnvIfNoCase Request_URI /history/export_archive no-gzip dont-vary

    XSendFile on
    XSendFileAllowAbove on
</Location>

<Location "/api">
    Satisfy Any
 

Re: [galaxy-dev] purging datasets doesn't free up disk space

2013-02-28 Thread Sarah Diehl
Hi Hans,

thanks for your help. I didn't realize the sh scripts came with their own 
options. I just misunderstood that in the documentation. It seems to have 
worked though, the log file says:

# 2013-02-28 11:56:20 - Handling stuff older than 1 days
...
Purged 10097 datasets
Freed disk space:  9515493164336
Elapsed time:  5669.43398285

So the freed disk space should be around 9 TB, right?

Best regards,
Sarah


- Original Message -
From: Hans-Rudolf Hotz h...@fmi.ch
To: Sarah Diehl di...@ie-freiburg.mpg.de
Cc: galaxy-dev@lists.bx.psu.edu List galaxy-dev@lists.bx.psu.edu
Sent: Thursday, February 28, 2013 4:18:49 PM
Subject: Re: [galaxy-dev] purging datasets doesn't free up disk space

Hi Sarah


Maybe the histories and/or datasets are shared with users who did not
delete them?

Also, have you looked at the log files? What is written in the log files 
for the purge_datasets.sh step? Do you have lines like:

Removing disk, file  ***/database/files/008/dataset_8271.dat

and at the very end, something like:

Purged 82 datasets
Freed disk space:  647719608
Elapsed time:  1.51890397072


And I am confused by the way you call the scripts. This might be
explained by different Galaxy versions; however:

delete_userless_histories.sh is in my case a wrapper executing
'cleanup_datasets.py' with the options '-d 90 -1'. I do not provide the
options when I call it, and the same is true for all the other scripts.



Regards, Hans-Rudolf

On 02/28/2013 01:51 PM, Sarah Diehl wrote:
 Hello,

 our Galaxy server is running low on disk space, so I asked all users to 
 (permanently) delete old histories and datasets. One specific user alone 
 managed to reduce his space usage by 2 TB. That's at least what the Galaxy 
 server says. Afterwards I additionally ran the following scripts:

 delete_userless_histories.sh -d 1 -r -f
 purge_histories.sh -d 1 -r -f
 purge_libraries.sh -d 1 -r -f
 purge_folders.sh -d 1 -r -f
 python cleanup_datasets.py universe_wsgi.ini -d 200 -6 -r -f
 purge_datasets.sh -d 1 -r -f

 Purging the histories took very long (about a day) and the log of deleted 
 histories is huge. Purging the datasets also took some time. However, my disk 
 usage is still the same as before. It wasn't reduced at all.

 Did I miss some important step or some waiting time? Any help would be 
 appreciated.

 Thanks,
 Sarah



Re: [galaxy-dev] purging datasets doesn't free up disk space

2013-02-28 Thread Sarah Diehl
Ok, sorry for having bothered you again with my stupidity... I actually have a
test instance of Galaxy on the same server with the database directory
hard-linked. After I deleted that one, the space was freed *d'oh*.



- Original Message -
From: Sarah Diehl di...@ie-freiburg.mpg.de
To: Hans-Rudolf Hotz h...@fmi.ch, galaxy-dev@lists.bx.psu.edu List 
galaxy-dev@lists.bx.psu.edu
Sent: Thursday, February 28, 2013 5:07:58 PM
Subject: Re: [galaxy-dev] purging datasets doesn't free up disk space

Hi Hans,

thanks for your help. I didn't realize the sh scripts came with their own 
options. I just misunderstood that in the documentation. It seems to have 
worked though, the log file says:

# 2013-02-28 11:56:20 - Handling stuff older than 1 days
...
Purged 10097 datasets
Freed disk space:  9515493164336
Elapsed time:  5669.43398285

So the freed disk space should be around 9 TB, right?

Best regards,
Sarah


- Original Message -
From: Hans-Rudolf Hotz h...@fmi.ch
To: Sarah Diehl di...@ie-freiburg.mpg.de
Cc: galaxy-dev@lists.bx.psu.edu List galaxy-dev@lists.bx.psu.edu
Sent: Thursday, February 28, 2013 4:18:49 PM
Subject: Re: [galaxy-dev] purging datasets doesn't free up disk space

Hi Sarah


Maybe the histories and/or datasets are shared with users who did not
delete them?

Also, have you looked at the log files? What is written in the log files 
for the purge_datasets.sh step? Do you have lines like:

Removing disk, file  ***/database/files/008/dataset_8271.dat

and at the very end, something like:

Purged 82 datasets
Freed disk space:  647719608
Elapsed time:  1.51890397072


And I am confused by the way you call the scripts. This might be
explained by different Galaxy versions; however:

delete_userless_histories.sh is in my case a wrapper executing
'cleanup_datasets.py' with the options '-d 90 -1'. I do not provide the
options when I call it, and the same is true for all the other scripts.



Regards, Hans-Rudolf

On 02/28/2013 01:51 PM, Sarah Diehl wrote:
 Hello,

 our Galaxy server is running low on disk space, so I asked all users to 
 (permanently) delete old histories and datasets. One specific user alone 
 managed to reduce his space usage by 2 TB. That's at least what the Galaxy 
 server says. Afterwards I additionally ran the following scripts:

 delete_userless_histories.sh -d 1 -r -f
 purge_histories.sh -d 1 -r -f
 purge_libraries.sh -d 1 -r -f
 purge_folders.sh -d 1 -r -f
 python cleanup_datasets.py universe_wsgi.ini -d 200 -6 -r -f
 purge_datasets.sh -d 1 -r -f

 Purging the histories took very long (about a day) and the log of deleted 
 histories is huge. Purging the datasets also took some time. However, my disk 
 usage is still the same as before. It wasn't reduced at all.

 Did I miss some important step or some waiting time? Any help would be 
 appreciated.

 Thanks,
 Sarah



Re: [galaxy-dev] first install, MySQL date

2013-02-28 Thread Nate Coraor
On Feb 28, 2013, at 7:37 AM, Arnau Bria wrote:

 Hi all,
 
 my name is Arnau Bria and I work at CRG, in BCN.
 I'm a sysadmin and I'm helping one user to install/configure Galaxy and
 to run jobs on our SGE cluster. I think this is the correct list for
 asking SGE/MySQL questions; if not, let me apologize in advance.

 I have a question about MySQL dates. The system date is correct and uses
 CET (I'm in Spain), and MySQL is configured to use the system's timezone:
 
 date
 Thu Feb 28 13:38:10 CET 2013
 
 SELECT @@global.time_zone, @@session.time_zone;
 ++-+
 | @@global.time_zone | @@session.time_zone |
 ++-+
 | SYSTEM | SYSTEM  |
 ++-+
 1 row in set (0.02 sec)
 
 
 but SGE job dates are wrong:
 
 
 create_time
 2013-02-28 12:37:48
 
 notice it shows 12:37 and it should be 13:37.

 Is this important? May I change Galaxy's timezone in some way?

Hi Arnau,

The times are intentionally stored in the database in UTC.  Times are rarely 
shown in the interface, but when they are, they should be properly adjusted 
using the system's currently set timezone.
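A quick sanity check of the numbers above (a minimal sketch; CET is UTC+1 in winter):

from datetime import datetime, timedelta

utc_create_time = datetime(2013, 2, 28, 12, 37, 48)  # as stored by Galaxy
print(utc_create_time + timedelta(hours=1))          # 2013-02-28 13:37:48 CET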

Thanks,
--nate

 
 TIA,
 Arnau




[galaxy-dev] blend4j

2013-02-28 Thread Marc Logghe
Hi all,
Is this the correct forum to post blend4j issues? Apologies if this is not the
case.

I was trying to run a custom workflow via blend4j, according to 
https://github.com/jmchilton/blend4j
In my hands, the call to workflowDetails.getInputs() returns an empty Map. I
believe the corresponding REST request looks like:
/galaxy/api/workflows/<workflow id>?key=<API key>
In the JSON response, the 'inputs' attribute is empty indeed ({"url":
"/galaxy/api/workflows/f09437b8822035f7", "inputs": {}, ...).
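For reference, the equivalent raw request outside blend4j (a sketch; the host and key are placeholders, and it assumes the requests library):

import requests

resp = requests.get(
    "http://YOUR_GALAXY_HOST/galaxy/api/workflows/f09437b8822035f7",
    params={"key": "YOUR_API_KEY"},
)
print(resp.json()["inputs"])  # prints {} in the case described above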
I do not understand why this is the case, because 2 steps require inputs to be
set at runtime. Via the Galaxy web interface those required parameters can be
set, and the workflow runs smoothly.
Or has the term 'inputs' nothing to do with 'parameters'?

Regards,
Marc



Re: [galaxy-dev] bowtie-wrapper

2013-02-28 Thread Jeremy Goecks
Attached to the card, thanks Peter.

J.

On Feb 28, 2013, at 11:14 AM, Peter Briggs wrote:

 Hi Jeremy
 No problem - zip file with the two patches is attached, please let me know if 
 there's a problem.
 Best wishes
 Peter
 
 On 28/02/13 15:07, Jeremy Goecks wrote:
 Hmm, I don't see a patch for your changes to the wrapper script. Can you 
 please bundle up and send along your tool definition + wrapper script 
 changes and I'll add it to the card?
 
 Thanks,
 J.
 
 On Feb 28, 2013, at 4:47 AM, Peter Briggs wrote:
 
 Hi Jeremy - that's awesome, thanks!
 
 One thing: not sure if I read this right (I'm a Trello newbie), however it
 looks like only the patch to the tool XML was attached to the card.
 This change also requires a patch to the python wrapper script (it should
 have been attached to my earlier message).
 
 Apologies if I missed something, thanks again
 
 Best wishes, Peter
 
 On 27/02/13 18:33, Jeremy Goecks wrote:
 Thanks Peter. I've added your patch to the Trello card tracking 
 incorporation of community contributions to the Bowtie wrapper:
 
 https://trello.com/c/TdYxdbkm
 
 Best,
 J.
 
 On Feb 27, 2013, at 4:35 AM, Peter Briggs wrote:
 
 Hello Alexander
 
 I've made some changes to our local copy of the bowtie tool files 
 (bowtie_wrapper.xml and bowtie_wrapper.py) to give the option of 
 capturing bowtie's stderr output and adding it as an additional history 
 item (here the statistics in this output is used as input to a local 
 tool).
 
 I've attached patches to the tool in case they're any use to you.
 
 (As an aside and more generally: I'm not sure how to manage these local 
 customisations in the longer term - what do other people on this list do?)
 
 HTH, best wishes, Peter
 
 On 25/02/13 11:29, Alexander Kurze wrote:
 Hello,
 
 I am using the bowtie-wrapper on my locally installed Galaxy server to
 align reads. However, I am missing the stats read-out. Is there any
 possibility to include statistics about unaligned reads?
 
 If I use bowtie via the command line I get the following output:
 
 
 bowtie ~/dm3 -v 2 -k 5 --best --strata -S -t reads.fastq reads.sam
 End-to-end 2/3-mismatch full-index search: 01:00:21
 # reads processed: 12084153
 # reads with at least one reported alignment: 9391748 (77.72%)
 # reads that failed to align: 2692405 (22.28%)
 Reported 30293838 alignments to 1 output stream(s)
 
 
 The output should normally be saved in stderr, but unfortunately
 stderr is somehow deleted after the alignment job is done when bowtie runs
 under Galaxy.
 
 Any idea how I can still access the stats?
 
 Thanks,
 
 Alex
 
 --
 Alexander Kurze, DPhil
 University of Oxford
 Department of Biochemistry
 South Parks Road
 Oxford, OX1 3QU
 United Kingdom
 
 Tel: +44 1865 613 230
 Fax:+44 1865 613 341
 
 
 
 
 --
 Peter Briggs peter.bri...@manchester.ac.uk
 Bioinformatics Core Facility University of Manchester
 B.1083 Michael Smith Bldg Tel: (0161) 2751482
 
 
 bowtie_wrapper.xml.patch
 
 
 --
 Peter Briggs peter.bri...@manchester.ac.uk
 Bioinformatics Core Facility University of Manchester
 B.1083 Michael Smith Bldg Tel: (0161) 2751482
 
 
 
 
 -- 
 Peter Briggs peter.bri...@manchester.ac.uk
 Bioinformatics Core Facility University of Manchester
 B.1083 Michael Smith Bldg Tel: (0161) 2751482
 
 
 bowtie_wrapper_patches_pjbriggs.zip




Re: [galaxy-dev] blend4j

2013-02-28 Thread John Chilton
This is not a blend4j issue; it is more of a Galaxy API issue. I think
it is essentially a known problem that you cannot specify runtime
parameters for a workflow, only data inputs. Here is the relevant piece of
Galaxy code:

inputs = {}
for step in latest_workflow.steps:
    if step.type == 'data_input':
        inputs[step.id] = {'label': step.tool_inputs['name'], 'value': ""}
    else:
        pass
        # Eventually, allow regular tool parameters to be
        # inserted and modified at runtime.
        # p = step.get_required_parameters()

In the meantime, I think the workaround is to not define these as workflow
parameters but instead hard-code some values, and then pull down,
rewrite, and re-upload a new workflow for each execution:

WorkflowsClient client = galaxyInstance.getWorkflowsClient();
String workflowJson = client.exportWorkflow(originalWorkflowId);
// modify workflow json
Workflow importedWorkflow = client.importWorkflow(workflowJson);
String importedWorkflowId = importedWorkflow.getId();

It is not pretty, obviously. They are pretty good about responding to
API pull requests, so if you have the time to fix the Galaxy code
itself, I think everyone would appreciate that fix.

-John


On Thu, Feb 28, 2013 at 3:24 PM, Marc Logghe marc.log...@ablynx.com wrote:
 Hi all,

 Is this the correct forum to post blend4j issues ? Apologies if this is not
 the case.



 I was trying to run a custom workflow via blend4j, according to
 https://github.com/jmchilton/blend4j

 In my hands, the call to workflowDetails.getInputs() returns an empty Map. I
 believe the corresponding REST request looks like:

 /galaxy/api/workflows/<workflow id>?key=<API key>

 In the JSON response, the 'inputs' attribute is empty indeed ({"url":
 "/galaxy/api/workflows/f09437b8822035f7", "inputs": {}, ...).

 I do not understand why this is the case, because 2 steps require inputs to
 be set at runtime. Via the Galaxy web interface those required parameters
 can be set, and the workflow runs smoothly.

 Or has the term 'inputs' nothing to do with 'parameters'?



 Regards,

 Marc

 




Re: [galaxy-dev] blend4j

2013-02-28 Thread Marc Logghe


-Original Message-
From: jmchil...@gmail.com [mailto:jmchil...@gmail.com] On Behalf Of John Chilton
Sent: Thursday, February 28, 2013 11:27 PM
To: Marc Logghe
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] blend4j

This is not a blend4j issue; it is more of a Galaxy API issue. I think
it is essentially a known problem that you cannot specify runtime
parameters for a workflow, only data inputs. Here is the relevant piece of
Galaxy code:

inputs = {}
for step in latest_workflow.steps:
    if step.type == 'data_input':
        inputs[step.id] = {'label': step.tool_inputs['name'], 'value': ""}
    else:
        pass
        # Eventually, allow regular tool parameters to be
        # inserted and modified at runtime.
        # p = step.get_required_parameters()

In the meantime, I think the workaround is to not define these as workflow
parameters but instead hard-code some values, and then pull down,
rewrite, and re-upload a new workflow for each execution:

WorkflowsClient client = galaxyInstance.getWorkflowsClient();
String workflowJson = client.exportWorkflow(originalWorkflowId);
// modify workflow json
Workflow importedWorkflow = client.importWorkflow(workflowJson);
String importedWorkflowId = importedWorkflow.getId();

It is not pretty, obviously. They are pretty good about responding to
API pull requests, so if you have the time to fix the Galaxy code
itself, I think everyone would appreciate that fix.

-John


Hi John
Thanks for the quick answer; I'll give it a try.
Regarding the fix: you need to give me some time, I started with a Python
tutorial only 2 days ago ;-)

Regards,
Marc






[galaxy-dev] cleanup script fails on dataset_instance None

2013-02-28 Thread Anthonius deBoer
Hi,

I have some problems with the cleanup scripts... I seem to have a dataset_instance that is "None", and if it tries to delete that you obviously get an error. (See below... I added the print statement, but the line is the same.)

I wonder how I could get a dataset with a None definition, and how can I clean up the database for this?

Thanks,
Thon

sh purge_libraries.sh
Traceback (most recent call last):
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 527, in <module>
    if __name__ == "__main__": main()
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 122, in main
    purge_libraries( app, cutoff_time, options.remove_from_disk, info_only = options.info_only, force_retry = options.force_retry )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 217, in purge_libraries
    _purge_folder( library.root_folder, app, remove_from_disk, info_only = info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 500, in _purge_folder
    _purge_folder( sub_folder, app, remove_from_disk, info_only = info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 500, in _purge_folder
    _purge_folder( sub_folder, app, remove_from_disk, info_only = info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 500, in _purge_folder
    _purge_folder( sub_folder, app, remove_from_disk, info_only = info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 498, in _purge_folder
    _purge_dataset_instance( ldda, app, remove_from_disk, info_only = info_only ) # mark a DatasetInstance as deleted, clear associated files, and mark the Dataset as deleted if it is deletable
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 376, in _purge_dataset_instance
    print('dataset_instance.id: {0}'.format(dataset_instance.id))
AttributeError: 'NoneType' object has no attribute 'id'
(galaxy_env)[svcgalaxy@srv151 cleanup_datasets]$ sh purge_libraries.sh

[galaxy-dev] Coming soon: BOSC/Broad Hackathon, SciPy Bioinformatics, BOSC Codefest

2013-02-28 Thread Brad Chapman

Hi all; 
There are some upcoming coding events and conferences of interest to open source
biology programmers:

- BOSC/Broad Interoperability Hackathon -- This is a two day coding session at
  the Broad Institute in Cambridge, MA on April 7-8 focused on improving tool
  interoperability.
  
  Sign up and details: http://j.mp/XJT6ew
  
- SciPy 2013 -- The Scientific Python conference is June 26-27 in Austin and has
  a Bioinformatics mini-symposia this year. They're doing some great work like
  IPython, NumPy, SciPy and scikit-learn; and this is a nice opportunity to reach a
  new set of like-minded programmers and expand the open source bioinformatics
  community.
  
  Bioinformatics mini-symposia: http://j.mp/Z4xxXB
  Abstract details: http://conference.scipy.org/scipy2013/about.php
  
- Codefest at the Bioinformatics Open Source Conference -- This year BOSC is
  taking place in Berlin from July 19-20 and we'll have a two day coding session
  before the conference. This is the 4th year of Codefests and they've proven to
  be a productive and fun time to work collectively on open source projects.

  Sign up and details: http://www.open-bio.org/wiki/Codefest_2013
  BOSC conference: http://www.open-bio.org/wiki/BOSC_2013

Here are the key dates for the events and abstracts:

March   20, 2013: SciPy abstracts due
April  7-8, 2013: BOSC/Broad Interoperability Hackathon, Cambridge, MA
April   12, 2013: BOSC abstracts due
June 24-29, 2013: SciPy in Austin, TX
July 17-18, 2013: Codefest 2013, Berlin
July 19-20, 2013: BOSC 2013, Berlin

Looking forward to seeing everyone this spring and summer for plenty of fun
science and code,
Brad


[galaxy-dev] Error on the sandbox toolshed when updating files

2013-02-28 Thread Peter Waltman
Hi -

I was only trying to upload a new gzipped tarfile with new files to a
repository that I own on the sandbox toolshed server, and I get the following
error:

Server Error
URL: http://testtoolshed.g2.bx.psu.edu/upload/upload?repository_id=f5109f746542d96d
Module paste.exceptions.errormiddleware:144 in __call__
  app_iter = self.application(environ, sr_checker)
Module paste.debug.prints:106 in __call__
  environ, self.app)
Module paste.wsgilib:543 in intercept_output
  app_iter = application(environ, replacement_start_response)
Module paste.recursive:84 in __call__
  return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.base:128 in __call__
  return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:184 in handle_request
  body = method( trans, kwargs )
Module galaxy.web.framework:94 in decorator
  return func( self, trans, *args, **kwargs )
Module galaxy.webapps.tool_shed.controllers.upload:108 in upload
  self.upload_tar( trans, repository, tar, uploaded_file, upload_point, remove_repo_files_not_in_tar, commit_message, new_repo_alert )
Module galaxy.webapps.tool_shed.controllers.upload:283 in upload_tar
  return self.__handle_directory_changes(trans, repository, full_path, filenames_in_archive, remove_repo_files_not_in_tar, new_repo_alert, commit_message, undesirable_dirs_removed, undesirable_files_removed)
Module galaxy.webapps.tool_shed.controllers.upload:345 in __handle_directory_changes
  commands.commit( repo.ui, repo, full_path, user=trans.user.username, message=commit_message )
Module mercurial.commands:1279 in commit
  node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
Module mercurial.cmdutil:1294 in commit
  scmutil.match(repo[None], pats, opts), opts)
Module mercurial.commands:1277 in commitfunc
  match, editor=e, extra=extra)
Module mercurial.localrepo:1199 in commit
  ret = self.commitctx(cctx, True)
Module mercurial.localrepo:1270 in commitctx
  p2.manifestnode(), (new, drop))
Module mercurial.manifest:197 in add
  cachedelta = (self.rev(p1), addlistdelta(addlist, delta))
Module mercurial.manifest:124 in addlistdelta
  addlist[start:end] = array.array('c', content)
TypeError: array item must be char

Is there a better way to upload updated files to those repositories?

-- 
Peter Waltman, Ph.D.
pwalt...@ucsc.edu
617.347.187

[galaxy-dev] March 2013 Galaxy Update

2013-02-28 Thread Dave Clements
Hello all,

The March 2013 Galaxy Update is now available.
See http://wiki.galaxyproject.org/GalaxyUpdates/2013_03. Highlights
include:

- GCC2013 (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#GCC2013) early
  registration (http://wiki.galaxyproject.org/Events/GCC2013/Register) and oral
  presentation and poster abstract submission
  (http://wiki.galaxyproject.org/Events/GCC2013/Abstracts) are now open, and we
  have several new sponsors
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Sponsorships)!
- The March 20 GalaxyAdmins web meetup
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#March_GalaxyAdmins_Web_Meetup)
  will feature Hailiang (Leon) Mei and David van Enckevort speaking on NBIC
  Galaxy (http://galaxy.nbic.nl/) at SURFsara's HPC cloud
  (https://www.surfsara.nl/)
- A new public Galaxy server
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#New_Public_Galaxy_Servers)
  in Costa Rica
- New papers (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#New_Papers)
- Open positions
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Who.27s_Hiring) at four
  different institutions
- Other upcoming events and deadlines
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Other_Upcoming_Events_and_Deadlines)
- Galaxy distributions
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Galaxy_Distributions)
- Tool Shed contributions
  (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Tool_Shed_Contributions)
- Other news (http://wiki.galaxyproject.org/GalaxyUpdates/2013_03#Other_News)

If you have anything you would like to see in the April Galaxy Update
(http://wiki.galaxyproject.org/GalaxyUpdates), please let us know.

Thanks,

Dave C

--
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/

[galaxy-dev] Error when upgrading the database to version 110

2013-02-28 Thread Derrick Lin
Hi guys,

I was given the following error when upgrading the database from 109 to 110:

0109_add_repository_dependency_tables DEBUG 2013-03-01 15:14:15,554
Creating repository_repository_dependency_association table failed:
(OperationalError) (1059, Identifier name
'ix_repository_repository_dependency_association_tool_shed_repository_id'
is too long) u'CREATE INDEX
ix_repository_repository_dependency_association_tool_shed_repository_id ON
repository_repository_dependency_association (tool_shed_repository_id)' ()
0109_add_repository_dependency_tables DEBUG 2013-03-01 15:14:15,554
Creating repository_repository_dependency_association table failed:
(OperationalError) (1059, Identifier name
'ix_repository_repository_dependency_association_tool_shed_repository_id'
is too long) u'CREATE INDEX
ix_repository_repository_dependency_association_tool_shed_repository_id ON
repository_repository_dependency_association (tool_shed_repository_id)' ()

So I guess the indexes weren't created. I am using MySQL; is this a big deal
for Galaxy?
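(For what it's worth, MySQL limits identifier names to 64 characters, which is what this CREATE INDEX trips over. If the index is wanted anyway, it can be created by hand under a shorter name, e.g. via SQLAlchemy; the connection URL and the shortened index name below are assumptions for illustration, not Galaxy's own migration code:)

from sqlalchemy import create_engine, text

engine = create_engine("mysql://galaxy:secret@localhost/galaxy")
conn = engine.connect()
# Same index as the migration, but under a name that fits MySQL's 64-character limit
conn.execute(text(
    "CREATE INDEX ix_rrda_tool_shed_repository_id "
    "ON repository_repository_dependency_association (tool_shed_repository_id)"
))
conn.close()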

Cheers,
Derrick

Re: [galaxy-dev] Problem installing Deseq from toolshed

2013-02-28 Thread Mahtab Mirmomeni
Still waiting on a reply! Any suggestions are appreciated.


On Tue, Feb 26, 2013 at 1:55 PM, Mahtab Mirmomeni 
m.mirmom...@student.unimelb.edu.au wrote:

 Hi all,

 I have installed Deseq (by vipints) from the main Galaxy toolshed and have
 followed the make instructions, but I'm still unable to run it inside
 Galaxy, and I get the following error.

 (Path is the path to our local galaxy instance)

 error: get_read_counts: 
 Path/shed_tools/toolshed.g2.bx.psu.edu/repos/vipints/deseq_hts/2b3bb3348076/deseq_hts/deseq-hts_1.0/mex/get_reads.mex:
  failed to load: 
 /mnt/all/cloudman/galaxy/clare/shed_tools/toolshed.g2.bx.psu.edu/repos/vipints/deseq_hts/2b3bb3348076/deseq_hts/deseq-hts_1.0/mex/get_reads.mex:
  undefined symbol: bam_index_destroy
 error: called from:
 error:   
 Path/shed_tools/toolshed.g2.bx.psu.edu/repos/vipints/deseq_hts/2b3bb3348076/deseq_hts/deseq-hts_1.0/src/get_read_counts.m
  at line 97, column 34
 starting Octave failed

 I should add that I can run octave from the command line and it seems to be
 working.

 Any suggestions is much appreciated.

 Thanks

 Mahtab



[galaxy-dev] Error while installing DESeq-hts

2013-02-28 Thread Philipe Moncuquet
Hi,

I have tried to follow the instructions provided with this installation:

a) Run ./setup_deseq-hts.sh and set up paths and configuration options for
DESeq-hts.

I successfully ran the setup script, but are there other paths and
configuration settings that need to be set apart from those done by this script?

b) Inside the mex folder execute the make file to create platform dependent
.mex files
cd mex/Makefile
make [interpreter]
make octave for octave
make matlab for matlab
make all for octave and matlab

When I run make octave I get the following error message :

 /usr/bin/mkoctfile -g --mex get_reads.cpp get_reads_direct.cpp
mex_input.cpp read.cpp -I/mnt/galaxyTools/tools/samtools/0.1.16
-L/mnt/galaxyTools/tools/samtools/0.1.16 -lbam -lz -lcurses
get_reads_direct.cpp:14:17: fatal error: sam.h: No such file or directory
compilation terminated.
make: *** [get_reads.mex] Error 1

It feels like this is linked to samtools. When browsing the web I found
information about the SAMTOOLS_ROOT env variable, but changing it doesn't
solve my problem.

Has anyone else encountered this kind of thing?

Regards,
Philippe

Re: [galaxy-dev] Error while installing DESeq-hts

2013-02-28 Thread Mahtab Mirmomeni
Hi Philipe

Look into bin/deseq_config.sh and see if the paths to octave and
samtools have been set correctly.

I'm having problems running DESeq too. I get past this point, but I get


get_reads.mex
error: path/shed_tools/toolshed.g2.bx.psu.edu/repos/vipints/deseq_hts/2b3bb3348076/deseq_hts/deseq-hts_1.0/mex/get_reads.mex: failed to load: /mnt/all/cloudman/galaxy/clare/shed_tools/toolshed.g2.bx.psu.edu/repos/vipints/deseq_hts/2b3bb3348076/deseq_hts/deseq-hts_1.0/mex/get_reads.mex: undefined symbol: bam_index_destroy

error when trying to run get_reads.mex in octave.


Mahtab




On Fri, Mar 1, 2013 at 3:54 PM, Philipe Moncuquet
philippe.m...@gmail.comwrote:

 Hi,

 I have tried to follow instructions provided with this installation

 a) Run ./setup_deseq-hts.sh and set up paths and configuration options for
 DESeq-hts.

 I successfully ran the setup script, but are there other paths and
 configuration settings that need to be set apart from those done by this script?

 b) Inside the mex folder execute the make file to create platform
 dependent .mex files
 cd mex/Makefile
 make [interpreter]
 make octave for octave
 make matlab for matlab
 make all for octave and matlab

 When I run make octave I get the following error message :

  /usr/bin/mkoctfile -g --mex get_reads.cpp get_reads_direct.cpp
 mex_input.cpp read.cpp -I/mnt/galaxyTools/tools/samtools/0.1.16
 -L/mnt/galaxyTools/tools/samtools/0.1.16 -lbam -lz -lcurses
 get_reads_direct.cpp:14:17: fatal error: sam.h: No such file or directory
 compilation terminated.
 make: *** [get_reads.mex] Error 1

 It feels like this is linked to samtools. When browsing the web I found
 information about the SAMTOOLS_ROOT env variable, but changing it doesn't
 solve my problem.

 Has anyone else encountered this kind of thing?

 Regards,
 Philippe

