[galaxy-dev] About an additional tool

2012-04-24 Thread Ciara Ledero
Hi there,

I know Galaxy already has a SAM-to-BAM converter, but part of my
exercise/task is to incorporate a script that uses samtools' view command.
I get this error:

[samopen] SAM header is present: 66338 sequences.

according to Galaxy. But this might not be an error at all. Is there any
way that I could tell Galaxy to ignore this and just continue with the
script?

Thanks in advance! Any help would be greatly appreciated.

CL
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] About an additional tool

2012-04-24 Thread Peter Cock
On Tue, Apr 24, 2012 at 10:03 AM, Ciara Ledero lede...@gmail.com wrote:
 Hi there,

 I know  Galaxy already has a SAM-to-BAM converter, but part of my
 exercise/task is to incorporate a script that uses samtools' view command. I
 get this error:

 [samopen] SAM header is present: 66338 sequences.

 according to Galaxy. But this might not be an error at all. Is  there any
 way that I could tell Galaxy to ignore this and just continue with the
 script?

 Thanks in advance! Any help would be greatly appreciated.

 CL

As you have probably guessed, that is not an error. Rather it is
a progress/diagnostic message from samtools printed to stderr.

There is a long-standing bug where Galaxy assumes that any
output on stderr indicates a failure, despite the typical usage on
Unix/Linux for progress, diagnostic or warning messages:
https://bitbucket.org/galaxy/galaxy-central/issue/325/

In this situation Galaxy tools use a wrapper script to cope
with these non-error messages. That should be happening
already with sam_to_bam.py - but I am unclear if you are
having a problem with the provided SAM to BAM tool, or
your own tool which uses samtools internally.

Peter
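
A minimal wrapper along the lines Peter describes (a sketch only, with placeholder names and arguments, not the actual sam_to_bam.py code) captures stderr and only reports it when samtools exits non-zero, so diagnostic chatter does not make Galaxy flag the job as failed:

#!/usr/bin/env python
# Sketch: run "samtools view" with the arguments passed by the tool's
# command line, and only surface stderr when the exit code is non-zero.
import subprocess
import sys

def main():
    cmd = ["samtools", "view"] + sys.argv[1:]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    sys.stdout.write(out)          # pass samtools' stdout through untouched
    if proc.returncode != 0:
        sys.stderr.write(err)      # a real failure: let Galaxy see the stderr text
        sys.exit(proc.returncode)
    if err:
        sys.stdout.write(err)      # harmless progress messages go to stdout instead

if __name__ == "__main__":
    main()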



[galaxy-dev] Delete multiple datasets

2012-04-24 Thread SHAUN WEBB


Hi,

Is there any plan to add the capability to delete multiple datasets at
once, similar to the copy datasets interface in the history options
menu?


If I am running workflows and want to delete all the created datasets
(during testing, or if I've used the wrong parameters, etc.) it is quite
tedious to delete each individual dataset. It would be useful to see
and delete any hidden datasets in a similar fashion.


Thanks
Shaun

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.




Re: [galaxy-dev] Error: Job output not returned from cluster

2012-04-24 Thread Louise-Amélie Schmitt
At first we thought it could be an ssh issue but submitting jobs and 
getting the output back isn't a problem when I do it from my personal 
user manually, so it's really related to Galaxy. We're using PBS Pro btw.


And I'm still at a loss... :(

L-A

On 23/04/2012 15:42, zhengqiu cai wrote:

I am having the same problem when I use condor as the scheduler instead of sge.

Cai

--- On Mon, 23 Apr 2012, Louise-Amélie Schmitt louise-amelie.schm...@embl.de wrote:


From: Louise-Amélie Schmitt louise-amelie.schm...@embl.de
Subject: [galaxy-dev] Error: Job output not returned from cluster
To: galaxy-dev@lists.bx.psu.edu
Date: Monday, 23 April 2012, 5:09 PM
Hello everyone,

I'm still trying to set up the job submission as the real
user, and I get a mysterious error. The job obviously runs
somewhere and when it ends it is in error state and displays
the following message: Job output not returned from
cluster

In the Galaxy log I have the following lines when the job
finishes running:

galaxy.jobs.runners.drmaa DEBUG 2012-04-23 10:36:41,509
(1455/9161620.pbs-master2.embl.de) state change: job
finished, but failed
galaxy.jobs.runners.drmaa DEBUG 2012-04-23 10:36:41,511 Job
output not returned from cluster
galaxy.jobs DEBUG 2012-04-23 10:36:41,547 finish(): Moved
/g/funcgen/galaxy-dev/database/job_working_directory/001/1455/galaxy_dataset_2441.dat
to
/g/funcgen/galaxy-dev/database/files/002/dataset_2441.dat
galaxy.jobs DEBUG 2012-04-23 10:36:41,755 job 1455 ended
galaxy.datatypes.metadata DEBUG 2012-04-23 10:36:41,755
Cleaning up external metadata files
galaxy.datatypes.metadata DEBUG 2012-04-23 10:36:41,768
Failed to cleanup MetadataTempFile temp files from
/g/funcgen/galaxy-dev/database/job_working_directory/001/1455/metadata_out_HistoryDatasetAssociation_1606_npFIJM:
No JSON object could be decoded: line 1 column 0 (char 0)

The
/g/funcgen/galaxy-dev/database/job_working_directory/001/1455/
directory is empty and
/g/funcgen/galaxy-dev/database/files/002/dataset_2441.dat
exists but is empty.

Any ideas about what can go wrong there? Any lead would be
immensely appreciated!

Thanks,
L-A





[galaxy-dev] Deleted datasets

2012-04-24 Thread SHAUN WEBB


Hi,

Looking at the Galaxy reports, I have 505 datasets which were deleted
60 days ago but have not been purged. My purging scripts seem to be
running correctly in cron. Why do these datasets not get deleted? Is
it likely that these are in shared or accessible histories and only
deleted by the owner? Any tips would be good; I'm trying to free up some
space on our server.


Thanks
Shaun Webb

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.




Re: [galaxy-dev] Error: Job output not returned from cluster

2012-04-24 Thread Alban Lermine
On 24/04/2012 14:53, Louise-Amélie Schmitt wrote:
 At first we thought it could be an ssh issue but submitting jobs and
 getting the output back isn't a problem when I do it from my personal
 user manually, so it's really related to Galaxy. We're using PBS Pro btw.

 And I'm still at loss... :(

 L-A

 On 23/04/2012 15:42, zhengqiu cai wrote:
 I am having the same problem when I use condor as the scheduler
 instead of sge.

 Cai

 --- On Mon, 23 Apr 2012, Louise-Amélie
 Schmitt louise-amelie.schm...@embl.de wrote:

 From: Louise-Amélie Schmitt louise-amelie.schm...@embl.de
 Subject: [galaxy-dev] Error: Job output not returned from cluster
 To: galaxy-dev@lists.bx.psu.edu
 Date: Monday, 23 April 2012, 5:09 PM
 Hello everyone,

 I'm still trying to set up the job submission as the real
 user, and I get a mysterious error. The job obviously runs
 somewhere and when it ends it is in error state and displays
 the following message: Job output not returned from
 cluster

 In the Galaxy log I have the following lines when the job
 finishes running:

 galaxy.jobs.runners.drmaa DEBUG 2012-04-23 10:36:41,509
 (1455/9161620.pbs-master2.embl.de) state change: job
 finished, but failed
 galaxy.jobs.runners.drmaa DEBUG 2012-04-23 10:36:41,511 Job
 output not returned from cluster
 galaxy.jobs DEBUG 2012-04-23 10:36:41,547 finish(): Moved
 /g/funcgen/galaxy-dev/database/job_working_directory/001/1455/galaxy_dataset_2441.dat

 to
 /g/funcgen/galaxy-dev/database/files/002/dataset_2441.dat
 galaxy.jobs DEBUG 2012-04-23 10:36:41,755 job 1455 ended
 galaxy.datatypes.metadata DEBUG 2012-04-23 10:36:41,755
 Cleaning up external metadata files
 galaxy.datatypes.metadata DEBUG 2012-04-23 10:36:41,768
 Failed to cleanup MetadataTempFile temp files from
 /g/funcgen/galaxy-dev/database/job_working_directory/001/1455/metadata_out_HistoryDatasetAssociation_1606_npFIJM:

 No JSON object could be decoded: line 1 column 0 (char 0)

 The
 /g/funcgen/galaxy-dev/database/job_working_directory/001/1455/
 directory is empty and
 /g/funcgen/galaxy-dev/database/files/002/dataset_2441.dat
 exists but is empty.

 Any ideas about what can go wrong there? Any lead would be
 immensely appreciated!

 Thanks,
 L-A



Hi L-A,

I run Galaxy as the real user on our cluster with PBS (the free version).

We first configured LDAP authentication so that the email account maps to
the unix account (we just cut off the @curie.fr).
Then I modified pbs.py (in GALAXY_DIR/galaxy-dist/lib/galaxy/jobs/runners).

I just disconnected the PBS submission through the python library and
replaced it with a system call (just like sending jobs to the cluster on the
command line); here is the code used:

galaxy_job_id = job_wrapper.job_id
log.debug( "(%s) submitting file %s" % ( galaxy_job_id, job_file ) )
log.debug( "(%s) command is: %s" % ( galaxy_job_id, command_line ) )

# Submit job with system call instead of using python PBS library -
# permits running jobs as .. with a "sudo -u" cmd prefix

galaxy_job_idSTR = str(job_wrapper.job_id)
galaxy_tool_idSTR = str(job_wrapper.tool.id)
galaxy_job_name = galaxy_job_idSTR+"_"+galaxy_tool_idSTR+"_"+job_wrapper.user
torque_options = runner_url.split("/")
queue = torque_options[3]
ressources = torque_options[4]
user_mail = job_wrapper.user.split("@")
username = user_mail[0]

torque_cmd = "sudo -u "+username+" echo \""+command_line+"\" | qsub -o "+ofile+" -e "+efile+" -M "+job_wrapper.user+" -N "+galaxy_job_name+" -q "+queue+" "+ressources

submit_pbs_job = os.popen(torque_cmd)

job_id = submit_pbs_job.read().rstrip("\n")

# Original job launcher
#job_id = pbs.pbs_submit(c, job_attrs, job_file, pbs_queue_name, None)

pbs.pbs_disconnect(c)

The second thing I did was to wait for the error and output files from torque in
the finish_job function (if not, I never receive the output, which seems to be
your problem..); here is the code used:

def finish_job( self, pbs_job_state ):
    """
    Get the output/error for a finished job, pass to `job_wrapper.finish`
    and cleanup all the PBS temporary files.
    """
    ofile = pbs_job_state.ofile
    efile = pbs_job_state.efile
    job_file = pbs_job_state.job_file

    # collect the output
    try:
        # With qsub system call, need to wait for efile and ofile creation
        # at the end of the job execution before reading them
        efileExists = os.path.isfile(efile)
        ofileExists = os.path.isfile(ofile)
        efileExistsSTR = str(efileExists)
Re: [galaxy-dev] Sample tracking

2012-04-24 Thread Greg Von Kuster
Hello Leandro,

I've forwarded your request to the galaxy-dev mail list as this is where issues 
like this are discussed.

I want to make sure I'm clear on this issue.  Can you provide some 
clarification?

On Apr 11, 2012, at 5:05 AM, Leandro Hermida wrote:

 Based on our initial use of the sample tracking system I haven't found
 any additional bugs, but we did realize one big functionality that is
 missing which makes the system somewhat hard to use.
 
 The current mechanism to link datasets to their corresponding samples
 is very cumbersome and takes a long time as it has to be done one
 sample at a time with a lot of UI clicking and there is a big
 potential for human error.  I would say without an easier way to do
 this it would detract people from using the sample tracking system. Do
 you have any ideas to change/enhance the way this is done?

The current UI is something that we have plans to improve - the underlying 
framework is flexible and allows for improvement in several areas.  We will 
certainly consider any recommendations from the community, including yours 
below.


 My initial
 suggestion would be that when you upload the samples using a CSV file
 that you can have a field like DatasetsName after FolderName where
 you can have a colon (:) separated list of file paths from the
 configured external service? That would solve it I think and make
 things much easier.
 

I've pasted an example screenshot below for reference.  At the point that you 
are importing samples from a CSV file, the sequence run is not yet started 
(generally speaking), so there are no sequence run datasets (file names) yet 
produced.  So in order to create the csv file with the correct sequence run 
file names, the user will have to know beforehand the resulting files produced 
by the sequencer.  Will this always be possible with your Illumina external 
service?  If so, what are the names of the files, and how are they 
distinguished between runs?  Do they just go in different directories in the 
sequencer that are also known beforehand?

On the other hand, are you saying that your lab will perform the sequence run 
and wait until the run is complete and the datasets are produced and then 
create the csv file, entering the known dataset file names to produce the 
sample line items for the sequencing request?  The weakness of this process is 
that your lab's customers will not be able to use Galaxy sample tracking to 
view the status of their requests throughout their lifecycle since the 
request's sample line items will not be created until the run is finished.  

I'll work on implementing this enhancement, but I want to understand how your 
lab will use it.  Any additional information you can provide will be helpful.

Thanks!

Greg

[Screenshot: "Add samples to sequencing request one" form]

Columns: Name | State | Data Library | Folder | History | Workflow (required)

For each sample, select the data library and folder in which you would like the
run datasets deposited. To automatically run a workflow on run datasets,
select a history first and then the desired workflow.

Layout Grid 1 / Sample form layout 1

Copy samples from sample:
Select the sample from which the new sample should be copied, or leave the selection
as None to add a new generic sample.

Click the Add sample button for each new sample and click the Save button when
you have finished adding samples.

Import samples from csv file:
The csv file must be in the following format.
The [:FieldValue] part is optional; the named form field will contain the value
after the ':' if included.

SampleName,DataLibraryName,FolderName,HistoryName,WorkflowName,Field1Name:Field1Value,Field2Name:Field2Value...

Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Hans-Rudolf Hotz

Hi everybody

I usually don't like people sending the same e-mail several times, so I 
apologize for just doing this, since nobody has replied to last week's 
email so far.


Although we do not encounter any problems at the moment, I fear we will 
run into problems at a later stage (i.e. at the next code update). This is 
our production server.


Thank you very much for any help

Regards, Hans




On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:

Hi

We are in the process of migrating our Galaxy servers to the current
March 12, 2012 release ('6799:40f1816d6857').

We have encountered two issues, which I guess are related:


If I restart the server, I get:


galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
version 93
galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
MySQL_python egg successfully loaded for mysql dialect
Traceback (most recent call last):
File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )
File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
in verify_tools
db_schema = schema.ControlledSchema( engine, migrate_repository )
File
//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 24, in __init__
self._load()
File
//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 42, in _load
data = list(result)[0]
IndexError: list index out of range



I can 'temporary' fix this by commenting-out line 36 and 37 in
~/lib/galaxy/app.py

36# from galaxy.tool_shed.migrate.check import verify_tools
37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )


...well, I don't like this fix, and I wonder if there is a better one?



I have a suspicion, that this is related to problems I encountered
during the database schema upgrade to version 93 for our MySQL database:

at step 90to91 and 91to92 I get the following:


90 - 91...

Migration script to create the tool_version and tool_version_association
tables and drop the tool_id_guid_map table.

0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
tool_version table failed: (OperationalError) (1050, Table
'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
\n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
\n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
(id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
tool_version table failed: (OperationalError) (1050, Table
'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
\n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
\n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
(id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
tool_version_association table failed: (OperationalError) (1050, Table
'tool_version_association' already exists) u'\nCREATE TABLE
tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
\n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
\n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
(id), \n\t FOREIGN KEY(parent_id) REFERENCES tool_version (id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
tool_version_association table failed: (OperationalError) (1050, Table
'tool_version_association' already exists) u'\nCREATE TABLE
tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
\n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
\n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
(id), \n\t FOREIGN KEY(parent_id) REFERENCES tool_version (id)\n)\n\n' ()
Added 0 rows to the new tool_version table.
done
91 - 92...

Migration script to create the migrate_tools table.

0092_add_migrate_tools_table DEBUG 2012-04-16 08:55:49,920 Creating
migrate_tools table failed: (OperationalError) (1050, Table
'migrate_tools' already exists) u'\nCREATE TABLE migrate_tools
(\n\trepository_id VARCHAR(255), \n\trepository_path TEXT, \n\tversion
INTEGER\n)\n\n' ()
0092_add_migrate_tools_table DEBUG 2012-04-16 08:55:49,920 Creating
migrate_tools table failed: (OperationalError) (1050, Table
'migrate_tools' already exists) u'\nCREATE TABLE migrate_tools
(\n\trepository_id VARCHAR(255), \n\trepository_path TEXT, \n\tversion
INTEGER\n)\n\n' ()
done


the bizarre thing is, all three tables: 'migrate_tools',
'tool_version_association','tool_version' are 

Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Greg Von Kuster
Hello Hans,

I've stayed out of this one due to your use of mysql which I cannot personally 
support as I restrict my environment to sqlite and postgres.  However, assuming 
this is not a problem restricted to mysql, can you try the following from your 
Galaxy installation directory (you should backup your database before 
attempting this)?

$sh manage_db.sh downgrade 90

After this,
$sh manage_db.sh upgrade

Greg Von Kuster

On Apr 24, 2012, at 10:09 AM, Hans-Rudolf Hotz wrote:

 Hi everybody
 
 I usually don't like people sending the same e-mail several times, so I 
 apologize for just doing this, since nobody has replied to last week's email 
 so far.
 
 Although, we do not encounter any problems at the moment, I fear we will run 
 into problems at a later stage (ie at the next code update). This is our 
 production server.
 
 Thank you very much for any help
 
 Regards, Hans
 
 
 
 
 On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:
 Hi
 
 We are in the process of migrating our Galaxy servers to the current
 March 12, 2012 release ('6799:40f1816d6857').
 
 We have encountered two issues, which I guess are related:
 
 
 If I restart the server, I get:
 
 
 galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
 version 93
 galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
 MySQL_python egg successfully loaded for mysql dialect
 Traceback (most recent call last):
 File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
 app_factory
 app = UniverseApplication( global_conf = global_conf, **kwargs )
 File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
 verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )
 File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
 in verify_tools
 db_schema = schema.ControlledSchema( engine, migrate_repository )
 File
 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 24, in __init__
 self._load()
 File
 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 42, in _load
 data = list(result)[0]
 IndexError: list index out of range
 
 
 
 I can 'temporary' fix this by commenting-out line 36 and 37 in
 ~/lib/galaxy/app.py
 
 36# from galaxy.tool_shed.migrate.check import verify_tools
 37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )
 
 
 ...well, I don't like this fix, and I wonder if there is a better one?
 
 
 
 I have a suspicion, that this is related to problems I encountered
 during the database schema upgrade to version 93 for our MySQL database:
 
 at step 90to91 and 91to92 I get the following:
 
 
 90 - 91...
 
 Migration script to create the tool_version and tool_version_association
 tables and drop the tool_id_guid_map table.
 
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
 (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
 (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
 tool_version_association table failed: (OperationalError) (1050, Table
 'tool_version_association' already exists) u'\nCREATE TABLE
 tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
 \n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
 \n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
 (id), \n\t FOREIGN KEY(parent_id) REFERENCES tool_version (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
 tool_version_association table failed: (OperationalError) (1050, Table
 'tool_version_association' already exists) u'\nCREATE TABLE
 tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
 \n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
 \n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
 (id), \n\t FOREIGN KEY(parent_id) REFERENCES tool_version (id)\n)\n\n' ()
 Added 0 rows to the new tool_version table.
 done
 91 - 92...
 
 Migration script to create the migrate_tools table.
 
 0092_add_migrate_tools_table DEBUG 2012-04-16 08:55:49,920 Creating
 migrate_tools table 

Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Hans-Rudolf Hotz

Hi Greg


I was hoping to avoid this... I will let you know the outcome.


On a related note: are there any plans to migrate all MySQL users to 
postgres? It is probably easier in the long term... maybe a topic we can 
discuss in Chicago.


Thank you very much for your reply

Regards, Hans



On 04/24/2012 04:32 PM, Greg Von Kuster wrote:

Hello Hans,

I've stayed out of this one due to your use of mysql which I cannot personally 
support as I restrict my environment to sqlite and postgres.  However, assuming 
this is not a problem restricted to mysql, can you try the following from your 
Galaxy installation directory (you should backup your database before 
attempting this)?

$sh manage_db.sh downgrade 90

After this,
$sh manage_db.sh upgrade

Greg Von Kuster

On Apr 24, 2012, at 10:09 AM, Hans-Rudolf Hotz wrote:


Hi everybody

I usually don't like people sending the same e-mail several times, so I 
apologize for just doing this, since nobody has replied to last week's email so 
far.

Although, we do not encounter any problems at the moment, I fear we will run 
into problems at a later stage (ie at the next code update). This is our 
production server.

Thank you very much for any help

Regards, Hans




On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:

Hi

We are in the process of migrating our Galaxy servers to the current
March 12, 2012 release ('6799:40f1816d6857').

We have encountered two issues, which I guess are related:


If I restart the server, I get:


galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
version 93
galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
MySQL_python egg successfully loaded for mysql dialect
Traceback (most recent call last):
File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )
File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
in verify_tools
db_schema = schema.ControlledSchema( engine, migrate_repository )
File
//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 24, in __init__
self._load()
File
//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 42, in _load
data = list(result)[0]
IndexError: list index out of range



I can 'temporary' fix this by commenting-out line 36 and 37 in
~/lib/galaxy/app.py

36# from galaxy.tool_shed.migrate.check import verify_tools
37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )


...well, I don't like this fix, and I wonder if there is a better one?



I have a suspicion, that this is related to problems I encountered
during the database schema upgrade to version 93 for our MySQL database:

at step 90to91 and 91to92 I get the following:


90 -  91...

Migration script to create the tool_version and tool_version_association
tables and drop the tool_id_guid_map table.

0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
tool_version table failed: (OperationalError) (1050, Table
'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
\n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
\n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
(id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
tool_version table failed: (OperationalError) (1050, Table
'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
\n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
\n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
(id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
tool_version_association table failed: (OperationalError) (1050, Table
'tool_version_association' already exists) u'\nCREATE TABLE
tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
\n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
\n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
(id), \n\t FOREIGN KEY(parent_id) REFERENCES tool_version (id)\n)\n\n' ()
0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
tool_version_association table failed: (OperationalError) (1050, Table
'tool_version_association' already exists) u'\nCREATE TABLE
tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
\n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
\n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
(id), \n\t FOREIGN KEY(parent_id) REFERENCES 

Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Greg Von Kuster
Hi Hans,

On Apr 24, 2012, at 10:49 AM, Hans-Rudolf Hotz wrote:

 Hi Greg
 
 
 I was hoping to avoid thisI will let you know the outcome.

This really shouldn't be a problem: the migration scripts for 91 and 92 simply 
create new tables, with 91 generating new data to insert and 92 inserting a 
single new row.  Since it looks like you haven't been able to upgrade in your 
production environment, dropping these tables and re-creating them will not 
lose any data.
 
 
 On a related note: are there any plans to migrate all MySQL users to 
 postgres? It is probably easier in the longterm.maybe a topic we can 
 discuss in Chicago.


This is completely up to you.  Galaxy supports sqlite and postgres.  Mysql 
should work, but support for it is minimal, so you are more on your own.  Some 
of us on the core team feel that support for mysql should go away completely 
(myself included), while others feel support for it should continue.  The 
latter category continues to win the debate, but again, support for mysql is 
not at the level we provide for postgres or sqlite.


 
 Thank you very much for your reply
 
 Regards, Hans
 
 
 
 On 04/24/2012 04:32 PM, Greg Von Kuster wrote:
 Hello Hans,
 
 I've stayed out of this one due to your use of mysql which I cannot 
 personally support as I restrict my environment to sqlite and postgres.  
 However, assuming this is not a problem restricted to mysql, can you try the 
 following from your Galaxy installation directory (you should backup your 
 database before attempting this)?
 
 $sh manage_db.sh downgrade 90
 
 After this,
 $sh manage_db.sh upgrade
 
 Greg Von Kuster
 
 On Apr 24, 2012, at 10:09 AM, Hans-Rudolf Hotz wrote:
 
 Hi everybody
 
 I usually don't like people sending the same e-mail several times, so I 
 apologize for just doing this, since nobody has replied to last week's 
 email so far.
 
 Although, we do not encounter any problems at the moment, I fear we will 
 run into problems at a later stage (ie at the next code update). This is 
 our production server.
 
 Thank you very much for any help
 
 Regards, Hans
 
 
 
 
 On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:
 Hi
 
 We are in the process of migrating our Galaxy servers to the current
 March 12, 2012 release ('6799:40f1816d6857').
 
 We have encountered two issues, which I guess are related:
 
 
 If I restart the server, I get:
 
 
 galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
 version 93
 galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
 MySQL_python egg successfully loaded for mysql dialect
 Traceback (most recent call last):
 File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
 app_factory
 app = UniverseApplication( global_conf = global_conf, **kwargs )
 File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
 verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )
 File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
 in verify_tools
 db_schema = schema.ControlledSchema( engine, migrate_repository )
 File
 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 24, in __init__
 self._load()
 File
 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 42, in _load
 data = list(result)[0]
 IndexError: list index out of range
 
 
 
 I can 'temporary' fix this by commenting-out line 36 and 37 in
 ~/lib/galaxy/app.py
 
 36# from galaxy.tool_shed.migrate.check import verify_tools
 37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )
 
 
 ...well, I don't like this fix, and I wonder if there is a better one?
 
 
 
 I have a suspicion, that this is related to problems I encountered
 during the database schema upgrade to version 93 for our MySQL database:
 
 at step 90to91 and 91to92 I get the following:
 
 
 90 -  91...
 
 Migration script to create the tool_version and tool_version_association
 tables and drop the tool_id_guid_map table.
 
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
 (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES 

Re: [galaxy-dev] JobManager object has no attribute dispatcher

2012-04-24 Thread Peter Cock
On Mon, Apr 23, 2012 at 7:08 PM, Nate Coraor n...@bx.psu.edu wrote:

 Not quite shortly, but it's been committed as 476ce0b78713.  Sorry for the 
 wait.

 --nate

Thanks Nate - that did the trick :)

Peter



Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Leandro Hermida
Dear Hans,

 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 42, in _load
 data = list(result)[0]
 IndexError: list index out of range

This error occurs because SQLAlchemy cannot load the controlled schema
version info from the DB (the result is empty); I think it's trying
to load the migrate_version and migrate_tools tables.  Make sure the
tables look like this; hopefully they each have one row of data like
this:

mysql> select * from migrate_version;
+---+--+-+
| repository_id | repository_path  | version |
+---+--+-+
| Galaxy| lib/galaxy/model/migrate |  93 |
+---+--+-+
1 row in set (0.01 sec)

mysql> select * from migrate_tools;
+---+--+-+
| repository_id | repository_path  | version |
+---+--+-+
| GalaxyTools   | lib/galaxy/tool_shed/migrate |   1 |
+---+--+-+
1 row in set (0.01 sec)

hth,
Leandro

On Tue, Apr 24, 2012 at 4:09 PM, Hans-Rudolf Hotz h...@fmi.ch wrote:
 Hi everybody

 I usually don't like people sending the same e-mail several times, so I
 apologize for just doing this, since nobody has replied to last week's email
 so far.

 Although, we do not encounter any problems at the moment, I fear we will run
 into problems at a later stage (ie at the next code update). This is our
 production server.

 Thank you very much for any help

 Regards, Hans




 On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:

 Hi

 We are in the process of migrating our Galaxy servers to the current
 March 12, 2012 release ('6799:40f1816d6857').

 We have encountered two issues, which I guess are related:


 If I restart the server, I get:


 galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
 version 93
 galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
 MySQL_python egg successfully loaded for mysql dialect
 Traceback (most recent call last):
 File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
 app_factory
 app = UniverseApplication( global_conf = global_conf, **kwargs )
 File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
 verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )
 File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
 in verify_tools
 db_schema = schema.ControlledSchema( engine, migrate_repository )
 File

 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 24, in __init__
 self._load()
 File

 //galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
 line 42, in _load
 data = list(result)[0]
 IndexError: list index out of range



 I can 'temporary' fix this by commenting-out line 36 and 37 in
 ~/lib/galaxy/app.py

 36# from galaxy.tool_shed.migrate.check import verify_tools
 37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
 '__file__', None ), self.config.database_engine_options )


 ...well, I don't like this fix, and I wonder if there is a better one?



 I have a suspicion, that this is related to problems I encountered
 during the database schema upgrade to version 93 for our MySQL database:

 at step 90to91 and 91to92 I get the following:


 90 - 91...

 Migration script to create the tool_version and tool_version_association
 tables and drop the tool_id_guid_map table.

 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
 (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
 tool_version table failed: (OperationalError) (1050, Table
 'tool_version' already exists) u'\nCREATE TABLE tool_version (\n\tid
 INTEGER NOT NULL AUTO_INCREMENT, \n\tcreate_time DATETIME,
 \n\tupdate_time DATETIME, \n\ttool_id VARCHAR(255),
 \n\ttool_shed_repository_id INTEGER, \n\tPRIMARY KEY (id), \n\t FOREIGN
 KEY(tool_shed_repository_id) REFERENCES tool_shed_repository
 (id)\n)\n\n' ()
 0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,720 Creating
 tool_version_association table failed: (OperationalError) (1050, Table
 'tool_version_association' already exists) u'\nCREATE TABLE
 tool_version_association (\n\tid INTEGER NOT NULL AUTO_INCREMENT,
 \n\ttool_id INTEGER NOT NULL, \n\tparent_id INTEGER NOT NULL,
 \n\tPRIMARY KEY (id), \n\t FOREIGN KEY(tool_id) REFERENCES tool_version
 (id), \n\t FOREIGN KEY(parent_id) 

Re: [galaxy-dev] problem with ~/lib/galaxy/app.py after code upgrade to March 12, 2012 release ('6799:40f1816d6857')

2012-04-24 Thread Hans-Rudolf Hotz

Hi Leandro

Thank you very much for your e-mail. Indeed, the 'migrate_tools' table 
is empty:



mysql> select * from galaxy_xenon1.migrate_version;
+---+--+-+
| repository_id | repository_path  | version |
+---+--+-+
| Galaxy| lib/galaxy/model/migrate |  93 |
+---+--+-+
1 row in set (0.02 sec)

mysql> select * from galaxy_xenon1.migrate_tools;
Empty set (0.05 sec)

mysql>


But I guess the table itself is correct?

mysql> describe galaxy_xenon1.migrate_tools;
+-+--+--+-+-+---+
| Field   | Type | Null | Key | Default | Extra |
+-+--+--+-+-+---+
| repository_id   | varchar(255) | YES  | | NULL|   |
| repository_path | text | YES  | | NULL|   |
| version | int(11)  | YES  | | NULL|   |
+-+--+--+-+-+---+
3 rows in set (0.00 sec)

mysql>


Tomorrow, I will try adding the missing line manually.


Thanks, Hans
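
Given the row Leandro's working instance shows, the manual fix amounts to inserting that single row into migrate_tools. A sketch of doing it through SQLAlchemy follows (the MySQL URL and credentials are placeholders, and the values should be checked against a current Galaxy install; a plain INSERT from the mysql prompt does the same thing):

# Sketch only: add the single migrate_tools row that a working Galaxy
# instance contains. Back up the database first, as Greg advised.
from sqlalchemy import create_engine

engine = create_engine("mysql://galaxy:secret@localhost/galaxy_xenon1")
engine.execute(
    "INSERT INTO migrate_tools (repository_id, repository_path, version) "
    "VALUES ('GalaxyTools', 'lib/galaxy/tool_shed/migrate', 1)"
)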


On 04/24/2012 05:46 PM, Leandro Hermida wrote:

Dear Hans,


//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 42, in _load
data = list(result)[0]
IndexError: list index out of range


This error is because SQLAlchemy cannot load the controlled schema
version info from the DB, the result is empty, so I think its trying
to load the migrate_version and migrate_tools tables.  Make sure the
tables look like this hopefully they have each one row of data like
this:

mysql  select * from migrate_version;
+---+--+-+
| repository_id | repository_path  | version |
+---+--+-+
| Galaxy| lib/galaxy/model/migrate |  93 |
+---+--+-+
1 row in set (0.01 sec)

mysql  select * from migrate_tools;
+---+--+-+
| repository_id | repository_path  | version |
+---+--+-+
| GalaxyTools   | lib/galaxy/tool_shed/migrate |   1 |
+---+--+-+
1 row in set (0.01 sec)

hth,
Leandro

On Tue, Apr 24, 2012 at 4:09 PM, Hans-Rudolf Hotzh...@fmi.ch  wrote:

Hi everybody

I usually don't like people sending the same e-mail several times, so I
apologize for just doing this, since nobody has replied to last week's email
so far.

Although, we do not encounter any problems at the moment, I fear we will run
into problems at a later stage (ie at the next code update). This is our
production server.

Thank you very much for any help

Regards, Hans




On 04/16/2012 03:43 PM, Hans-Rudolf Hotz wrote:


Hi

We are in the process of migrating our Galaxy servers to the current
March 12, 2012 release ('6799:40f1816d6857').

We have encountered two issues, which I guess are related:


If I restart the server, I get:


galaxy.model.migrate.check INFO 2012-04-16 09:24:50,391 At database
version 93
galaxy.tool_shed.migrate.check DEBUG 2012-04-16 09:24:50,407
MySQL_python egg successfully loaded for mysql dialect
Traceback (most recent call last):
File //galaxy_dist/lib/galaxy/web/buildapp.py, line 82, in
app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File //galaxy_dist/lib/galaxy/app.py, line 37, in __init__
verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )
File //galaxy_dist/lib/galaxy/tool_shed/migrate/check.py, line 42,
in verify_tools
db_schema = schema.ControlledSchema( engine, migrate_repository )
File

//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 24, in __init__
self._load()
File

//galaxy_dist/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py,
line 42, in _load
data = list(result)[0]
IndexError: list index out of range



I can 'temporary' fix this by commenting-out line 36 and 37 in
~/lib/galaxy/app.py

36# from galaxy.tool_shed.migrate.check import verify_tools
37# verify_tools( self, db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options )


...well, I don't like this fix, and I wonder if there is a better one?



I have a suspicion, that this is related to problems I encountered
during the database schema upgrade to version 93 for our MySQL database:

at step 90to91 and 91to92 I get the following:


90 -  91...

Migration script to create the tool_version and tool_version_association
tables and drop the tool_id_guid_map table.

0091_add_tool_version_tables DEBUG 2012-04-16 08:55:49,719 Creating
tool_version table failed: (OperationalError) (1050, Table
'tool_version' already exists) u'\nCREATE TABLE tool_version 

Re: [galaxy-dev] dynamic_options

2012-04-24 Thread Deepthi Theresa
I have a file which contains some animal names.  That file is created by a tool.

I want to list those animal names in another tool using a text box.
How can I do that?
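
One common pattern with the (deprecated) code tag approach Hans describes below is a small function in the code file that reads the animal names and returns option tuples for a dynamically populated select. Everything here (the function name, the file layout, the (name, value, selected) tuple convention) is an illustrative assumption to check against the wiki page he links:

# Sketch of a <code>-file helper: read one animal name per line from the
# dataset produced by the first tool and return Galaxy-style
# (display name, value, selected) tuples for a dynamic select parameter.
def get_animal_options(animals_dataset):
    options = []
    try:
        for i, line in enumerate(open(animals_dataset.file_name)):
            name = line.strip()
            if name:
                options.append((name, name, i == 0))  # preselect the first entry
    except Exception:
        pass  # no options if the file cannot be read yet
    return options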

On 4/23/12, Hans-Rudolf Hotz h...@fmi.ch wrote:
 Hi Deepthi


 what do you want to use the 'dynamic_options' for?


 the file provided in the code tag is a python script, but remember the
 'code tag' is deprecated, see:

 http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax#A.3Ccode.3E_tag_set



 Regards, Hans

 On 04/21/2012 12:48 AM, Unknown wrote:
 Hi all,

 Is there any help available which describes about the usage of
 dynamic_options?

 Should the <code> file be a python executable, or can we use perl?

 Thanks and regards,
 Deepthi




-- 


Deepthi Theresa Thomas Kannanayakal
1919 University Drive NW, Apt. No. E104
T2N 4K7
Calgary, Alberta
Canada

Ph: (403) 483 7409, (403) 618 5956
Email: deepthither...@gmail.com


[galaxy-dev] Remove libraries using Galaxy code or API

2012-04-24 Thread liram_vardi
Hello,

I am using the Galaxy API for some actions and I must say that this is indeed a 
really great feature with great power.
Anyway, I am trying to write a python script, one of whose goals is to remove 
some data libraries,
but until now I have been unable to find a way to remove a data library or some of its 
datasets using the API or by a direct call to Galaxy's code.
I found an old post claiming that this feature is not yet implemented.
My questions:

1)  Has this changed since? I mean, is there a way now to clean out or 
completely remove a data library?

2)  Is there a way to use Galaxy code to remove a library?  Such as a 
function that can be used in my script to remove this library?

Thanks in advance!
Liram



Re: [galaxy-dev] About an additional tool

2012-04-24 Thread Jim Johnson

I put a samtools_filter tool in the toolshed that uses the samtools view command.
I called the samtools view command from the command section of the tool_config and
redirected stderr to stdout to avoid having galaxy interpret it as an error.

Jim Johnson
Minnesota Supercomputing Institute
University of Minnesota

On Tue, Apr 24, 2012 at 10:03 AM, Ciara Ledero lede...@gmail.com wrote:


Hi there,

I know Galaxy already has a SAM-to-BAM converter, but part of my
exercise/task is to incorporate a script that uses samtools' view command. I
get this error:

[samopen] SAM header is present: 66338 sequences.

according to Galaxy. But this might not be an error at all. Is there any
way that I could tell Galaxy to ignore this and just continue with the
script?

Thanks in advance! Any help would be greatly appreciated.

CL




[galaxy-dev] One tool, multiple sets of outputs

2012-04-24 Thread Unknown
Hi all,

Can I implement a tool which runs 2 different algorithms and provides different 
sets of output?

E.g. suppose I have a tool named mytool which runs 2 different programs for 2 
different conditions. The output of the first program is a text file and a 
fasta file, and the output of the second program is a text file only.  The tool 
should produce either outputset1 or outputset2 according to the conditions.

Can we use any kind of filter in the outputs tag?



[galaxy-dev] Cloudman- problem with galaxy update

2012-04-24 Thread Dave Lin
Dear Cloudman Team,

I created a fresh galaxy instance using AMI: galaxy-cloudman-2011-03-22
(ami-da58aab3)
In case it matters, instance type = High-Memory Double Extra Large

I was trying to use the Cloudman Admin Console to update Galaxy.  (
http://bitbucket.org/galaxy/galaxy-central) but galaxy ran into an issue
during the update process.

Log message copied below:

Any suggestions?

Thanks in advance,
Dave


/mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1:
RuntimeWarning: __builtin__.file size changed, may indicate binary
incompatibility from csamtools import * python path is:
/mnt/galaxyTools/galaxy-central/eggs/numpy-1.6.0-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/boto-2.2.2-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/mercurial-2.1.2-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/Whoosh-0.3.18-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/pycrypto-2.0.1-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/python_lzo-1.08_2.03_static-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/bx_python-0.7.1_7b95ff194725-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/amqplib-0.6.1-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/pexpect-2.4-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/Babel-0.9.4-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/MarkupSafe-0.12-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/Mako-0.4.1-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/WebHelpers-0.2-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/simplejson-2.1.1-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/wchartype-0.1-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/docutils-0.7-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/WebOb-0.8.5-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/Routes-1.12.3-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg,
/mnt/galaxyTools/galaxy-central/eggs/PasteDeploy-1.3.3-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/PasteScript-1.7.3-py2.6.egg,
/mnt/galaxyTools/galaxy-central/eggs/Paste-1.6-py2.6.egg,
/mnt/galaxyTools/galaxy-central/lib, /usr/lib/python2.6/,
/usr/lib/python2.6/plat-linux2, /usr/lib/python2.6/lib-tk,
/usr/lib/python2.6/lib-old, /usr/lib/python2.6/lib-dynload Traceback (most
recent call last): File
/mnt/galaxyTools/galaxy-central/lib/galaxy/web/buildapp.py, line 82, in
app_factory app = UniverseApplication( global_conf = global_conf, **kwargs
) File /mnt/galaxyTools/galaxy-central/lib/galaxy/app.py, line 25, in
__init__ self.config.check() File
/mnt/galaxyTools/galaxy-central/lib/galaxy/config.py, line 290, in check
raise ConfigurationError(File not found: %s % path ) ConfigurationError:
File not found: ./migrated_tools_conf.xml

[galaxy-dev] More meaningful dataset names/easier method of identifying?

2012-04-24 Thread Josh Nielsen
Hello,

For a while now with the Galaxy mirror that we have I have found on many
occasions a need to identify which dataset_*.dat files on the file system
(in the [galaxy_dist]/database/files/000/ directory) belong to which
user, and even for the same user to distinguish between their various
datasets. Files directly uploaded by the user will have a Galaxy job and
dataset file name which match - like a Galaxy job name of data 18 (for
example) which is actually reflective of the file name 'dataset_18.dat' on
the file system. However, any analysis on that file thereafter that produces
another dataset does not give you a clue of the corresponding file name.
For example, a Clip on data 18 run some time later may be called
'dataset_44.dat' on the filesystem, and a Map with Bowtie on data 18 that
runs on the clipped 'dataset_44.dat' may produce an output file of
'dataset_53.dat'.

When debugging failed jobs, and after the user has rerun them for the
umpteenth time, there may be dozens of identical or near-identical files to
weed through, and the generic naming scheme is not helpful even though it
is sequential (also not easy to keep track of/match up unless you are
watching the file writes in the directory live). The current implementation
makes sense for internal usage and the code that uses it, but it is
difficult for a human to distinguish which files match the jobs in Galaxy.

It would be useful to have more meaningful dataset file names or an easier
way to identify them (a record that matches the internal and external
names) for administrative maintenance reasons so that I can delete files,
or possibly even export those .dat files to a network share where our users
can perform manual analysis on them. Could anyone point me to where in the
code I could look to make the dataset names more meaningful? Or perhaps I
should request of the Galaxy developers (as a feature) a way for the users
themselves to see under the metadata name of their job (like Map with
Bowtie on data 18) in the right side pane the *actual* corresponding file
and location on the file system path to it (dataset_53.dat, for example).
Or if not for users at least something for Administrators. Even a database
that has four columns for the internal/filesystem dataset name, the job
metadata name, the Galaxy job number (that the user sees), and the user
that the dataset belongs to, would be helpful. A lot of our users are heavy
into informatics though and would probably prefer that the user be able to
see that information. Does anyone have any suggestions or thoughts about
this?

Thanks,
Josh Nielsen
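
While such a report is not built in, one stop-gap for administrators is to query the Galaxy database directly. The sketch below uses SQLAlchemy; the table and column names (dataset, history_dataset_association, history, galaxy_user) are recalled from the Galaxy schema and the connection URL is a placeholder, so verify both against your own instance:

# Sketch only: map database/files/.../dataset_<n>.dat back to the owning
# user and the name/number shown in their history.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://galaxy:secret@localhost/galaxydb")

QUERY = text("""
    SELECT d.id, hda.hid, hda.name, u.email
    FROM dataset d
    JOIN history_dataset_association hda ON hda.dataset_id = d.id
    JOIN history h ON hda.history_id = h.id
    JOIN galaxy_user u ON h.user_id = u.id
    WHERE d.id = :dataset_id
""")

def who_owns(dataset_number):
    # e.g. who_owns(53) for dataset_53.dat
    for dataset_id, hid, name, email in engine.execute(QUERY, dataset_id=dataset_number):
        print("dataset_%d.dat is history item %d ('%s') belonging to %s"
              % (dataset_id, hid, name, email))

who_owns(53)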

Re: [galaxy-dev] Cloudman- problem with galaxy update

2012-04-24 Thread Dannon Baker
Hi Dave,

The problem here is that the galaxy update failed to merge a change to run.sh 
because of minor customizations it has.  We'll have a long term fix out for 
this in the next week, but for now what you can do is ssh in to your instance 
and update run.sh yourself prior to restarting galaxy.  All you need to do is 
add 'migrated_tools_conf.xml.sample' to the SAMPLES in 
/mnt/galaxyTools/galaxy-central/run.sh, execute `sh run.sh --run-daemon` (or 
restart galaxy again from the admin page) and you should be good to go.

-Dannon

On Apr 24, 2012, at 4:36 PM, Dave Lin wrote:

 Dear Cloudman Team,
 
 I created a fresh galaxy instance using AMI: galaxy-cloudman-2011-03-22 
 (ami-da58aab3)
 In case it matters, instance type = High-Memory Double Extra Large
 
 I was trying to use the Cloudman Admin Console to update Galaxy.  
 (http://bitbucket.org/galaxy/galaxy-central) but galaxy ran into an issue 
 during the update process.
 
 Log message copied below:
 
 Any suggestions?
 
 Thanks in advance,
 Dave
 
 
 /mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1:
  RuntimeWarning: __builtin__.file size changed, may indicate binary 
 incompatibility from csamtools import * python path is: 
 /mnt/galaxyTools/galaxy-central/eggs/numpy-1.6.0-py2.6-linux-x86_64-ucs4.egg, 
 /mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg,
  /mnt/galaxyTools/galaxy-central/eggs/boto-2.2.2-py2.6.egg, 
 /mnt/galaxyTools/galaxy-central/eggs/mercurial-2.1.2-py2.6-linux-x86_64-ucs4.egg,
  /mnt/galaxyTools/galaxy-central/eggs/Whoosh-0.3.18-py2.6.egg, 
 /mnt/galaxyTools/galaxy-central/eggs/pycrypto-2.0.1-py2.6-linux-x86_64-ucs4.egg,
  
 /mnt/galaxyTools/galaxy-central/eggs/python_lzo-1.08_2.03_static-py2.6-linux-x86_64-ucs4.egg,
  
 /mnt/galaxyTools/galaxy-central/eggs/bx_python-0.7.1_7b95ff194725-py2.6-linux-x86_64-ucs4.egg,
  /mnt/galaxyTools/galaxy-central/eggs/amqplib-0.6.1-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/pexpect-2.4-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/Babel-0.9.4-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/MarkupSafe-0.12-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Mako-0.4.1-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/WebHelpers-0.2-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/simplejson-2.1.1-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/wchartype-0.1-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/docutils-0.7-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/WebOb-0.8.5-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/Routes-1.12.3-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg, 
/mnt/galaxyTools/galaxy-central/eggs/PasteDeploy-1.3.3-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/PasteScript-1.7.3-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/eggs/Paste-1.6-py2.6.egg, 
/mnt/galaxyTools/galaxy-central/lib, /usr/lib/python2.6/, /usr/lib/python2.6/plat-linux2, 
/usr/lib/python2.6/lib-tk, /usr/lib/python2.6/lib-old, 
/usr/lib/python2.6/lib-dynload Traceback (most recent call last): File 
/mnt/galaxyTools/galaxy-central/lib/galaxy/web/buildapp.py, line 82, in 
app_factory app = UniverseApplication( global_conf = global_conf, **kwargs ) 
File /mnt/galaxyTools/galaxy-central/lib/galaxy/app.py, line 25, in __init__ 
self.config.check() File 
/mnt/galaxyTools/galaxy-central/lib/galaxy/config.py, line 290, in check 
raise ConfigurationError(File not found: %s % path ) ConfigurationError: File 
not found: ./migrated_tools_conf.xml
 
 
 
 
 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] More meaningful dataset names/easier method of identifying?

2012-04-24 Thread Dannon Baker
In changeset 7013:dae7eefe2f71 I added the full file path to the dataset View 
Details page.  Galaxy administrators will always see this, and if you set 
expose_dataset_path to True in your universe_wsgi.ini, users will see it as 
well.  Hopefully that's what you're looking for, but let me know if I've 
misunderstood what you're after and I can take another look.
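
For reference, that is a one-line change in universe_wsgi.ini:

  # Show the on-disk file path on the dataset's 'View Details' page for all users.
  expose_dataset_path = True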

-Dannon

On Apr 24, 2012, at 4:41 PM, Josh Nielsen wrote:

 Hello,
 
 For a while now with the Galaxy mirror that we have I have found on many 
 occasions a need to identify which dataset_*.dat files on the file system (in 
 the [galaxy_dist]/database/files/000/ directory) belong to which user, and 
 even for the same user to distinguish between their various datasets. Files 
 directly uploaded by the user will have a Galaxy job  dataset file name 
 which match - like a Galaxy job name of data 18 (for example) which 
 actually is reflective of the file name 'dataset_18.dat' on the file system. 
 However any analysis on that file thereafter that produces another dataset 
 does not give you a clue of the corresponding file name. For example, a Clip 
 on data 18 run some time later may be called 'dataset_44.dat' on the 
 filesystem, and a Map with Bowtie on data 18 that runs on the clipped 
 'dataset_44.dat' may produce an output file of 'dataset_53.dat'. 
 
 When debugging failed jobs, and after the user has rerun them for the 
 umpteenth time, there may be dozens of identical or near-identical files to 
 weed through, and the generic naming scheme is not helpful even though it is 
 sequential (also not easy to keep track of/match up unless you are watching 
 the file writes in the directory live). The current implementation makes 
 sense for internal usage and the code that uses it, but it is difficult for a 
 human to distinguish which files match the jobs in Galaxy. 
 
 It would be useful to have more meaningful dataset file names or an easier 
 way to identify them (a record that matches the internal and external 
 names) for administrative maintenance reasons so that I can delete files, or 
 possibly even export those .dat files to a network share where our users can 
 perform manual analysis on them. Could anyone point me to where in the code I 
 could look to make the dataset names more meaningful? Or perhaps I should 
 request of the Galaxy developers (as a feature) a way for the users 
 themselves to see under the metadata name of their job (like Map with 
 Bowtie on data 18) in the right side pane the *actual* corresponding file 
 and location on the file system path to it (dataset_53.dat, for example). Or 
 if not for users at least something for Administrators. Even a database that 
 has four columns for the internal/filesystem dataset name, the job metadata 
 name, the Galaxy job number (that the user sees), and the user that the 
 dataset belongs to, would be helpful. A lot of our users are heavy into informatics though 
and would probably prefer that the user be able to see that information. Does 
anyone have any suggestions or thoughts about this?
 
 Thanks,
 Josh Nielsen
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Galaxy pbs scripts and job_working_directory files

2012-04-24 Thread Josh Nielsen
Hello again,

As I mentioned in a recent post, I often need to debug jobs running from our
local Galaxy mirror, and I often need to look at the script and data files
that the job is trying to use in order to figure out what is causing a
problem. For me, the directories containing those files are
'[galaxy_dist]/database/pbs/' and
'[galaxy_dist]/database/job_working_directory/'. Each job that is run gets a
corresponding .sh file in the pbs/ directory (like 344.sh) which has the
entire sequence of bash commands used to execute the job, normally with a
call to a wrapper script somewhere in the middle. That script information is
very useful, but the problem is that when a job fails (often within the first
30 seconds of running) the script is deleted and there is no trace of it left
in the directory. The same goes for the output and job data files in
job_working_directory/.

I have had to make do with coordinating with the user on when to (re)run
their failed job and then, quickly within the 30-second window, running a
'cp -R script_I_care_about.sh copy_of_script.sh' command, so that when the
script is deleted I have a copy that I can examine. The same goes for copying
the job_working_directory/ files. I know those directories would get very
cluttered if they were not automatically cleaned/deleted, but I find those
files essential for debugging. Is there a way to force Galaxy to retain those
files (optionally) for debugging purposes? Maybe a new option could be added
to universe_wsgi.ini for that purpose, for people who want it?

Thanks,
Josh Nielsen
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Galaxy pbs scripts and job_working_directory files

2012-04-24 Thread Dannon Baker
Josh,

Check out the cleanup_job setting in universe_wsgi.ini (and included below).  It 
sounds like 'cleanup_job = onsuccess' is exactly what you're looking for.

-Dannon

# Clean up various bits of jobs left on the filesystem after completion.  These
# bits include the job working directory, external metadata temporary files,
# and DRM stdout and stderr files (if using a DRM).  Possible values are:
# always, onsuccess, never
#cleanup_job = always
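
For example, keeping the files around only when a job fails would look like
this once uncommented:

  cleanup_job = onsuccess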

On Apr 24, 2012, at 5:00 PM, Josh Nielsen wrote:

 Hello again,
 
 As I mentioned in a recent post I often find need to debug jobs running from 
 our local Galaxy mirror and I often have the need to look at the script  
 data files that the job is trying to use in order to figure out what is 
 causing a problem. The directories containing those file are in 
 '[galaxy_dist]/database/pbs/' and 
 '[galaxy_dist]/database/job_working_directory/' for me. Each job that is run 
 gets a corresponding .sh file in the pbs/ directory (like 344.sh) which will 
 have the entire sequence of bash commands to execute the job with and also a 
 call to a wrapper script somewhere in the middle normally. That script 
 information is very useful, but the problem is that when a job fails (often 
 within the first 30 seconds of running it) the script is deleted and there is 
 no trace of it left in the directory. The same with the output or job data 
 files in job_working_directory/. 
 
 I have had to suffice with using the technique of coordinating with the user 
 when to (re)run their failed job and then quickly within the 30 second window 
 do a cp -R script_I_care_about.sh copy_of_script.sh command, so that when 
 the script is deleted I have a copy that I can examine. The same goes with 
 copying the job_working_directory/ files. I know that it would get very 
 cluttered in those directories if they were not automatically cleaned/deleted 
 but I find those files essential for debugging. Is there a way to force 
 Galaxy to retain those files (optionally) for debugging purposes? Maybe make 
 a new option in the universe.ini file for that purpose that can be set for 
 people who want it?
 
 Thanks,
 Josh Nielsen
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Galaxy pbs scripts and job_working_directory files

2012-04-24 Thread Josh Nielsen
Ah, so this would just betray my ignorance of current features. :-) I'll
give that a try!

Thanks,
Josh

On Tue, Apr 24, 2012 at 4:09 PM, Dannon Baker dannonba...@me.com wrote:

 Josh,

 Check out the cleanup_job setting in universe_wsgi.ini(and included
 below).  It sounds like 'cleanup_job = onsuccess' is exactly what you're
 looking for.

 -Dannon

 # Clean up various bits of jobs left on the filesystem after completion.
  These
 # bits include the job working directory, external metadata temporary
 files,
 # and DRM stdout and stderr files (if using a DRM).  Possible values are:
 # always, onsuccess, never
 #cleanup_job = always

 On Apr 24, 2012, at 5:00 PM, Josh Nielsen wrote:

  Hello again,
 
  As I mentioned in a recent post I often find need to debug jobs running
 from our local Galaxy mirror and I often have the need to look at the
 script  data files that the job is trying to use in order to figure out
 what is causing a problem. The directories containing those file are in
 '[galaxy_dist]/database/pbs/' and
 '[galaxy_dist]/database/job_working_directory/' for me. Each job that is
 run gets a corresponding .sh file in the pbs/ directory (like 344.sh) which
 will have the entire sequence of bash commands to execute the job with and
 also a call to a wrapper script somewhere in the middle normally. That
 script information is very useful, but the problem is that when a job fails
 (often within the first 30 seconds of running it) the script is deleted and
 there is no trace of it left in the directory. The same with the output or
 job data files in job_working_directory/.
 
  I have had to suffice with using the technique of coordinating with the
 user when to (re)run their failed job and then quickly within the 30 second
 window do a cp -R script_I_care_about.sh copy_of_script.sh command, so
 that when the script is deleted I have a copy that I can examine. The same
 goes with copying the job_working_directory/ files. I know that it would
 get very cluttered in those directories if they were not automatically
 cleaned/deleted but I find those files essential for debugging. Is there a
 way to force Galaxy to retain those files (optionally) for debugging
 purposes? Maybe make a new option in the universe.ini file for that purpose
 that can be set for people who want it?
 
  Thanks,
  Josh Nielsen
  ___
  Please keep all replies on the list by using reply all
  in your mail client.  To manage your subscriptions to this
  and other Galaxy lists, please use the interface at:
 
   http://lists.bx.psu.edu/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Cloudman- problem with galaxy update

2012-04-24 Thread Dave Lin
Hi Dannon,

Thanks for the quick reply. That worked. Really appreciate the quick and
helpful replies by the galaxy dev team.

Helpful tip for somebody else who might be reading this:

If 'sh run.sh --run-daemon' doesn't work, try 'sh run.sh --daemon start'
instead.

Best,
Dave



On Tue, Apr 24, 2012 at 1:47 PM, Dannon Baker dannonba...@me.com wrote:

 Hi Dave,

 The problem here is that the galaxy update failed to merge a change to
 run.sh because of minor customizations it has.  We'll have a long term fix
 out for this in the next week, but for now what you can do is ssh in to
 your instance and update run.sh yourself prior to restarting galaxy.  All
 you need to do is add 'migrated_tools_conf.xml.sample' to the SAMPLES in
 /mnt/galaxyTools/galaxy-central/run.sh, execute `sh run.sh --run-daemon`
 (or restart galaxy again from the admin page) and you should be good to go.

 -Dannon

 On Apr 24, 2012, at 4:36 PM, Dave Lin wrote:

  Dear Cloudman Team,
 
  I created a fresh galaxy instance using AMI: galaxy-cloudman-2011-03-22
 (ami-da58aab3)
  In case it matters, instance type = High-Memory Double Extra Large
 
  I was trying to use the Cloudman Admin Console to update Galaxy.  (
 http://bitbucket.org/galaxy/galaxy-central) but galaxy ran into an issue
 during the update process.
 
  Log message copied below:
 
  Any suggestions?
 
  Thanks in advance,
  Dave
 
 
 
 /mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1:
 RuntimeWarning: __builtin__.file size changed, may indicate binary
 incompatibility from csamtools import * python path is:
 /mnt/galaxyTools/galaxy-central/eggs/numpy-1.6.0-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/boto-2.2.2-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/mercurial-2.1.2-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Whoosh-0.3.18-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/pycrypto-2.0.1-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/python_lzo-1.08_2.03_static-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/bx_python-0.7.1_7b95ff194725-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/amqplib-0.6.1-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/pexpect-2.4-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Babel-0.9.4-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/MarkupSafe-0.12-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Mako-0.4.1-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/WebHelpers-0.2-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/simplejson-2.1.1-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/wchartype-0.1-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/docutils-0.7-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/WebOb-0.8.5-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Routes-1.12.3-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg,
 /mnt/galaxyTools/galaxy-central/eggs/PasteDeploy-1.3.3-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/PasteScript-1.7.3-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/eggs/Paste-1.6-py2.6.egg,
 /mnt/galaxyTools/galaxy-central/lib, /usr/lib/python2.6/,
 /usr/lib/python2.6/plat-linux2, /usr/lib/python2.6/lib-tk,
 /usr/lib/python2.6/lib-old, /usr/lib/python2.6/lib-dynload Traceback (most
 recent call last): File
 /mnt/galaxyTools/galaxy-central/lib/galaxy/web/buildapp.py, line 82, in
 app_factory app = UniverseApplication( global_conf = global_conf, **kwargs
 ) File /mnt/galaxyTools/galaxy-central/lib/galaxy/app.py, line 25, in
 __init__ self.config.check() File
 /mnt/galaxyTools/galaxy-central/lib/galaxy/config.py, line 290, in check
 raise ConfigurationError(File not found: %s % path ) ConfigurationError:
 File not found: ./migrated_tools_conf.xml
 
 
 
 
 
  ___
  Please keep all replies on the list by using reply all
  in your mail client.  To manage your subscriptions to this
  and other Galaxy lists, please use the interface at:
 
   http://lists.bx.psu.edu/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Creating a galaxy tool in R - You must not use 8-bit bytestrings

2012-04-24 Thread Dan Tenenbaum
Apologies for originally posting this to galaxy-user; now I realize it
belongs here.

Hello,

I'm a galaxy newbie and running into several issues trying to adapt an
R script to be a galaxy tool.

I'm looking at the XY plotting tool for guidance
(tools/plot/xy_plot.xml), but I decided not to embed my script in XML,
but instead have it in a separate script file, that way I can still
run it from the command line and make sure it works as I make
incremental changes. (So my script starts with args <- commandArgs(TRUE)).
Also, if it doesn't work, this suggests to me that
there is a problem with my galaxy configuration.

First, I tried using the r_wrapper.sh script that comes with the XY
plotting tool,  but it threw away my arguments:

An error occurred running this job: ARGUMENT
'/Users/dtenenba/dev/galaxy-dist/database/files/000/dataset_4.dat'
__ignored__

ARGUMENT '/Users/dtenenba/dev/galaxy-dist/database/files/000/dataset_3.dat'
__ignored__

ARGUMENT 'Fly' __ignored__

ARGUMENT 'Tagwise' __ignored__

etc.

So then I tried just switching to Rscript:

  <command interpreter="bash">Rscript RNASeq.R $countsTsv $designTsv
$organism $dispersion $minimumCountsPerMillion
$minimumSamplesPerTranscript $out_file1 $out_file2</command>

(My script produces as output a csv file and a pdf file. The final two
arguments I'm passing are the names of those files.)

But then I get an error that Rscript can't be found.

So I wrote a little wrapper script, Rscript_wrapper.sh:

#!/bin/sh

Rscript $*
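
As an aside on the wrapper itself, a slightly more defensive sketch is below;
"$@" preserves arguments that contain spaces, which $* does not, and exec
avoids leaving an extra shell process around:

  #!/bin/sh
  # Hypothetical variant of Rscript_wrapper.sh: forward all arguments to
  # Rscript unchanged; assumes Rscript is on the PATH of the user running Galaxy.
  exec Rscript "$@"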

And called that:
  <command interpreter="bash">Rscript_wrapper.sh RNASeq.R $countsTsv
$designTsv $organism $dispersion $minimumCountsPerMillion
$minimumSamplesPerTranscript $out_file1 $out_file2</command>

Then I got an error that RNASeq.R could not be found.

So then I added the absolute path to my R script to the command tag.
This seemed to work (that is, it got me further, to the next error),
but I'm not sure why I had to do this; in all the other tools I'm
looking at, the path to the script being run does not have to be
specified; I assumed that the command would run in the appropriate
directory.

So now I've specified the full path to my R script:

  <command interpreter="bash">Rscript_wrapper.sh
/Users/dtenenba/dev/galaxy-dist/tools/bioc/RNASeq.R $countsTsv
$designTsv $organism $dispersion $minimumCountsPerMillion
$minimumSamplesPerTranscript $out_file1 $out_file2</command>

And I get the following long error, which includes all of the output
of my R script:

Traceback (most recent call last):
  File /Users/dtenenba/dev/galaxy-dist/lib/galaxy/jobs/runners/local.py,
line 133, in run_job
job_wrapper.finish( stdout, stderr )
  File /Users/dtenenba/dev/galaxy-dist/lib/galaxy/jobs/__init__.py,
line 725, in finish
self.sa_session.flush()
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/scoping.py,
line 127, in do
return getattr(self.registry(), name)(*args, **kwargs)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/session.py,
line 1356, in flush
self._flush(objects)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/session.py,
line 1434, in _flush
flush_context.execute()
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/unitofwork.py,
line 261, in execute
UOWExecutor().execute(self, tasks)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/unitofwork.py,
line 753, in execute
self.execute_save_steps(trans, task)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/unitofwork.py,
line 768, in execute_save_steps
self.save_objects(trans, task)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/unitofwork.py,
line 759, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/orm/mapper.py,
line 1413, in _save_obj
c = connection.execute(statement.values(value_params), params)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py,
line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py,
line 874, in _execute_clauseelement
return self.__execute_context(context)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py,
line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File 
/Users/dtenenba/dev/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py,
line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, 

Re: [galaxy-dev] Delete multiple datasets

2012-04-24 Thread Jennifer Jackson

Hi Shaun,

I opened a ticket at bitbucket to track the progress of this request. If 
there is any more information, an updated email will be sent out from 
the team.


http://bitbucket.org/galaxy/galaxy-central/issue/749/delete-undelete-hide-unhide-purge-multiple

I am going to mention the current batch options, for others reading:

1 - Options -> Show Deleted Datasets or Show Hidden Datasets
2 - Click on the x for each to do the initial delete
3 - Options -> Purge Deleted Datasets

or

4 - Options -> Clone as "Clone only items that are not deleted"
    (leave the rest behind on the old history in one step)

Great suggestion,

Jen
Galaxy team


On 4/24/12 3:58 AM, SHAUN WEBB wrote:

add capability to delete multiple datasets at once, similar to the copy
datasets interface in the history options menu.

If I am running workflows and want to delete all the created datasets
(during testing or if I've used the wrong parameters etc) it is quite
tedious to delete each individual dataset. It would be useful to see and
delete any hidden datasets in a similar fashion.

Thanks
Shaun


--
Jennifer Jackson
http://galaxyproject.org
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] /bin/sh: samtools: not found--WORKAROUND

2012-04-24 Thread Michael Moore
There is apparently a persistent problem with locating samtools, which normally
lives at /usr/bin/samtools.  I encountered a similar problem in Python when
uploading BAM files.

I did not resolve the problem.  I hacked for a while on binary.py in a lib/
subdirectory and used os.system to send myself mail describing the
effective path at various points, and I added a missing

logging.basicConfig()

statement and scattered some log.warning() statements strategically.  All
this told me nothing.  So I made a few symlinks to samtools.  The one that
got things working was

ln -s /usr/bin/samtools /home/galaxy/bin/samtools

so--worked around but not resolved.

Michael

On Tue, Apr 17, 2012 at 12:15 PM, zhengqiu cai caizhq2...@yahoo.com.cn wrote:

 Hi All,

 I submitted a job to convert sam to bam, and the job was running forever
 without outputing the result. I then checked the log, and it read:
 Traceback (most recent call last):
  File /mnt/galaxyTools/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py,
 line 336, in finish_job
drm_job_state.job_wrapper.finish( stdout, stderr )
  File /mnt/galaxyTools/galaxy-dist/lib/galaxy/jobs/__init__.py, line
 637, in finish
dataset.set_meta( overwrite = False )
  File /mnt/galaxyTools/galaxy-dist/lib/galaxy/model/__init__.py, line
 875, in set_meta
return self.datatype.set_meta( self, **kwd )
  File /mnt/galaxyTools/galaxy-dist/lib/galaxy/datatypes/binary.py, line
 179, in set_meta
raise Exception, Error Setting BAM Metadata: %s % stderr
 Exception: Error Setting BAM Metadata: /bin/sh: samtools: not found

 It means that the samtools is not in the PATH. I tried to set the PATH in
 a couple of methods according the Galaxy documentation:
 1. put the path in the env.sh in the tool directory and symlink 'default'
 to the tool directory, e.g. default -> /mnt/galaxyTools/tools/samtools/0.1.18
 2. put -v PATH=/mnt/galaxyTools/tools/samtools/0.1.18 in ~/.sge_request
 3. put -v PATH=/mnt/galaxyTools/tools/samtools/0.1.18 in /path/sge_request

 none of them worked, and I got the above same problem.

 Then I checked the job log file in the job_working_directory, and it read:
 Samtools Version: 0.1.18 (r982:295)
 SAM file converted to BAM

 which shows that sge knows the PATH of samtools. To double check it, I
 added samtools index to Galaxy, and it worked well. I am very confused why
 SGE knows the tool path but cannot run the job correctly.

 The system I am using is ubuntu on EC2. I checked out the code from
 galaxy-dist on bitbucket. Other tools such as bwa and bowtie worked well
 using the same setting method(put env.sh in the tools directory to set the
 tool path)

 Thank you very much for any help or hints.

 Cai

 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Cloud instance connection refused

2012-04-24 Thread Hui Zhao
Hi,
This is Julia from Memorial Sloan-Kettering Cancer Center. I used the Galaxy
Cloudman instance last year and it worked fine. But today when I tried to
launch the instance galaxy-cloudman-2011-03-22 (ami-da58aab3), after the
instance was running it gave me 'tcp_error: A communication error occurred:
Connection refused'. Any ideas?

Thanks a lot,
Julia



___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] About an additional tool

2012-04-24 Thread Ciara Ledero
To clear things up, I am using my own tool which uses samtools internally.
I think I have tried the SAM-to-BAM tool when I was exploring Galaxy and I
think it worked fine. By the way, I am using Perl.

Thanks for the tips! I'll get back here if something goes awry again.
CL


On Wed, Apr 25, 2012 at 12:54 AM, Jim Johnson johns...@umn.edu wrote:

 I put a samtools_filter tool in the toolshed that uses the samtools view command.
 I called samtools view from the command section of the tool config and
 redirected stderr to stdout to avoid having Galaxy interpret it as an error.
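
 Purely as an illustration of that pattern (not the actual samtools_filter
 wrapper, and $input/$output are made-up parameter names), a command block
 with the redirect might look roughly like this; note that the 2>&1 has to be
 XML-escaped inside the tag:

   <command>samtools view -bS -o $output $input 2&gt;&amp;1</command>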

 Jim Johnson
 Minnesota Supercomputing Institute
 University of Minnesota


 On Tue, Apr 24, 2012 at 10:03 AM, Ciara Ledero lede...@gmail.com wrote:

 Hi there,

 I know Galaxy already has a SAM-to-BAM converter, but part of my
 exercise/task is to incorporate a script that uses samtools' view command. I
 get this error:

 [samopen] SAM header is present: 66338 sequences.

 according to Galaxy. But this might not be an error at all. Is there any
 way that I could tell Galaxy to ignore this and just continue with the
 script?

 Thanks in advance! Any help would be greatly appreciated.

 CL




___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Cloud instance connection refused

2012-04-24 Thread Dan Tenenbaum
On Tue, Apr 24, 2012 at 9:16 AM, Hui Zhao zh...@cbio.mskcc.org wrote:
 Hi,
 This is Julia from Memorial SLoan-Kettering Cancer Center. I've been using
 the galaxy Cloudman instance last year and it worked fine. But today when I
 tried to launch instance galaxy-cloudman-2011-03-22(ami-da58aab3). After the
 instance was running, it gave me tcp_error: A communication error occurred:
 Connection refused. Any ideas?


Sounds like maybe a security group problem. Did you set up your
security group as described here:
http://wiki.g2.bx.psu.edu/Admin/Cloud

Search for Add Inbound Rules.
Dan

 Thanks a lot,
 Julia




 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/