Re: [galaxy-dev] Getting example_watch_folder.py to work...

2013-05-08 Thread Neil.Burdett
It looks like the problem may have to do with the file I am using.

It is a *.nii.gz file.

Does Galaxy try to uncompress or otherwise process *.gz files when using the API
(as it does in ~/galaxy-dist/tools/data_source/upload.py)?

Does anyone know where in the code I can make an edit so my file can be uploaded?
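
For context, the sort of gzip handling I suspect is happening is sketched below
(a simplified illustration only, not the actual upload.py code):

import gzip, shutil

# Simplified sketch (not the actual upload.py code): sniff the gzip magic
# bytes and write out an uncompressed copy, which is roughly what I suspect
# happens to my .nii.gz file during upload.
def is_gzipped(path):
    handle = open(path, 'rb')
    try:
        return handle.read(2) == b'\x1f\x8b'   # gzip magic number
    finally:
        handle.close()

def maybe_decompress(src, dst):
    if is_gzipped(src):
        fin = gzip.open(src, 'rb')
        fout = open(dst, 'wb')
        shutil.copyfileobj(fin, fout)          # write an uncompressed copy
        fin.close()
        fout.close()
    else:
        shutil.copyfile(src, dst)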

Thanks
Neil

From: Burdett, Neil (ICT Centre, Herston - RBWH)
Sent: Wednesday, 8 May 2013 3:15 PM
To: galaxy-dev@lists.bx.psu.edu
Subject: RE: Getting example_watch_folder.py to work...

Further, it seems that it doesn't manage to get hold of the file specified in 
the input directory as I can see from the output:

http://barium-rbh/csiro/api/histories/964b37715ec9bd22/contents/2faba7054d92b2df

{
  "data_type": "html",
  "deleted": false,
  "download_url": "/csiro/datasets/2faba7054d92b2df/display?to_ext=html",
  "file_name": "/home/galaxy/galaxy-dist/database/files/000/dataset_137.dat",
  "file_size": 194,
  "genome_build": "?",
  "id": "2faba7054d92b2df",
  "metadata_data_lines": null,
  "metadata_dbkey": "?",
  "misc_blurb": "error",
  "misc_info": "Wed May  8 15:07:27 2013\nbashScript is: /home/galaxy/galaxy-dist/tools/visualization/extractSlice-wrapper.sh\ninput_image is: None\ncat: None: No such file or directory\nFailed reading file /tmp/tmp.9LWrz6SaLy.nii.gz\nitk::ERROR: PNGImageIO(0x1abdcf0): PNGIma",
  "model_class": "HistoryDatasetAssociation",
  "name": "Extract 2D slice on None",
  "state": "error",
  "visible": true
}


input_image is: None

Does anybody know why the file may not be getting read? It is being copied from 
the specified input directory to the output directory.

I have set :
allow_library_path_paste = True

and added my user to:
admin_users

Thanks for any help

Neil



From: Burdett, Neil (ICT Centre, Herston - RBWH)
Sent: Wednesday, May 08, 2013 2:46 PM
To: galaxy-dev@lists.bx.psu.edu
Subject: Getting example_watch_folder.py to work...
Hi,
 I'm trying to get the example_watch_folder.py to run, but it seems to fail
and I'm not sure why.

I run:

 ./example_watch_folder.py 64f3209856a3cf4f2d034a1ad5bf851c 
http://barium-rbh/csiro/api/ /home/galaxy/galaxy-drop/input 
/home/galaxy/galaxy-drop/output My API Import f597429621d6eb2b

I got the workflow Id from :

http://barium-rbh/csiro/api/workflows

which gave me:

[
  {
    "id": "f597429621d6eb2b",
    "name": "extract",
    "url": "/csiro/api/workflows/f597429621d6eb2b"
  },
  {
    "id": "f2db41e1fa331b3e",
    "name": "FULL CTE",
    "url": "/csiro/api/workflows/f2db41e1fa331b3e"
  }
]

The output I get from the command line is:
{'outputs': ['ba0fa2aed4052bce'], 'history': 'ba03619785539f8c'}

The files I put into /home/galaxy/galaxy-drop/input get copied to 
/home/galaxy/galaxy-drop/output

But nothing else happens.

If I go to http://barium-rbh/csiro/api/histories

I can see:
{
  "id": "ba03619785539f8c",
  "name": "colin.nii.gz - extract",
  "url": "/csiro/api/histories/ba03619785539f8c"
},

However when I go to:
http://barium-rbh/csiro/api/histories/ba03619785539f8c

I get:
{
  "contents_url": "/csiro/api/histories/ba03619785539f8c/contents",
  "id": "ba03619785539f8c",
  "name": "colin.nii.gz - extract",
  "state": "error",
  "state_details": {
    "discarded": 0,
    "empty": 0,
    "error": 1,
    "failed_metadata": 0,
    "new": 0,
    "ok": 0,
    "queued": 0,
    "running": 0,
    "setting_metadata": 0,
    "upload": 0
  }
}

Any ideas much appreciated

Thanks
Neil

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] Track Job Runtime

2013-05-08 Thread Geert Vandeweyer

Hi,

Are there options available to track the actual runtime of jobs on a 
cluster and store them in the database? Or are there fields in the 
database that approximate the job execution duration?


This might be useful for fine-grained wall-time estimation in a crowded 
cluster environment. What I'd like to do is fetch an average runtime per MB 
of input data for a specific tool from the database, and then use 
this for wall-time estimation of new jobs in a dynamic job runner script.
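
Something like the sketch below is what I have in mind for the estimation step
(placeholder names throughout; nothing here is an existing Galaxy function):

# Rough sketch: turn an average seconds-per-MB figure into a PBS-style
# walltime string. 'avg_seconds_per_mb' would come from the historical job
# data described above.
def estimate_walltime(input_size_mb, avg_seconds_per_mb, safety_factor=2.0):
    seconds = max(60, int(input_size_mb * avg_seconds_per_mb * safety_factor))
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    return "%02d:%02d:%02d" % (hours, minutes, secs)

# e.g. estimate_walltime(500, 1.5) would request 00:25:00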


Has this been done before?

Best,

Geert

--

Geert Vandeweyer, Ph.D.
Department of Medical Genetics
University of Antwerp
Prins Boudewijnlaan 43
2650 Edegem
Belgium
Tel: +32 (0)3 275 97 56
E-mail: geert.vandewe...@ua.ac.be
http://ua.ac.be/cognitivegenetics
http://www.linkedin.com/pub/geert-vandeweyer/26/457/726

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Track Job Runtime

2013-05-08 Thread Peter Cock
On Wed, May 8, 2013 at 10:08 AM, Geert Vandeweyer
geert.vandewey...@ua.ac.be wrote:
 Hi,

 Are there options available to track the actual runtime of jobs on a cluster
 and store them in the database?

Not yet, but I'd really like to have that information too.

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Missing test results on (Test) Tool Shed

2013-05-08 Thread Peter Cock
On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi Peter,

 Missing test components implies a tool config that does not define a
 test (i.e, a missing test definition) or a tool config that defines a test,
 but the test's input or output files are missing from the repository.

This seems to be our point of confusion: I don't understand combining
these two categories - it seems unhelpful to me.

Tools missing a test definition clearly can't be tested - but since we'd
like every tool to have tests, having this as an easily viewed listing is
useful both for authors and reviewers. It highlights tools which need
some work - or in some cases work on the Galaxy test framework itself.
They are neither passing nor failing tests - and it makes sense to list
them separately.

Tools with a test definition should be tested - if they are missing an
input or output file this is just a special case of a test failure (and can
be spotted without actually attempting to run the tool). This is clearly
a broken test and the tool author should be able to fix this easily (by
uploading the missing test data file)

 I don't see the benefit of the above where you place tools missing tests
 into a different category than tools with defined tests, but missing test 
 data.
 If any of the test components (test definition or required input or output 
 files)
 are missing, then the test cannot be executed, so defining it as a failing
 test in either case is a bit misleading.  It is actually a tool that is 
 missing
 test components that are required for execution which will result in a pass
 / fail status.

It is still a failing test (just for the trivial reason of missing a
test data file).

 It would be much simpler to change the filter for failing tests to include
 those that are missing test components so that the list of missing test
 components is a subset of the list of failing tests.

What I would like is three lists:

Latest revision: missing tool tests
 - repositories where at least 1 tool has no test defined

[The medium term TO-DO list for the Tool Author]

Latest revision: failing tool tests
 - repositories where at least 1 tool has a failing test (where I include
   tests missing their input or output test data files)

[The priority TO-DO list for the Tool Author]

Latest revision: all tool tests pass
 - repositories where every tool has tests and they all pass

[The good list, Tool Authors should aim to have everything here]

Right now http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus
would appear under both missing tool tests and failing tool tests,
but I hope to fix this and have this under missing tool tests only
(until my current roadblocks with the Galaxy Test Framework are
resolved).

I hope I've managed a clearer explanation this time,

Thanks,

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Missing test results on (Test) Tool Shed

2013-05-08 Thread Peter Cock
On Tue, May 7, 2013 at 2:10 PM, Dave Bouvier d...@bx.psu.edu wrote:
 Peter,

 As you've already noticed, I've tracked down and fixed the main issue that
 was causing inaccurate test results. Thank you again for the data you
 provided, which was of great help narrowing down the cause of the issue.

--Dave B.

Hi Dave,

I've got another problem set for you: some repositories are listed under
Latest revision: failing tool tests, yet no test results are shown
(positive or negative).

Tool shed revision: 9661:cb0432cfcc8a

My clinod wrapper with one test:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5
Revision 3:a66a914c39b5

My effective T3 wrapper with two tests:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/4567618bebbd
Revision 5:4567618bebbd

My seq_rename tool with one test:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/b51633a69a92
Revision 2:b51633a69a92

Regards,

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


[galaxy-dev] Error in installing Galaxy.

2013-05-08 Thread sridhar srinivasan
Hi ,

I am getting an error while installing Galaxy locally.

Traceback (most recent call last):
  File
/illumina/apps/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py,
line 35, in app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
  File /illumina/apps/galaxy/galaxy-dist/lib/galaxy/app.py, line 51, in
__init__
create_or_verify_database( db_url, kwargs.get( 'global_conf', {} ).get(
'__file__', None ), self.config.database_engine_options, app=self )
  File
/illumina/apps/galaxy/galaxy-dist/lib/galaxy/model/migrate/check.py, line
50, in create_or_verify_database
dataset_table = Table( dataset, meta, autoload=True )
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py,
line 108, in __call__
return type.__call__(self, name, metadata, *args, **kwargs)
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py,
line 236, in __init__
_bind_or_error(metadata).reflecttable(self,
include_columns=include_columns)
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py,
line 1261, in reflecttable
conn = self.contextual_connect()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py,
line 1229, in contextual_connect
return self.Connection(self, self.pool.connect(),
close_with_result=close_with_result, **kwargs)
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 142, in connect
return _ConnectionFairy(self).checkout()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 304, in __init__
rec = self._connection_record = pool.get()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 161, in get
return self.do_get()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 639, in do_get
con = self.create_connection()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 122, in create_connection
return _ConnectionRecord(self)
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 198, in __init__
self.connection = self.__connect()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 261, in __connect
connection = self.__pool._creator()
  File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/strategies.py,
line 80, in connect
raise exc.DBAPIError.instance(None, None, e)
OperationalError: (OperationalError) FATAL:  Ident authentication failed
for user galaxy
 None None

I created a user galaxy to install galaxy locally.

Thanks in Advance.

Sridhar
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Missing test results on (Test) Tool Shed

2013-05-08 Thread Dave Bouvier

Peter,

A technical issue prevented the tests from automatically running. I've 
resolved the issue and started a manual run; you should see test 
results within 2-3 hours.


   --Dave B.

On 5/8/13 07:09:28.000, Peter Cock wrote:

On Tue, May 7, 2013 at 2:10 PM, Dave Bouvier d...@bx.psu.edu wrote:

Peter,

As you've already noticed, I've tracked down and fixed the main issue that
was causing inaccurate test results. Thank you again for the data you
provided, which was of great help narrowing down the cause of the issue.

--Dave B.


Hi Dave,

I've got another problem set for you: some repositories are listed under
Latest revision: failing tool tests, yet no test results are shown
(positive or negative).

Tool shed revision: 9661:cb0432cfcc8a

My clinod wrapper with one test:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5
Revision 3:a66a914c39b5

My effective T3 wrapper with two tests:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/4567618bebbd
Revision 5:4567618bebbd

My seq_rename tool with one test:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/b51633a69a92
Revision 2:b51633a69a92

Regards,

Peter


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


[galaxy-dev] Protocol for changing the section of an installed tool shed repo

2013-05-08 Thread Joachim Jacob | VIB |

Hi all,


Stupidly enough, I installed a tool shed repo under 'root' of the 
toolbox. Yes, it was EMBOSS5.0 (btw, installation went flawlessly!).


1. From the admin menu, I chose to deactivate the repo (manage tool 
shed repos).
2. From the admin menu, I reinstalled the repo from the Main Tool Shed, 
and chose to put it under a section 'EMBOSS'. This was to no avail: all 
tools were still under root.

3. I manually edited the shed_tool_conf.xml and added the section tags
4. The section is now displayed, containing the EMBOSS tools. BUT the 
tools under the root of the toolbox are still there.


Any assistance here?


Cheers,
Joachim

--
Joachim Jacob

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Invalid Galaxy URL: None - Installing Tools Shed

2013-05-08 Thread Dave Bouvier

Adam,

Normally, the Galaxy URL should be automatically determined and set in a 
cookie during the repository installation process.


To help track down this issue, could you provide the revision of Galaxy 
you're running, and the end of the paster.log file when this error occurs?



   --Dave B.

On 5/7/13 20:13:38.000, Adam Brenner wrote:

Howdy,

I am trying to install a tool shed item, Emboss, but when I do this
via the admin interface, I get the following text:

Repository installation is not possible due to an invalid Galaxy URL:
None. You may need to enable cookies in your browser.

I have searched the universe_wsgi.ini file and could not find anything
that looks like Galaxy URL. Any ideas on how to set the Galaxy URL?

Thanks,
-Adam

--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences

Research Computing Support
Office of Information Technology
http://www.oit.uci.edu/rcs/

University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Error in installing Galaxy.

2013-05-08 Thread Hans-Rudolf Hotz

Hi Sridhar

Have you set up your PostgreSQL database correctly, and provided the 
right username and password in the 'universe_wsgi.ini' file?
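
For example (with placeholder credentials and database name), the relevant 
line in universe_wsgi.ini would look something like:

database_connection = postgres://galaxy:your_password@localhost:5432/galaxy_db

The 'Ident authentication failed' message typically means PostgreSQL is using 
ident authentication for that connection in pg_hba.conf rather than password 
(md5) authentication.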


See: 
http://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer?action=showredirect=Admin%2FConfig%2FPerformance#Switching_to_a_database_server


Also, see this old thread, where the same error has been reported before:
http://lists.bx.psu.edu/pipermail/galaxy-dev/2010-May/002624.html



Regards, Hans-Rudolf



On 05/08/2013 01:24 PM, sridhar srinivasan wrote:


Hi ,

I am getting error during Installing galaxy locally.

Traceback (most recent call last):
   File
/illumina/apps/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py,
line 35, in app_factory
 app = UniverseApplication( global_conf = global_conf, **kwargs )
   File /illumina/apps/galaxy/galaxy-dist/lib/galaxy/app.py, line 51,
in __init__
 create_or_verify_database( db_url, kwargs.get( 'global_conf', {}
).get( '__file__', None ), self.config.database_engine_options, app=self )
   File
/illumina/apps/galaxy/galaxy-dist/lib/galaxy/model/migrate/check.py,
line 50, in create_or_verify_database
 dataset_table = Table( dataset, meta, autoload=True )
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py,
line 108, in __call__
 return type.__call__(self, name, metadata, *args, **kwargs)
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py,
line 236, in __init__
 _bind_or_error(metadata).reflecttable(self,
include_columns=include_columns)
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py,
line 1261, in reflecttable
 conn = self.contextual_connect()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py,
line 1229, in contextual_connect
 return self.Connection(self, self.pool.connect(),
close_with_result=close_with_result, **kwargs)
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 142, in connect
 return _ConnectionFairy(self).checkout()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 304, in __init__
 rec = self._connection_record = pool.get()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 161, in get
 return self.do_get()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 639, in do_get
 con = self.create_connection()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 122, in create_connection
 return _ConnectionRecord(self)
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 198, in __init__
 self.connection = self.__connect()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py,
line 261, in __connect
 connection = self.__pool._creator()
   File
/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/strategies.py,
line 80, in connect
 raise exc.DBAPIError.instance(None, None, e)
OperationalError: (OperationalError) FATAL:  Ident authentication failed
for user galaxy
  None None

I created a user galaxy to install galaxy locally.

Thanks in Advance.

Sridhar



___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Invalid Galaxy URL: None - Installing Tools Shed

2013-05-08 Thread Greg Von Kuster
In addition, it's helpful to know which browser you are using. Safari has 
fairly strict policies on 3rd-party cookies if you have them blocked. Chrome 
and Firefox are less strict, so unblocking 3rd-party cookies if using Safari, or 
simply switching to Chrome or Firefox, will probably solve this issue.

Greg Von Kuster

On May 8, 2013, at 9:45 AM, Dave Bouvier wrote:

 Adam,
 
 Normally, the Galaxy URL should be automatically determined and set in a 
 cookie during the repository installation process.
 
 To help track down this issue, could you provide the revision of Galaxy 
 you're running, and the end of the paster.log file when this error occurs?
 
 
  --Dave B.
 
 On 5/7/13 20:13:38.000, Adam Brenner wrote:
 Howdy,
 
 I am trying to install a tools shed item, Emboss, but when I do this
 via the admin interface, I get the following text:
 
 Repository installation is not possible due to an invalid Galaxy URL:
 None. You may need to enable cookies in your browser.
 
 I have searched the universe_wsgi.ini file and could not find anything
 that looks like Galaxy URL. Any ideas on how to set the Galaxy URL?
 
 Thanks,
 -Adam
 
 --
 Adam Brenner
 Computer Science, Undergraduate Student
 Donald Bren School of Information and Computer Sciences
 
 Research Computing Support
 Office of Information Technology
 http://www.oit.uci.edu/rcs/
 
 University of California, Irvine
 www.ics.uci.edu/~aebrenne/
 aebre...@uci.edu
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/
 
 To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/
 
 To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Track Job Runtime

2013-05-08 Thread Bossers, Alex
+1 for me!
Alex



From: galaxy-dev-boun...@lists.bx.psu.edu [galaxy-dev-boun...@lists.bx.psu.edu] 
on behalf of Peter Cock [p.j.a.c...@googlemail.com]
Sent: Wednesday, 8 May 2013 12:06
To: Geert Vandeweyer
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Track Job Runtime

On Wed, May 8, 2013 at 10:08 AM, Geert Vandeweyer
geert.vandewey...@ua.ac.be wrote:
 Hi,

 Are there options available to track the actual runtime of jobs on a cluster
 and store them in the database?

Not yet, but I'd really like to have that information too.

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Protocol for changing the section of an installed tool shed repo

2013-05-08 Thread Dave Bouvier

Joachim,

As you discovered, deactivating a repository does not give you the 
option to select a new tool panel section. However, if you select the 
option to remove the repository from disk, and then reinstall it, you 
will be presented with the tool panel section page, where you can 
uncheck the 'no changes' box and either select or create a tool panel 
section for the tools contained in the repository.


   --Dave B.

On 5/8/13 09:44:57.000, Joachim Jacob | VIB | wrote:

Hi all,


Stupidly enough, I installed a tool shed repo under 'root' of the
toolbox. Yes, it was EMBOSS5.0 (btw, installation went flawless!).

1. From the admin menu, I choose to unactivate the repo (manage tool
shed repo's)
2. From the admin menu, I reinstalled the tool shed from Main Tool Shed,
and chose to put it under a section 'EMBOSS'. This to no avail: all
tools still under root.
3. I manually edited the shed_tool_conf.xml and added the section tags
4. The section is now displayed, containing the EMBOSS tools. BUT the
tools under the root of the toolbox are still there.

Any assistance here?


Cheers,
Joachim


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Protocol for changing the section of an installed tool shed repo

2013-05-08 Thread Björn Grüning
Hi Joachim,

can you try to delete/rename integrated_tool_panel.xml? It is generated
during start-up.

Hope that helps,
Björn

 Hi all,
 
 
 Stupidly enough, I installed a tool shed repo under 'root' of the 
 toolbox. Yes, it was EMBOSS5.0 (btw, installation went flawless!).
 
 1. From the admin menu, I choose to unactivate the repo (manage tool 
 shed repo's)
 2. From the admin menu, I reinstalled the tool shed from Main Tool Shed, 
 and chose to put it under a section 'EMBOSS'. This to no avail: all 
 tools still under root.
 3. I manually edited the shed_tool_conf.xml and added the section tags
 4. The section is now displayed, containing the EMBOSS tools. BUT the 
 tools under the root of the toolbox are still there.
 
 Any assistance here?
 
 
 Cheers,
 Joachim
 



___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Branden Timm

Hi All,
  I've been working to configure a new Galaxy instance to run jobs 
under Condor.  Things are 99% working at this point, but what seems to 
be happening is after the Condor job finishes Galaxy tries to clean up a 
cluster file that isn't there, namely the .ec (exit code) file.  
Relevant log info:


galaxy.jobs DEBUG 2013-05-07 15:02:49,364 (1985) Working directory for 
job is: /home/GLBRCORG/galaxy/database/job_working_directory/001/1985
galaxy.jobs.handler DEBUG 2013-05-07 15:02:49,387 (1985) Dispatching to 
condor runner
galaxy.jobs DEBUG 2013-05-07 15:02:49,720 (1985) Persisting job 
destination (destination id: condor)

galaxy.jobs.handler INFO 2013-05-07 15:02:49,761 (1985) Job dispatched
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,368 (1985) 
submitting file /home/GLBRCORG/galaxy/database/condor/galaxy_1985.sh
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,369 (1985) command 
is: python 
/home/GLBRCORG/galaxy/galaxy-central/tools/fastq/fastq_to_fasta.py 
'/home/GLBRCORG/galaxy/database/files/000/dataset_3.dat' 
'/home/GLBRCORG/galaxy/database/files/002/dataset_2842.dat' ''; cd 
/home/GLBRCORG/galaxy/galaxy-central; 
/home/GLBRCORG/galaxy/galaxy-central/set_metadata.sh 
/home/GLBRCORG/galaxy/database/files 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985 . 
/home/GLBRCORG/galaxy/galaxy-central/universe_wsgi.ini 
/home/GLBRCORG/galaxy/database/tmp/tmpGe1JZJ 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/galaxy.json /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_in_HistoryDatasetAssociation_3161_are5Bg,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_kwds_HistoryDatasetAssociation_3161_p73Yus,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_out_HistoryDatasetAssociation_3161_tLqep6,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_results_HistoryDatasetAssociation_3161_3QSW5X,,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_override_HistoryDatasetAssociation_3161_JUFvmk 


galaxy.jobs.runners.condor INFO 2013-05-07 15:02:58,960 (1985) queued as 15
galaxy.jobs DEBUG 2013-05-07 15:02:59,110 (1985) Persisting job 
destination (destination id: condor)
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:59,536 (1985/15) job 
is now running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:16,966 (1985/15) job 
is now running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:17,279 (1985/15) job 
has completed
galaxy.jobs.runners DEBUG 2013-05-07 15:07:17,417 (1985/15) Unable to 
cleanup /home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec: [Errno 2] 
No such file or directory: 
'/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec'

galaxy.jobs DEBUG 2013-05-07 15:07:17,560 setting dataset state to ERROR
galaxy.jobs DEBUG 2013-05-07 15:07:17,961 job 1985 ended
galaxy.datatypes.metadata DEBUG 2013-05-07 15:07:17,961 Cleaning up 
external metadata files


I've done a watch on the condor job directory, and as far as I can tell 
galaxy_1985.ec never gets created.  From a cursory look at 
lib/galaxy/jobs/runners/__init__.py and condor.py, it looks like the 
cleanup is happening in the AsynchronousJobState::cleanup method, which 
iterates on the cleanup_file_attributes list.  I naively tried to 
override cleanup_file_attributes in CondorJobState to disinclude 
'exit_code_file', to no avail.


I'm hoping somebody can spot where the hiccup is here.  Another question 
that is on my mind is should a failure to cleanup cluster files set the 
dataset state to ERROR?  An inspection of the output file from my job 
leads me to believe it finished just fine, and indicating failure to the 
user because Galaxy couldn't cleanup a 1b error code file seems a little 
extreme to me.


Thanks!

--
Branden Timm
bt...@energy.wisc.edu
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Missing test results on (Test) Tool Shed

2013-05-08 Thread Greg Von Kuster
Hi Peter,

On May 8, 2013, at 6:45 AM, Peter Cock wrote:

 On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi Peter,
 
 Missing test components implies a tool config that does not define a
 test (i.e, a missing test definition) or a tool config that defines a test,
 but the test's input or output files are missing from the repository.
 
 This seems to be our point of confusion: I don't understand combining
 these two categories - it seems unhelpful to me.

I feel this is just a difference of opinion.  Combining missing tests and 
missing test data into a single category is certainly justifiable.  Any 
repository that falls into this category clearly states to the owner what is 
missing, and the owner can easily know that work is needed to prepare the 
repository contents for testing, whether that work falls into the category of 
adding a missing test or adding missing test data.

 
 Tools missing a test definition clearly can't be tested - but since we'd
 like every tool to have tests having this as an easily view listing is
 useful both for authors and reviewers.

But it is an easily viewed listing.  It is currently very easy to determine if 
a tool is missing a defined test, is missing test data, or both.

 It highlights tools which need
 some work - or in some cases work on the Galaxy test framework itself.
 They are neither passing nor failing tests - and it makes sense to list
 them separately.
 
 Tools with a test definition should be tested

This is where I disagree.  It currently takes a few seconds for our 'check 
repositories for test components' process to crawl the entire main tool shed and set 
flags for those repositories missing test components.  However, the separate 
script that crawls the main tool shed and installs and tests repositories that 
are not missing test components currently takes hours to run, even though less 
than 10% of the repositories are currently tested (due to missing test 
components on most of them).

Installing and testing repositories that have tools with defined tests but 
missing test data is potentially costly from a time perspective.  Let's take a 
simple example:

Repo A has 1 tool that includes a defined test, but is missing required test 
data from the repository.  The tool in repo A defines 2 3rd party tool 
dependencies that must be installed and compiled.  In addition, repo A defines 
a repository dependency whose ultimate chain of repository installations 
results in 4 additional repositories with 16 additional 3rd party tool 
dependencies, with a total installation time of 2 hours.  All of this time is 
taken in order to test the tool in repo A when we already know that it will not 
succeed because it is missing test data.  This is certainly a realistic 
scenario.


 - if they are missing an
 input or output file this is just a special case of a test failure (and can
 be spotted without actually attempting to run the tool).

Yes, but this is what we are doing now.  We are spotting this scenario without 
installing the repository or running any defined tests by running the tool.

 This is clearly
 a broken test and the tool author should be able to fix this easily (by
 uploading the missing test data file)

Yes, but this is already possible for them to clearly see without having to 
install the repository or run any tests.

 
 I don't see the benefit of the above where you place tools missing tests
 into a different category than tools with defined tests, but missing test 
 data.
 If any of the test components (test definition or required input or output 
 files)
 are missing, then the test cannot be executed, so defining it as a failing
 test in either case is a bit misleading.  It is actually a tool that is 
 missing
 test components that are required for execution which will result in a pass
 / fail status.
 
 It is still a failing test (just for the trivial reason of missing a
 test data file).
 
 It would be much simpler to change the filter for failing tests to include
 those that are missing test components so that the list of missing test
 components is a subset of the list of failing tests.
 
 What I would like is three lists:
 
 Latest revision: missing tool tests
 - repositories where at least 1 tool has no test defined
 
 [The medium term TO-DO list for the Tool Author]
 
 Latest revision: failing tool tests
 - repositories where at least 1 tool has a failing test (where I include
  tests missing their input or output test data files)
 
 [The priority TO-DO list for the Tool Author]
 
 Latest revision: all tool tests pass
 - repositories where every tool has tests and they all pass
 
 [The good list, Tool Authors should aim to have everything here]
 
 Right now http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus
 would appear under both missing tool tests and failing tool tests,
 but I hope to fix this and have this under missing tool tests only
 (until my current roadblocks with the Galaxy Test Framework are
 resolved).


Re: [galaxy-dev] Error installing migrated tool deps

2013-05-08 Thread Branden Timm

Confirmed working, thanks!

-Branden

On 5/7/2013 1:25 PM, Dave Bouvier wrote:

Branden,

Thank you for reporting this issue, I've committed a fix in 
9662:6c462a5a566d. You should be able to re-run your tool migration 
after updating to that revision.


   --Dave B.

On 5/7/13 11:09:31.000, Branden Timm wrote:

I recently upgraded to the latest galaxy-central, and was advised on
first run that two tools in tool_conf.xml had been removed from the
distribution, but could be installed from the tool shed.  I ran the
script that it generated, however the script fails with the following
messages:

No handlers could be found for logger
galaxy.tools.parameters.dynamic_options
/home/GLBRCORG/galaxy/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1: 


RuntimeWarning: __builtin__.file size changed, may indicate binary
incompatibility
   from csamtools import *
Repositories will be installed into configured tool_path location
../shed_tools
Skipping automatic install of repository ' bowtie_wrappers ' because it
has already been installed in location
../shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/bowtie_wrappers/0c7e4eadfb3c 



Traceback (most recent call last):
   File ./scripts/migrate_tools/migrate_tools.py, line 21, in module
 app = MigrateToolsApplication( sys.argv[ 1 ] )
   File
/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/migrate/common.py, 


line 76, in __init__
 install_dependencies=install_dependencies )
   File
/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py, 


line 75, in __init__
 self.install_repository( repository_elem, install_dependencies )
   File
/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py, 


line 319, in install_repository
 install_dependencies=install_dependencies )
   File
/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py, 


line 203, in handle_repository_contents
 persist=True )
   File
/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/util/metadata_util.py, 


line 594, in generate_metadata_for_changeset_revision
 tool, valid, error_message = tool_util.load_tool_from_config( app,
app.security.encode_id( repository.id ), full_path )
AttributeError: 'MigrateToolsApplication' object has no attribute
'security'

Any help would be greatly appreciated.  Thanks!

--
Branden Timm
bt...@energy.wisc.edu

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Nate Coraor
On May 8, 2013, at 10:08 AM, Branden Timm wrote:

 Hi All, 
   I've been working to configure a new Galaxy instance to run jobs under 
 Condor.  Things are 99% working at this point, but what seems to be happening 
 is after the Condor job finishes Galaxy tries to clean up a cluster file that 
 isn't there, namely the .ec (exit code) file.  Relevant log info: 
 
 galaxy.jobs DEBUG 2013-05-07 15:02:49,364 (1985) Working directory for job 
 is: /home/GLBRCORG/galaxy/database/job_working_directory/001/1985 
 galaxy.jobs.handler DEBUG 2013-05-07 15:02:49,387 (1985) Dispatching to 
 condor runner 
 galaxy.jobs DEBUG 2013-05-07 15:02:49,720 (1985) Persisting job destination 
 (destination id: condor) 
 galaxy.jobs.handler INFO 2013-05-07 15:02:49,761 (1985) Job dispatched 
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,368 (1985) submitting 
 file /home/GLBRCORG/galaxy/database/condor/galaxy_1985.sh   
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,369 (1985) command is: 
 python /home/GLBRCORG/galaxy/galaxy-central/tools/fastq/fastq_to_fasta.py 
 '/home/GLBRCORG/galaxy/database/files/000/dataset_3.dat' 
 '/home/GLBRCORG/galaxy/database/files/002/dataset_2842.dat' ''; cd 
 /home/GLBRCORG/galaxy/galaxy-central; 
 /home/GLBRCORG/galaxy/galaxy-central/set_metadata.sh 
 /home/GLBRCORG/galaxy/database/files 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985 . 
 /home/GLBRCORG/galaxy/galaxy-central/universe_wsgi.ini 
 /home/GLBRCORG/galaxy/database/tmp/tmpGe1JZJ 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/galaxy.json 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_in_HistoryDatasetAssociation_3161_are5Bg,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_kwds_HistoryDatasetAssociation_3161_p73Yus,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_out_HistoryDatasetAssociation_3161_tLqep6,/home/GL!
 
BRCORG/galaxy/database/job_working_directory/001/1985/metadata_results_HistoryDatasetAssociation_3161_3QSW5X,,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_override_HistoryDatasetAssociation_3161_JUFvmk
 
 galaxy.jobs.runners.condor INFO 2013-05-07 15:02:58,960 (1985) queued as 15 
 galaxy.jobs DEBUG 2013-05-07 15:02:59,110 (1985) Persisting job destination 
 (destination id: condor) 
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:59,536 (1985/15) job is now 
 running 
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:16,966 (1985/15) job is now 
 running 
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:17,279 (1985/15) job has 
 completed 
 galaxy.jobs.runners DEBUG 2013-05-07 15:07:17,417 (1985/15) Unable to cleanup 
 /home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec: [Errno 2] No such file 
 or directory: '/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec' 
 galaxy.jobs DEBUG 2013-05-07 15:07:17,560 setting dataset state to ERROR 
 galaxy.jobs DEBUG 2013-05-07 15:07:17,961 job 1985 ended 
 galaxy.datatypes.metadata DEBUG 2013-05-07 15:07:17,961 Cleaning up external 
 metadata files 

Hi Branden,

The ec file is optional, and the message that it's unable to be cleaned up is a 
red herring in this case.  The state is being set to ERROR, I suspect because 
the check of its outputs on line 894 of lib/galaxy/jobs/__init__.py is failing:

 894 if ( self.check_tool_output( stdout, stderr, tool_exit_code, 
job )):

You might need to add some debugging to see where exactly this error 
determination is coming from.
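
For example, something along these lines just before that call would show which
value is triggering the error (purely illustrative, not existing Galaxy code):

import logging
log = logging.getLogger( __name__ )

# Purely illustrative debugging helper (not existing Galaxy code): log the
# values the error determination is based on, just before check_tool_output()
# is reached in lib/galaxy/jobs/__init__.py.
def log_job_outcome( job_id, tool_exit_code, stdout, stderr ):
    log.debug( "(%s) exit_code=%r stdout=%r stderr=%r",
               job_id, tool_exit_code, (stdout or '')[:200], (stderr or '')[:200] )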

--nate

 
 I've done a watch on the condor job directory, and as far as I can tell 
 galaxy_1985.ec never gets created.  From a cursory look at 
 lib/galaxy/jobs/runners/__init__.py and condor.py, it looks like the cleanup 
 is happening in the AsynchronousJobState::cleanup method, which iterates on 
 the cleanup_file_attributes list.  I naively tried to override 
 cleanup_file_attributes in CondorJobState to disinclude 'exit_code_file', to 
 no avail. 
 
 I'm hoping somebody can spot where the hiccup is here.  Another question that 
 is on my mind is should a failure to cleanup cluster files set the dataset 
 state to ERROR?  An inspection of the output file from my job leads me to 
 believe it finished just fine, and indicating failure to the user because 
 Galaxy couldn't cleanup a 1b error code file seems a little extreme to me. 
 
 Thanks! 
 
 -- 
 Branden Timm 
 bt...@energy.wisc.edu 
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/
 
 To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Track Job Runtime

2013-05-08 Thread Geert Vandeweyer

Hi all,

I'm fiddling with this, and I have a proof of principle working for PBS 
jobs using a very ugly SQLAlchemy hack.

The idea is:
- if a job goes from queued to running: store seconds since epoch in 'runtime'
- if a job goes from running to finished: compare times and store the 
difference as the runtime.


I've created an extra field 'runtime' for holding this info, using 
seconds since epoch.
When querying afterwards, one should filter for 'OK' jobs, and discard 
jobs that are still running.


Right now, I have these statements to add timestamps to the database 
(somewhere in the check_watched_items function in pbs.py):


self.sa_session.execute('UPDATE job SET runtime = :runtime WHERE id = 
:id',{'runtime':runtime,'id':galaxy_job_id})


Does anybody know how to translate this into a proper SQLAlchemy statement? 
Something like the following, which do not work:


self.sa_session.query(self.model.Job).filter_by(id=galaxy_job_id).update({runtime:runtime},synchronize_session=False)
  or
sa_session.execute(self.sa_session.Table('job').update().values(runtime=runtime).where(id=galaxy_job_id))
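
Another option might be to go through the ORM object itself. A sketch, assuming
the extra 'runtime' column exists on the Job model; 'galaxy_job_id' and
'started_epoch' are placeholders from my description above:

import time

# Sketch only: fetch the Job via the ORM, set the extra 'runtime' column,
# and flush, instead of issuing raw SQL.
def record_runtime( sa_session, model, galaxy_job_id, started_epoch ):
    job = sa_session.query( model.Job ).get( galaxy_job_id )
    job.runtime = int( time.time() ) - started_epoch   # wall time in seconds
    sa_session.add( job )
    sa_session.flush()                                  # emits the UPDATE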

If I can figure this out, I'll try to polish it and create a pull request.


Best,

Geert

On 05/08/2013 03:58 PM, Bossers, Alex wrote:

+1 for me!
Alex



From: galaxy-dev-boun...@lists.bx.psu.edu [galaxy-dev-boun...@lists.bx.psu.edu] 
on behalf of Peter Cock [p.j.a.c...@googlemail.com]
Sent: Wednesday, 8 May 2013 12:06
To: Geert Vandeweyer
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Track Job Runtime

On Wed, May 8, 2013 at 10:08 AM, Geert Vandeweyer
geert.vandewey...@ua.ac.be wrote:

Hi,

Are there options available to track the actual runtime of jobs on a cluster
and store them in the database?

Not yet, but I'd really like to have that information too.

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/



--

Geert Vandeweyer, Ph.D.
Department of Medical Genetics
University of Antwerp
Prins Boudewijnlaan 43
2650 Edegem
Belgium
Tel: +32 (0)3 275 97 56
E-mail: geert.vandewe...@ua.ac.be
http://ua.ac.be/cognitivegenetics
http://www.linkedin.com/pub/geert-vandeweyer/26/457/726

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Missing test results on (Test) Tool Shed

2013-05-08 Thread Peter Cock
On Wed, May 8, 2013 at 3:28 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi Peter,

 On May 8, 2013, at 6:45 AM, Peter Cock wrote:

 On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi Peter,

 Missing test components implies a tool config that does not define a
 test (i.e, a missing test definition) or a tool config that defines a test,
 but the test's input or output files are missing from the repository.

 This seems to be our point of confusion: I don't understand combining
 these two categories - it seems unhelpful to me.

 I feel this is just a difference of opinion.  Combining missing tests and
 missing test data into a single category is certainly justifiable.  Any
 repository that falls into this category clearly states to the owner what
 is missing, and the owner can easily know that work is needed to
 prepare the repository contents for testing, whether that work falls
 into the category of adding a missing test or adding missing test data.

Speaking as a tool author, these are two rather different categories
which should not be merged. I personally would put tools with defined
tests but missing input/output files under failing tests not under
missing tests.

 Tools missing a test definition clearly can't be tested - but since we'd
 like every tool to have tests having this as an easily view listing is
 useful both for authors and reviewers.

 But is is an easily viewed listing.  It is currently very easy to determine
 if a tool is missing a defined test, is missing test data, or both.

No, it isn't easily viewable - it is easy to get a combined listing of
repositories with (a) missing tests and/or (b) tests with missing files,
and then very tedious to look at these repositories one by one to see
which it is.

 It highlights tools which need
 some work - or in some cases work on the Galaxy test framework itself.
 They are neither passing nor failing tests - and it makes sense to list
 them separately.

 Tools with a test definition should be tested

 This is where I disagree. ... snip
 Installing a testing repositories that have tools with defined tests
 but missing test data is potentially costly from a time perspective.
 snip

I wasn't meaning to suggest you do that though - you're already
able to short cut these cases and mark the test as failed. These
are the quickest possible tests to run - they fail at the first hurdle.

 - if they are missing an input or output
 file this is just a special case of a test failure (and can
 be spotted without actually attempting to run the tool).

 Yes, but this is what we are doing now.  We are spotting
 this scenario without installing the repository or running
 any defined tests by running the tool.

Yes, and that is fine - I'm merely talking about how this information
is presented to the Tool Shed viewer.

 This is clearly
 a broken test and the tool author should be able to fix this easily (by
 uploading the missing test data file)

 Yes, but this is already possible for them to clearly see without
 having to install the repository or run any tests.

Indeed, but this is a failing test and should (in my view) be
listed under failing tests, not under missing tests.

We're just debating where to list such problem tools/repositories
in the Tool Shed's test results interface.

Regards,

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Branden Timm
Nate, thanks for the tip.  I'm adding some debugging info around that 
block now to inspect what is going on.


One thing I just remembered (it's been a while since I debugged Galaxy tools) 
- does Galaxy still treat ANY stderr output as an indication of job 
failure?  There are two warnings in the stderr for the job:


WARNING:galaxy.datatypes.registry:Error loading datatype with extension 
'blastxml': 'module' object has no attribute 'BlastXml'
WARNING:galaxy.datatypes.registry:Error appending sniffer for datatype 
'galaxy.datatypes.xml:BlastXml' to sniff_order: 'module' object has no 
attribute 'BlastXml'

--
Branden Timm
bt...@energy.wisc.edu


On 5/8/2013 10:17 AM, Nate Coraor wrote:

On May 8, 2013, at 10:08 AM, Branden Timm wrote:


Hi All,
   I've been working to configure a new Galaxy instance to run jobs under 
Condor.  Things are 99% working at this point, but what seems to be happening 
is after the Condor job finishes Galaxy tries to clean up a cluster file that 
isn't there, namely the .ec (exit code) file.  Relevant log info:

galaxy.jobs DEBUG 2013-05-07 15:02:49,364 (1985) Working directory for job is: 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985
galaxy.jobs.handler DEBUG 2013-05-07 15:02:49,387 (1985) Dispatching to condor 
runner
galaxy.jobs DEBUG 2013-05-07 15:02:49,720 (1985) Persisting job destination 
(destination id: condor)
galaxy.jobs.handler INFO 2013-05-07 15:02:49,761 (1985) Job dispatched
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,368 (1985) submitting file 
/home/GLBRCORG/galaxy/database/condor/galaxy_1985.sh
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,369 (1985) command is: 
python /home/GLBRCORG/galaxy/galaxy-central/tools/fastq/fastq_to_fasta.py 
'/home/GLBRCORG/galaxy/database/files/000/dataset_3.dat' 
'/home/GLBRCORG/galaxy/database/files/002/dataset_2842.dat' ''; cd 
/home/GLBRCORG/galaxy/galaxy-central; 
/home/GLBRCORG/galaxy/galaxy-central/set_metadata.sh 
/home/GLBRCORG/galaxy/database/files 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985 . 
/home/GLBRCORG/galaxy/galaxy-central/universe_wsgi.ini 
/home/GLBRCORG/galaxy/database/tmp/tmpGe1JZJ 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/galaxy.json 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_in_HistoryDatasetAssociation_3161_are5Bg,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_kwds_HistoryDatasetAssociation_3161_p73Yus,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_out_HistoryDatasetAssociation_3161_tLqep6,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_results_HistoryDatasetAssociation_3161_3QSW5X,,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_override_HistoryDatasetAssociation_3161_JUFvmk

galaxy.jobs.runners.condor INFO 2013-05-07 15:02:58,960 (1985) queued as 15
galaxy.jobs DEBUG 2013-05-07 15:02:59,110 (1985) Persisting job destination 
(destination id: condor)
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:59,536 (1985/15) job is now 
running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:16,966 (1985/15) job is now 
running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:17,279 (1985/15) job has 
completed
galaxy.jobs.runners DEBUG 2013-05-07 15:07:17,417 (1985/15) Unable to cleanup 
/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec: [Errno 2] No such file or 
directory: '/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec'
galaxy.jobs DEBUG 2013-05-07 15:07:17,560 setting dataset state to ERROR
galaxy.jobs DEBUG 2013-05-07 15:07:17,961 job 1985 ended
galaxy.datatypes.metadata DEBUG 2013-05-07 15:07:17,961 Cleaning up external 
metadata files

Hi Branden,

The ec file is optional, and the message that it's unable to be cleaned up is a 
red herring in this case.  The state is being set to ERROR, i suspect because 
the check of its outputs on line 894 of lib/galaxy/jobs/__init__.py is failing:

  894 if ( self.check_tool_output( stdout, stderr, tool_exit_code, 
job )):

You might need to add some debugging to see where exactly this error 
determination is coming from.
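
For example, the sort of thing to paste just above that call (a rough, self-contained sketch; the stand-in values below are only illustrations of what the job wrapper would have in scope at that point):

import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("galaxy.jobs")

# Stand-in values for what the job wrapper would be holding at that point:
stdout = ""
stderr = "WARNING:galaxy.datatypes.registry:Error loading datatype ..."
tool_exit_code = 0

# The kind of lines to add just above the check_tool_output() call, so the
# log shows exactly what the error decision is being based on:
log.debug("tool_exit_code=%r", tool_exit_code)
log.debug("stdout=%r", stdout[:500])
log.debug("stderr=%r", stderr[:500])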

--nate


I've done a watch on the condor job directory, and as far as I can tell 
galaxy_1985.ec never gets created.  From a cursory look at 
lib/galaxy/jobs/runners/__init__.py and condor.py, it looks like the cleanup is 
happening in the AsynchronousJobState::cleanup method, which iterates on the 
cleanup_file_attributes list.  I naively tried to override 
cleanup_file_attributes in CondorJobState to exclude 'exit_code_file', to no 
avail.

I'm hoping somebody can spot where the hiccup is here.  Another question that 
is on my mind is: should a failure to clean up cluster files set the dataset 
state to ERROR?  An inspection of the output file from my job leads me to 
believe it finished just fine, and indicating failure to the user because 
Galaxy couldn't clean up a 1-byte exit code file seems a little extreme to me.

Thanks!

--

Re: [galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Nate Coraor
On May 8, 2013, at 12:21 PM, Branden Timm wrote:

 Nate, thanks for the tip.  I'm adding some debugging info around that block 
 now to inspect what is going on.
 
 One thing I just remembered (been awhile since I debugged Galaxy tools) - 
 does Galaxy still treat ANY stderr output as an indication of job failure?

Yes, unless the tool defines that error codes should be used instead.  Most 
tools do not.
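
To illustrate the default rule (this is not Galaxy's actual check_tool_output() code, just the behaviour it applies when a tool declares no error-code handling - tools opt in via the stdio/exit_code settings in their XML, if I remember the syntax correctly):

def job_failed_by_default(stdout, stderr, exit_code):
    # With no per-tool error-code configuration, any text on stderr marks
    # the job as failed; the exit code itself is not consulted.
    return bool(stderr.strip())

print(job_failed_by_default("all good", "WARNING: harmless notice", 0))  # True  -> job errors
print(job_failed_by_default("all good", "", 1))                          # False -> exit code ignored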

 There are two warnings in the stderr for the job:
 
 WARNING:galaxy.datatypes.registry:Error loading datatype with extension 
 'blastxml': 'module' object has no attribute 'BlastXml'
 WARNING:galaxy.datatypes.registry:Error appending sniffer for datatype 
 'galaxy.datatypes.xml:BlastXml' to sniff_order: 'module' object has no 
 attribute 'BlastXml'

This is likely the problem, and should be fixable by updating your 
datatypes_conf.xml from the sample in your release.
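
If it helps, here is a quick way to list every datatype entry whose class can no longer be imported - a sketch only: run it from the Galaxy root, and the file name and attribute layout are assumed from the warning messages above.

import sys
import xml.etree.ElementTree as ET

sys.path.insert(0, "lib")                      # so the galaxy.* modules resolve
tree = ET.parse("datatypes_conf.xml")          # path is an assumption
for dt in tree.findall(".//datatype"):
    type_attr = dt.get("type")                 # e.g. "galaxy.datatypes.xml:BlastXml"
    if not type_attr or ":" not in type_attr:
        continue
    module_name, class_name = type_attr.split(":", 1)
    try:
        __import__(module_name)
        getattr(sys.modules[module_name], class_name)
    except (ImportError, AttributeError) as e:
        print("%s (%s): %s" % (dt.get("extension"), type_attr, e))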

--nate

 
 --
 Branden Timm
 bt...@energy.wisc.edu
 
 
 On 5/8/2013 10:17 AM, Nate Coraor wrote:
 On May 8, 2013, at 10:08 AM, Branden Timm wrote:
 
 Hi All,
   I've been working to configure a new Galaxy instance to run jobs under 
 Condor.  Things are 99% working at this point, but what seems to be 
 happening is after the Condor job finishes Galaxy tries to clean up a 
 cluster file that isn't there, namely the .ec (exit code) file.  Relevant 
 log info:
 
 galaxy.jobs DEBUG 2013-05-07 15:02:49,364 (1985) Working directory for job 
 is: /home/GLBRCORG/galaxy/database/job_working_directory/001/1985
 galaxy.jobs.handler DEBUG 2013-05-07 15:02:49,387 (1985) Dispatching to 
 condor runner
 galaxy.jobs DEBUG 2013-05-07 15:02:49,720 (1985) Persisting job destination 
 (destination id: condor)
 galaxy.jobs.handler INFO 2013-05-07 15:02:49,761 (1985) Job dispatched
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,368 (1985) submitting 
 file /home/GLBRCORG/galaxy/database/condor/galaxy_1985.sh
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,369 (1985) command is: 
 python /home/GLBRCORG/galaxy/galaxy-central/tools/fastq/fastq_to_fasta.py 
 '/home/GLBRCORG/galaxy/database/files/000/dataset_3.dat' 
 '/home/GLBRCORG/galaxy/database/files/002/dataset_2842.dat' ''; cd 
 /home/GLBRCORG/galaxy/galaxy-central; 
 /home/GLBRCORG/galaxy/galaxy-central/set_metadata.sh 
 /home/GLBRCORG/galaxy/database/files 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985 . 
 /home/GLBRCORG/galaxy/galaxy-central/universe_wsgi.ini 
 /home/GLBRCORG/galaxy/database/tmp/tmpGe1JZJ 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/galaxy.json 
 /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_in_HistoryDatasetAssociation_3161_are5Bg,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_kwds_HistoryDatasetAssociation_3161_p73Yus,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_out_HistoryDatasetAssociation_3161_tLqep6,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_results_HistoryDatasetAssociation_3161_3QSW5X,,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_override_HistoryDatasetAssociation_3161_JUFvmk
 galaxy.jobs.runners.condor INFO 2013-05-07 15:02:58,960 (1985) queued as 15
 galaxy.jobs DEBUG 2013-05-07 15:02:59,110 (1985) Persisting job destination 
 (destination id: condor)
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:59,536 (1985/15) job is 
 now running
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:16,966 (1985/15) job is 
 now running
 galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:17,279 (1985/15) job has 
 completed
 galaxy.jobs.runners DEBUG 2013-05-07 15:07:17,417 (1985/15) Unable to 
 cleanup /home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec: [Errno 2] No 
 such file or directory: 
 '/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec'
 galaxy.jobs DEBUG 2013-05-07 15:07:17,560 setting dataset state to ERROR
 galaxy.jobs DEBUG 2013-05-07 15:07:17,961 job 1985 ended
 galaxy.datatypes.metadata DEBUG 2013-05-07 15:07:17,961 Cleaning up 
 external metadata files
 Hi Branden,
 
 The ec file is optional, and the message that it's unable to be cleaned up 
 is a red herring in this case.  The state is being set to ERROR, i suspect 
 because the check of its outputs on line 894 of lib/galaxy/jobs/__init__.py 
 is failing:
 
  894 if ( self.check_tool_output( stdout, stderr, 
 tool_exit_code, job )):
 
 You might need to add some debugging to see where exactly this error 
 determination is coming from.
 
 --nate
 
 I've done a watch on the condor job directory, and as far as I can tell 
 galaxy_1985.ec never gets created.  From a cursory look at 
 lib/galaxy/jobs/runners/__init__.py and condor.py, it looks like the 
 cleanup is happening in the AsynchronousJobState::cleanup method, which 
 iterates on the cleanup_file_attributes list.  I naively tried to override 
 cleanup_file_attributes in CondorJobState to exclude 'exit_code_file', 
 to no avail.
 
 I'm hoping somebody can spot where the 

Re: [galaxy-dev] Step 3 of get Microbial Data install

2013-05-08 Thread Jennifer Jackson

Hi Sarah,

NCBI made some changes and the scripts to obtain all of this data in one 
go are considered deprecated (for now - a ticket to update the scripts 
is open here: https://trello.com/c/ehx5Gvr7). You may be able to work 
out how to adjust them yourself, if you are familiar with how the data is 
organized or simply have some programming background and wish to explore 
it and do some testing. The changes are not expected to be large; they are 
just, by necessity, a low priority for our team at this time.


Another alternative is to rsync what we have available. The  microbes 
directory would be the target (as listed in the wiki) - but be warned, 
this will be large. Also, be sure to add the genomes to the builds.txt 
file for them to work properly in your local UI, restart, etc.


Meanwhile, the recommended way to set up any microbial genomes not 
available through rsync or just in general would be the same as for all 
other genomes - following the instructions in this wiki and ones that 
link from it:

http://wiki.galaxyproject.org/Admin/Data%20Integration#Get_the_data
If you look at the genomes in the rsync area, you will see that we have 
some microbial data in the general top-level genome pool as well, so 
organizing the data this way is fine.


Thanks for your patience Sarah,

Jen
Galaxy team

On 4/19/13 7:16 AM, Sarah Maman wrote:

Hello,

I am trying to set up Get Microbial Data in my local instance of Galaxy.
So, as explained in 
/path/to/src/galaxy/galaxy-dist/scripts/microbes/README.txt , steps 1 
and 2 are OK and the binary 'faToNib' is available on our cluster, but step 3 
generates empty files:


drwxr-xr-x 4 galaxy wbioinfo 100 18 avril 14:26 ..
-rw-r--r-- 1 galaxy wbioinfo 203 18 avril 14:26 harvest.txt
-rw-r--r-- 1 galaxy wbioinfo  69 18 avril 14:34 ncbi_to_ucsc.txt
-rw-r--r-- 1 galaxy wbioinfo   0 19 avril 15:47 sequence.txt
-rw-r--r-- 1 galaxy wbioinfo   0 19 avril 15:47 seq.loc
drwxr-xr-x 2 galaxy wbioinfo 118 19 avril 15:47 .


Could you please help me?
Thanks in advance,
Sarah Maman



--
Jennifer Hillman-Jackson
Galaxy Support and Training
http://galaxyproject.org

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Difficulties using repeat tagset with min attribute

2013-05-08 Thread Peter Cock
On Thu, May 3, 2012 at 12:20 AM, Cory Spencer cspen...@sprocket.org wrote:

 Hi all -

 I've been trying to get the <repeat>...</repeat> tag working with a min 
 attribute for
 some time now, though without any success.  It works in other tools 
 distributed
 with Galaxy, but when I attempt to use it in one of our custom tools, it dies 
 with
 an AttributeError: 'ExpressionContext' object has no attribute 'keys' 
 exception.

 Can anybody offer any insight?

 The full traceback is:

 ⇝ AttributeError: 'ExpressionContext' object has no attribute 'keys'
 URL: http://localhost:8080/tool_runner?tool_id=scde-list-compare
 Module weberror.evalexception.middleware:364 in respond  view
   app_iter = self.application(environ, detect_start_response)
 Module paste.debug.prints:98 in __call__  view
   environ, self.app)
 Module paste.wsgilib:539 in intercept_output  view
   app_iter = application(environ, replacement_start_response)
 Module paste.recursive:80 in __call__  view
   return self.application(environ, start_response)
 Module paste.httpexceptions:632 in __call__  view
   return self.application(environ, start_response)
 Module galaxy.web.framework.base:160 in __call__  view
   body = method( trans, **kwargs )
 Module galaxy.web.controllers.tool_runner:68 in index  view
   template, vars = tool.handle_input( trans, params.__dict__ )
 Module galaxy.tools:1320 in handle_input  view
   state = self.new_state( trans )
 Module galaxy.tools:1248 in new_state  view
   self.fill_in_new_state( trans, inputs, state.inputs )
 Module galaxy.tools:1257 in fill_in_new_state  view
   state[ input.name ] = input.get_initial_value( trans, context )
 Module galaxy.tools.parameters.grouping:100 in get_initial_value  
 view
   rval_dict[ input.name ] = input.get_initial_value( trans, context )
 Module galaxy.tools.parameters.basic:1016 in get_initial_value  
 view
   return SelectToolParameter.get_initial_value( self, trans, context )
 Module galaxy.tools.parameters.basic:785 in get_initial_value  
 view
   if self.need_late_validation( trans, context ):
 Module galaxy.tools.parameters.basic:1022 in need_late_validation 
  view
   if super( ColumnListParameter, self ).need_late_validation( trans, 
 context ):
 Module galaxy.tools.parameters.basic:766 in need_late_validation  
 view
   for layer in context.itervalues():
 Module UserDict:116 in itervalues  view
   for _, v in self.iteritems():
 Module UserDict:109 in iteritems  view
   for k in self:
 Module UserDict:96 in __iter__  view
   for k in self.keys():
 AttributeError: 'ExpressionContext' object has no attribute 'keys'




Hi Cory,

Do you remember if you could solve this? I've used repeat a few times
even with a min value, but just hit the same issue as you:

⇝ AttributeError: 'ExpressionContext' object has no attribute 'keys'
URL: http://localhost/galaxy-dev/tool_runner?tool_id=seq_filter_by_id
Module weberror.evalexception.middleware:364 in respond view
  app_iter = self.application(environ, detect_start_response)
Module paste.recursive:84 in __call__ view
  return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__ view
  return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__ view
  return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request view
  body = method( trans, **kwargs )
Module galaxy.webapps.galaxy.controllers.tool_runner:82 in index view
  template, vars = tool.handle_input( trans, params.__dict__ )
Module galaxy.tools:1882 in handle_input view
  state = self.new_state( trans )
Module galaxy.tools:1810 in new_state view
  self.fill_in_new_state( trans, inputs, state.inputs )
Module galaxy.tools:1819 in fill_in_new_state view
  state[ input.name ] = input.get_initial_value( trans, context )
Module galaxy.tools.parameters.grouping:104 in get_initial_value view
  rval_dict[ input.name ] = input.get_initial_value( trans, context )
Module galaxy.tools.parameters.basic:1042 in get_initial_value view
  return SelectToolParameter.get_initial_value( self, trans, context )
Module galaxy.tools.parameters.basic:808 in get_initial_value view
  if self.need_late_validation( trans, context ):
Module galaxy.tools.parameters.basic:1048 in need_late_validation view
  if super( ColumnListParameter, self ).need_late_validation( trans, context 
 ):
Module galaxy.tools.parameters.basic:789 in need_late_validation view
  for layer in context.itervalues():
Module UserDict:116 in itervalues view
  for _, v in self.iteritems():
Module UserDict:109 

Re: [galaxy-dev] Difficulties using repeat tagset with min attribute

2013-05-08 Thread Peter Cock
On Wed, May 8, 2013 at 5:40 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
 Hi Cory,

 Do you remember if you could solve this? I've used repeat a few times
 even with a min value, but just hit the same issue as you:

 AttributeError: 'ExpressionContext' object has no attribute 'keys'
 ...
 Module galaxy.tools.parameters.grouping:104 in get_initial_value view
  rval_dict[ input.name ] = input.get_initial_value( trans, context )
 Module galaxy.tools.parameters.basic:1042 in get_initial_value view
  return SelectToolParameter.get_initial_value( self, trans, context )
 Module galaxy.tools.parameters.basic:808 in get_initial_value view
  if self.need_late_validation( trans, context ):
 Module galaxy.tools.parameters.basic:1048 in need_late_validation view
  if super( ColumnListParameter, self ).need_late_validation( trans, context 
 ):
 Module galaxy.tools.parameters.basic:789 in need_late_validation view
  for layer in context.itervalues():
 Module UserDict:116 in itervalues view
  for _, v in self.iteritems():
 Module UserDict:109 in iteritems view
  for k in self:
 Module UserDict:96 in __iter__ view
  for k in self.keys():
 AttributeError: 'ExpressionContext' object has no attribute 'keys'

 I see this when trying to access the tool via the normal Galaxy web interface,
 and when running the tool's unit tests. Removing the min=1 value 'fixes' 
 this,
 but I do want at least one entry.

 The tool in question is here:
 https://bitbucket.org/peterjc/galaxy-central/commits/806d9526d5e846933bb02c9d3efb8ccc398609f4

 On the off chance I was using a special value as the repeat name, I tried
 changing that - no difference.

Progress, this works (no min value):

  <repeat name="identifiers" title="Tabular file(s) with sequence identifiers">
    <param name="input_tabular" type="data" format="tabular"
           label="Tabular file containing sequence identifiers"/>
    <param name="columns" type="data_column" data_ref="input_tabular"
           multiple="True" numerical="False"
           label="Column(s) containing sequence identifiers"
           help="Multi-select list - hold the appropriate key
                 while clicking to select multiple columns">
      <validator type="no_options" message="Pick at least one column"/>
    </param>
  </repeat>

This also works - using a min value for the repeat, but removing the
data_column parameter:

  <repeat name="identifiers" title="Tabular file(s) with sequence identifiers" min="1">
    <param name="input_tabular" type="data" format="tabular"
           label="Tabular file containing sequence identifiers"/>
  </repeat>

However what I want to use fails:

  <repeat name="identifiers" title="Tabular file(s) with sequence identifiers" min="1">
    <param name="input_tabular" type="data" format="tabular"
           label="Tabular file containing sequence identifiers"/>
    <param name="columns" type="data_column" data_ref="input_tabular"
           multiple="True" numerical="False"
           label="Column(s) containing sequence identifiers"
           help="Multi-select list - hold the appropriate key
                 while clicking to select multiple columns">
      <validator type="no_options" message="Pick at least one column"/>
    </param>
  </repeat>

So something bad is happening with the initial population of the first repeat
value (triggered by using min=1) from the data_column parameter.
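
For anyone curious about the mechanics, the failure can be reproduced in isolation under Python 2. Judging purely from the traceback, ExpressionContext seems to get its dict-style iteration from UserDict's DictMixin (or something equivalent) without defining keys(), which is exactly the method __iter__ needs:

from UserDict import DictMixin              # Python 2 only

class ContextLike(DictMixin):
    # provides item access but, like ExpressionContext apparently, no keys()
    def __getitem__(self, key):
        return None

ctx = ContextLike()
for value in ctx.itervalues():              # itervalues -> iteritems -> __iter__ -> self.keys()
    pass                                     # AttributeError: 'ContextLike' object has no attribute 'keys'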

I would guess Cory's example also used a non-trivial parameter type.

Peter
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] routing to a cluster or not on a per-tool basis

2013-05-08 Thread Dan Tenenbaum
On Tue, May 7, 2013 at 10:16 AM, Dan Tenenbaum dtene...@fhcrc.org wrote:
 On Tue, Apr 30, 2013 at 2:04 PM, Dannon Baker dannon.ba...@gmail.com wrote:
 Hey Dan,

 Sure, you can configure per-tool job runners.  This wiki page
 (http://wiki.galaxyproject.org/Admin/Config/Jobs) should get you started,
 but let me know if you run into any trouble.


 Thanks!

 Following up on this.

 As I mentioned before, this Galaxy instance was already configured to
 use the cluster by default. I did not set up the cluster or the Galaxy
 instance. This particular instance did not use the job_conf.xml file.
 But I updated it and set up a job_conf.xml and I was able to set it up
 so that some tools would be run on the local machine.

 However, in trying to get it set up so some jobs would run on the
 cluster (which is what happened by default before I came along) I ran
 into problems. Previously, it looked like the cluster stuff was
 configured in universe_wsgi.ini, and I guessed that these were the
 relevant lines:

 start_job_runners = drmaa
 default_cluster_job_runner = drmaa://-t 12:00 -A noaccount/

 So I tried to move that setup to job_conf.xml:

 <?xml version="1.0"?>
 <!-- A sample job config that explicitly configures job running the
      way it is configured by default (if there is no explicit config). -->
 <job_conf>
     <plugins>
         <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
         <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
     </plugins>
     <handlers>
         <handler id="main"/>
     </handlers>
     <destinations default="local">
         <destination id="local" runner="local"/>
         <destination id="cluster" runner="drmaa">
             <param id="nativeSpecification">-t 12:00 -A noaccount</param>
         </destination>
     </destinations>
     <tools>
         <!-- Tools can be configured to use specific destinations or handlers,
              identified by either the id or tags attribute.  If assigned to
              a tag, a handler or destination that matches that tag will be
              chosen at random.
         -->
         <tool id="mytesttool" destination="cluster"/>
     </tools>
 </job_conf>

 When I run 'my test tool', however, it just hangs.

 Any tips?

To answer my own question...I think I have gotten this working, by
using the drmaa snippet from the job_conf.xml.sample_advanced file in
my galaxy distro:

<destination id="real_user_cluster" runner="drmaa">
    <!-- TODO: The real user options should maybe not be
         considered runner params. -->
    <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
    <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
    <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
</destination>

If I set up my test tool to run with this destination, it works.
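
In case it is useful to anyone else debugging tool-to-destination routing, here is a small sketch that just prints what job_conf.xml currently maps (file name and element layout as in the snippets above):

import xml.etree.ElementTree as ET

conf = ET.parse("job_conf.xml")
destinations = conf.find("destinations")
print("default destination: %s" % destinations.get("default"))
for dest in destinations.findall("destination"):
    print("  destination %s -> runner %s" % (dest.get("id"), dest.get("runner")))
for tool in conf.findall("tools/tool"):
    print("tool %s -> %s" % (tool.get("id"), tool.get("destination") or tool.get("handler")))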

Thanks,
Dan


 Thanks,
 Dan




 On Tue, Apr 30, 2013 at 4:47 PM, Dan Tenenbaum dtene...@fhcrc.org wrote:

 Hi,

 I have some tools that run really quickly without using any kind of
 cluster.
 I would prefer not to run these tools on a cluster, as the overhead of
 submitting these jobs makes them take much longer than they otherwise
 would.
 I have other tools that are computationally intensive and need to be
 run on a cluster.
 I would like to expose all these tools in the same Galaxy instance,
 but have some tools run on the cluster and others not.

 Is this possible?

 Thanks,
 Dan
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

 To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/



 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

 To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


[galaxy-dev] question on fastqc

2013-05-08 Thread Kathryn Sun
Hello,

I've set up Galaxy on a local Linux machine. After data configuration, I ran FastQC 
to check how it runs. Here is the error message I got --

error
An error occurred running this job: Traceback (most recent call last): File 
"../galaxy-dist/tools/rgenetics/rgFastQC.py", line 158, in <module> assert 
os.path.isfile(opts.executable), '##rgFastQC.py error - cannot find executable 
%s' % opts.executable AssertionError: ##r
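
That assertion is just checking that the path the wrapper was given for the FastQC binary actually exists, so a quick manual check looks like this (the path below is a placeholder for wherever FastQC is installed on your system):

import os

executable = "/usr/local/bin/fastqc"       # placeholder - whatever opts.executable resolves to
print(os.path.isfile(executable))           # False here is what triggers the AssertionError
print(os.access(executable, os.X_OK))       # also worth confirming it is executable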

Here is the detail of the error --
Tool: FastQC:Read QC
Name:    FastQC_FASTQ Groomer on data 6.html
Created:    May 02, 2013
Filesize:    0 bytes
Dbkey:    mm9
Format:    html
Tool Version:    
Tool Standard Output:    stdout
Tool Standard Error:    stderr
Tool Exit Code:    1

Input Parameter     Value
Short read data from your current history     8: FASTQ Groomer on data 6
Title for the output file - to remind you what the job was for     FastQC
Contaminant list     No dataset

What's the problem? Thank you!
Kathryn
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] Next GalaxyAdmins Meetup: May 15; Galaxy @ Pathogen Portal

2013-05-08 Thread Dave Clements
Hello all,

The next meeting (http://wiki.galaxyproject.org/Community/GalaxyAdmins/Meetups/2013_05_15)
of the GalaxyAdmins Group (http://wiki.galaxyproject.org/Community/GalaxyAdmins)
will be held on May 15, 2013, at 10 AM Central US time.

Andrew Warren of the Cyberinfrastructure Division
(http://www.vbi.vt.edu/faculty/group_overview/Cyberinfrastructure_Division) of
the Virginia Bioinformatics Institute (https://www.vbi.vt.edu/) at Virginia
Tech will talk about their Galaxy deployment (http://rnaseq.pathogenportal.org/)
at Pathogen Portal (http://pathogenportal.org/), a highly customized
Galaxy installation, and also about the group's objectives and future plans.

Dannon Baker (http://wiki.galaxyproject.org/DannonBaker) will bring the
group up to speed on what's happening in the Galaxy project.

Date

May 15, 2013

Time

10 am Central US Time (-5 GMT)

Presentations

Galaxy at Pathogen Portal (http://rnaseq.pathogenportal.org/, http://pathogenportal.org/)
Andrew Warren, Virginia Bioinformatics Institute (https://www.vbi.vt.edu/), Virginia Tech

Galaxy Project Update
Dannon Baker (http://wiki.galaxyproject.org/DannonBaker)

Links

Meetup link: https://globalcampus.uiowa.edu/join_meeting.html?meetingId=1262346908659
Add to calendar: https://globalcampus.uiowa.edu/build_calendar.event?meetingId=1262346908659

We use the Blackboard Collaborate Web Conferencing system
(http://wiki.galaxyproject.org/Community/GalaxyAdmins/Meetups/WebinarTech)
for the meetup. Downloading the required applets in advance and using a
headphone with microphone to prevent audio feedback during the call is
recommended.

GalaxyAdmins (http://wiki.galaxyproject.org/Community/GalaxyAdmins) is a
discussion group for Galaxy community members who are responsible for large
Galaxy installations.
Thanks,
Dave Clements

-- 
http://galaxyproject.org/GCC2013
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] migration error

2013-05-08 Thread Robert Baertsch
I upgraded to the latest galaxy-central and got an error when running migration 
script 115 which lengthens the password field from 40-255.

It failed saying that the table migration_tmp already exists.  I ran this 
without any existing database so I don't think it is anything on my end. Any 
pointers?

.schema migration_tmp
CREATE TABLE migration_tmp (
id INTEGER NOT NULL, 
create_time TIMESTAMP, 
update_time TIMESTAMP, 
tool_shed_repository_id INTEGER NOT NULL, 
name VARCHAR(255), 
version VARCHAR(40), 
type VARCHAR(40), 
uninstalled BOOLEAN, error_message TEXT, 
PRIMARY KEY (id), 
 FOREIGN KEY(tool_shed_repository_id) REFERENCES tool_shed_repository 
(id)
);


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] migration error

2013-05-08 Thread Dannon Baker
Hey Robert,

I assume this is sqlite?  And, when you say you ran this without any
existing database -- was this a completely new clone of galaxy, or did
you update a prior installation and delete database/universe.sqlite
manually before running?

-Dannon




On Wed, May 8, 2013 at 2:07 PM, Robert Baertsch
robert.baert...@gmail.comwrote:

 I upgraded to the latest galaxy-central and got an error when running
 migration script 115 which lengthens the password field from 40-255.

 It failed saying that the table migration_tmp already exists.  I ran this
 without any existing database so I don't think it is anything on my end.
 Any pointers?

 .schema migration_tmp
 CREATE TABLE migration_tmp (
 id INTEGER NOT NULL,
 create_time TIMESTAMP,
 update_time TIMESTAMP,
 tool_shed_repository_id INTEGER NOT NULL,
 name VARCHAR(255),
 version VARCHAR(40),
 type VARCHAR(40),
 uninstalled BOOLEAN, error_message TEXT,
 PRIMARY KEY (id),
  FOREIGN KEY(tool_shed_repository_id) REFERENCES
 tool_shed_repository (id)
 );


 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

 To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Error in installing Galaxy.

2013-05-08 Thread sridhar srinivasan
Hi Rudolf,

Thanks for the reply.

Previously I was using

postgres://user:pass@host/galaxy  in 'universe_wsgi.ini'

Now I tried with
postgres:///db_name?user=user_name&password=your_pass

and it is working.. but I couldn't connect to the webpage
http://127.0.0.1:8080/ from another system on the same network..
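
For what it's worth, a database_connection URL like the ones above can be sanity-checked outside Galaxy with the SQLAlchemy of that era (database name and credentials below are placeholders):

from sqlalchemy import create_engine

url = "postgres:///galaxy?user=galaxy&password=secret"    # placeholder values
engine = create_engine(url)
connection = engine.connect()
print(connection.execute("SELECT 1").scalar())             # prints 1 if the URL is usable
connection.close()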

Sridhar



On Wed, May 8, 2013 at 7:16 PM, Hans-Rudolf Hotz h...@fmi.ch wrote:

 Hi Sridhar

 Have you set up your PostgreSQL database correctly, and provided the right
 username and password in the 'universe_wsgi.ini' file?

 See: http://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer?action=show&redirect=Admin%2FConfig%2FPerformance#Switching_to_a_database_server

 Also, see this old thread, where the same error has been reported before:
 http://lists.bx.psu.edu/pipermail/galaxy-dev/2010-May/002624.html



 Regards, Hans-Rudolf




 On 05/08/2013 01:24 PM, sridhar srinivasan wrote:


 Hi,

 I am getting an error while installing Galaxy locally.

 Traceback (most recent call last):
   File "/illumina/apps/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 35, in app_factory
     app = UniverseApplication( global_conf = global_conf, **kwargs )
   File "/illumina/apps/galaxy/galaxy-dist/lib/galaxy/app.py", line 51, in __init__
     create_or_verify_database( db_url, kwargs.get( 'global_conf', {} ).get( '__file__', None ), self.config.database_engine_options, app=self )
   File "/illumina/apps/galaxy/galaxy-dist/lib/galaxy/model/migrate/check.py", line 50, in create_or_verify_database
     dataset_table = Table( "dataset", meta, autoload=True )
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py", line 108, in __call__
     return type.__call__(self, name, metadata, *args, **kwargs)
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py", line 236, in __init__
     _bind_or_error(metadata).reflecttable(self, include_columns=include_columns)
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py", line 1261, in reflecttable
     conn = self.contextual_connect()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py", line 1229, in contextual_connect
     return self.Connection(self, self.pool.connect(), close_with_result=close_with_result, **kwargs)
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 142, in connect
     return _ConnectionFairy(self).checkout()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 304, in __init__
     rec = self._connection_record = pool.get()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 161, in get
     return self.do_get()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 639, in do_get
     con = self.create_connection()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 122, in create_connection
     return _ConnectionRecord(self)
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 198, in __init__
     self.connection = self.__connect()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/pool.py", line 261, in __connect
     connection = self.__pool._creator()
   File "/illumina/apps/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/strategies.py", line 80, in connect
     raise exc.DBAPIError.instance(None, None, e)
 OperationalError: (OperationalError) FATAL:  Ident authentication failed for user "galaxy"
   None None

 I created a user galaxy to install galaxy locally.

 Thanks in Advance.

 Sridhar



 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
http://lists.bx.psu.edu/

 To search Galaxy mailing lists use the unified search at:

 http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, 

[galaxy-dev] Getting example_watch_folder.py to work...

2013-05-08 Thread Neil.Burdett
Hi,
   I'm still struggling to get the example_watch_folder.py to work. So any 
help much appreciated.

I've created a simple workflow, which essentially opens a text file and then 
writes out the data to a html file.

My xml file:
<tool id="CopyTool" name="Copy Tool">
  <description>Test</description>
  <command interpreter="perl">$__root_dir__/tools/copy/copy.pl
  --input_image $inputImage
  --output_html $output
  </command>

  <inputs>
    <param format="txt" name="inputImage" type="data" label="Input Image" />
  </inputs>
  <outputs>
    <data format="html" name="output" label="${tool.name} on #echo os.path.basename( str ( $inputImage.name ) )#" />
  </outputs>
  <help>
Copy Tool

  </help>
</tool>

And copy.pl:
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;
my $input_image ;
my $output_html ;

# Get options from command line (i.e. galaxy)
my $result = GetOptions ( "input_image=s" => \$input_image,
                          "output_html=s" => \$output_html )
  or die "Bad options";

print "input_image is: $input_image\n";

open FILE, $input_image or die $!;
my @lines = <FILE>;
close FILE;
my $numOfLines = scalar @lines;


# Create the output HTML, with links to the files and the square gif.
#
open HTML, ">", $output_html
  or die "Failed to create output HTML file '$output_html': $!";

print HTML <<EOF;
<html>
<head>
<style>
iframe {
    border: 0px;
    background: #eee;
}
</style>
</head>
<body>
<h1>Copy Files tool</h1>
<h2>Generated $numOfLines files</h2>
EOF

# Put direct links to each output file
foreach my $sub_filename ( @lines )
{
  print HTML "Direct link to the $sub_filename file.<br/>\n";
}

print HTML "<br/><br/><br/>\n";

print HTML "</body></html>\n";

close HTML;

So pretty basic.

I run:
./example_watch_folder.py cce1b01926646d548f6ddc32ff01aa2e 
http://140.253.78.234/galaxy/api/ /home/galaxy/data_input 
/home/galaxy/data_output My API Import f2db41e1fa331b3e

and get the following output:
{'outputs': ['a799d38679e985db'], 'history': '33b43b4e7093c91f'}

I can see the sample.txt file I placed in the /home/galaxy/data_input has been 
put into the database:
/home/galaxy/galaxy-dist/database/files/000/dataset_8.dat

and:
http://140.253.78.234/galaxy/api/libraries/f2db41e1fa331b3e/contents/1cd8e2f6b131e891
shows the file has been uploaded:

{
data_type: txt,
date_uploaded: 2013-05-09T04:41:10.579602,
file_name: /home/galaxy/galaxy-dist/database/files/000/dataset_8.dat,
file_size: 76,
genome_build: ?,
id: 1cd8e2f6b131e891,
ldda_id: 1cd8e2f6b131e891,
message: ,
metadata_data_lines: 8,
metadata_dbkey: ?,
misc_blurb: 8 lines,
misc_info: uploaded txt file,
model_class: LibraryDataset,
name: sam2.txt,
template_data: {},
uploaded_by: t...@test.com,
uuid: null
}

However, looking in the histories:
http://140.253.78.234/galaxy/api/histories/33b43b4e7093c91f/contents/a799d38679e985db

The output file size is zero as the input file cannot be found, as shown in the 
misc_info field below ...

{
accessible: true,
api_type: file,
data_type: html,
deleted: false,
display_apps: [],
display_types: [],
download_url: 
/galaxy/api/histories/33b43b4e7093c91f/contents/a799d38679e985db/display,
file_ext: html,
file_name: /home/galaxy/galaxy-dist/database/files/000/dataset_9.dat,
file_size: 0,
genome_build: ?,
hid: 1,
history_id: 33b43b4e7093c91f,
id: a799d38679e985db,
metadata_data_lines: null,
metadata_dbkey: ?,
misc_blurb: error,
misc_info: Thu May  9 14:42:10 2013input_image is: None\nNo such file or 
directory at /home/galaxy/galaxy-dist/tools/copy/copy.pl line 27.\n,
model_class: HistoryDatasetAssociation,
name: Copy Tool on None,
peek: null,
purged: false,
state: error,
uuid: null,
visible: true,
visualizations: []
}

Does anyone have any idea why the input file is not passed in to / obtained by 
the perl script?
The script obviously copies it into the database, so that part of the process is 
working.
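
One thing worth checking, using only the API endpoints already shown in this thread (whether the workflow details include an 'inputs' field depends on the Galaxy version, so treat this as a sketch): ask the API which input steps the workflow exposes. If the workflow has no 'Input dataset' step, the watch-folder script has nothing to map the uploaded file onto and the tool's parameter would arrive as None, which would match the error above.

import json
import urllib2

api_key = "cce1b01926646d548f6ddc32ff01aa2e"        # the key used on the command line above
base_url = "http://140.253.78.234/galaxy/api"
workflow_id = "f2db41e1fa331b3e"

url = "%s/workflows/%s?key=%s" % (base_url, workflow_id, api_key)
details = json.load(urllib2.urlopen(url))
print(json.dumps(details.get("inputs", {}), indent=2))
# An empty result here would mean there is no input step for the dataset to
# be wired into, which would explain "input_image is: None" in the job output.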

the file sam2.txt looks like this

Thanks again, sorry for swamping the list with this issue

Neil

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/