Re: [galaxy-dev] Run a tool's workflow one by one ?

2014-06-25 Thread Pat-74100
Hi John

Thanks for your reply.
I've copied your job_conf.xml.

Unfortunately, I've got an error when I ran run.sh:

galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Loading job configuration from 
./job_conf.xml
galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Read definition for handler 'main'
galaxy.jobs INFO 2014-06-25 09:06:17,610 Setting handlers default to child 
with id 'main'
Traceback (most recent call last):
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", 
line 39, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/app.py", line 64, in 
__init__
    self.job_config = jobs.JobConfiguration(self)
  File "my_repertoryr/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", 
line 107, in __init__
    self.__parse_job_conf_xml(tree)
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", 
line 177, in __parse_job_conf_xml
    self.default_destination_id = self.__get_default(destinations, 
self.destinations.keys())
  File "my_repertoryr/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", 
line 300, in __get_default
    raise Exception("No %s default specified, please specify a valid id or 
tag with the 'default' attribute" % parent.tag)
Exception: No destinations default specified, please specify a valid id or 
tag with the 'default' attribute


I don't understand where the problem is.

Pat

 Date: Mon, 23 Jun 2014 22:19:27 -0500
 Subject: Re: [galaxy-dev] Run a tool's workflow one by one ?
 From: jmchil...@gmail.com
 To: leonardsqual...@hotmail.com
 CC: galaxy-dev@lists.bx.psu.edu
 
 It looks like you were going to post an error message but didn't.
 That might help debug the problem.
 
 There is no way currently at the workflow level to force one job to
 wait for another before completion (other than assigning an explicit
 input/output relationship between the steps). There is a Trello card
 for this here https://trello.com/c/h5qZlgU8.
 
 I am not sure that Trello card is really the best approach for this
 problem though. If it really is the case that these jobs can run
 simultaneously and they are not implicitly dependent on each other in
 some way not represented in the workflow - then it is likely they are
 running on a machine that just doesn't have enough resources (likely
 memory) to run these properly. The correct solution for this, I think,
 is to properly configure a job_conf.xml file so that Galaxy tools
 cannot over-consume memory.
 
 By default Galaxy will run 4 jobs simultaneously - any job of any type
 - regardless of memory consumption, threads used, etc. This gist
 (https://gist.github.com/jmchilton/ff186b01d51d401623be) contains a
 job_conf.xml that you can stick in your Galaxy root directory to
 ensure a handful of tools (I used ids hilbert, fft, slm as example ids
 but you should replace these values with the actual ids of your tools)
 can only run one job at a time. All other jobs will continue to run
 concurrently, two at a time, beside these.
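A sketch of the kind of job_conf.xml that gist describes (illustrative, not the gist verbatim; the tool ids are placeholders to replace with your own, and the worker/limit counts are examples):

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins workers="2">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
    </plugins>
    <handlers default="main">
        <handler id="main"/>
    </handlers>
    <destinations default="local">
        <destination id="local" runner="local"/>
        <!-- separate destination for the memory-hungry tools -->
        <destination id="serial" runner="local"/>
    </destinations>
    <limits>
        <!-- allow only one concurrent job on the "serial" destination -->
        <limit type="destination_total_concurrent_jobs" id="serial">1</limit>
    </limits>
    <tools>
        <!-- replace these ids with your actual tool ids -->
        <tool id="hilbert" destination="serial"/>
        <tool id="fft" destination="serial"/>
        <tool id="slm" destination="serial"/>
    </tools>
</job_conf>
```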
 
 If you are using a distributed resource manager (like Sun Grid Engine,
 SLURM, Condor, etc...) then the solution is a little different. You
 should assign these tools to a job destination that consumes a whole node
 - you would need to provide more information about the cluster hardware and
 software configuration for me to provide an example of this.
 
 Beyond that, the common advice about scaling up Galaxy holds - you
 should configure Postgres instead of SQLite, set up a proxy (nginx or
 Apache), disable debug in universe_wsgi.ini, etc. See
 https://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer
 for details. All of these things can help in situations like this.
 
 -John
 
 
 
 
 On Mon, Jun 23, 2014 at 11:22 AM, Pat-74100 leonardsqual...@hotmail.com 
 wrote:
  Hi !
 
  I have a big workflow and sometimes when I launch it, I get an error
  message for some of my tools: unable to finish job.
  I think it's maybe because Galaxy ran multiple jobs, so I get this error message.
 
  I'm looking to run my workflow step by step.
 
  For example this workflow:
 
  http://hcsvlab.org.au/wp-content/uploads/2014/02/PsySoundTest1.png
 
  I'm looking to run Hilbert THEN FFT THEN SLM and no Hilbert, FFT and SLM at
  the same time.
 
  Is it possible to make a workflow which waits for a job to finish before
  running another job?
 
  Thanks
 
  ___
  Please keep all replies on the list by using "reply all"
  in your mail client.  To manage your subscriptions to this
  and other Galaxy lists, please use the interface at:
    http://lists.bx.psu.edu/

  To search Galaxy mailing lists use the unified search at:
    http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] How does Galaxy differentiate users who have not logged on?

2014-06-25 Thread Jan Kanis
Probably the same way by which it keeps track of you when you are logged
in: setting a cookie in the browser. I didn't verify this, but that is how
basically all web services do it. If you clear the galaxy cookies I expect
you will end up in a new empty workspace.
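The mechanism Jan describes can be sketched in a few lines. This is illustrative only - it is not Galaxy's actual session code (Galaxy stores a galaxysession cookie backed by its database); it just shows how a random cookie value can identify an anonymous user across requests:

```python
import secrets

# Map of session id -> anonymous workspace (in Galaxy this lives in the DB).
workspaces = {}

def handle_request(cookies):
    """Return (session_id, workspace) for a request carrying `cookies`."""
    sid = cookies.get("galaxysession")
    if sid is None or sid not in workspaces:
        # No cookie (or an unknown one): mint a fresh anonymous session.
        sid = secrets.token_hex(16)
        workspaces[sid] = {"history": []}
    return sid, workspaces[sid]

sid1, ws1 = handle_request({})                       # first visit: new workspace
sid2, ws2 = handle_request({"galaxysession": sid1})  # cookie sent back: same workspace
assert sid2 == sid1 and ws2 is ws1
sid3, ws3 = handle_request({})                       # cleared cookies: fresh workspace
assert sid3 != sid1 and ws3 is not ws1
```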


On 25 June 2014 01:18, Melissa Cline cl...@soe.ucsc.edu wrote:

 Hi folks,

 Quick question.  When two different users use the same Galaxy instance
 without logging in, how does Galaxy keep their identities straight, such
 that when they come back to their computers the next day, they see only
 their own workspace and not the other user's workspace?

 Thanks!

 Melissa



Re: [galaxy-dev] How does Galaxy differentiate users who have not logged on?

2014-06-25 Thread Dannon Baker
Jan is correct, this is what Galaxy does.

-Dannon



[galaxy-dev] Enable FTP upload in Local Galaxy instance

2014-06-25 Thread Pat-74100
Dear Galaxy developers 

I'm trying to allow users to upload files via FTP. 

I've been to the tutorial website but as a beginner, I understand nothing...

https://wiki.galaxyproject.org/Admin/Config/UploadviaFTP?action=show&redirect=Admin%2FConfig%2FUpload+via+FTP

I've configured the universe_wsgi.ini file with

ftp_upload_dir = galaxy_dist/database/files/
ftp_upload_site = any address.

I use FileZilla for the FTP connection but it doesn't work, and it asks me for a 
port ...
Can someone provide me a simple tutorial to enable FTP upload?

Thanks

Pat
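For what it's worth, the FTP setup has two pieces: the universe_wsgi.ini settings, plus a separate FTP server (the wiki tutorial uses ProFTPD) that authenticates users against Galaxy's database and drops files into per-user directories - Galaxy itself does not speak FTP, which is why FileZilla has nothing to connect to. A sketch of the ini side (the path and hostname below are illustrative):

```ini
# universe_wsgi.ini - illustrative values
# Directory the external FTP server writes uploads into; Galaxy looks for
# a subdirectory named after each user's email address.
ftp_upload_dir = /home/galaxy/galaxy-dist/database/ftp
# Hostname users should point their FTP client at (plain FTP uses port 21)
ftp_upload_site = ftp.example.org
```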

Re: [galaxy-dev] setup_venv doesn't honour set_environment_for_install / installing lxml

2014-06-25 Thread Björn Grüning

Hi Janis,

that is not working and it is currently not clear if we will change it.
Please see the following Trello card:

https://trello.com/c/NsLJv9la/61-clean-up-tool-shed-setup-actions

One of my projects during the upcoming GCC hackathon is to implement a 
setup_python_environment, like the R, Perl or Ruby ones. With that in 
place your use case will be easy to implement. If you get your Python 
package up and running by then, that would be great!


Cheers,
Bjoern

Am 24.06.2014 18:57, schrieb Jan Kanis:

Ok, here is the tool_dependencies.xml. This
https://gist.github.com/JanKanis/650c88001c03ac4320fe#file-not_working_tool_dependencies
(also attached, if that survives the list) is what I would like my
tool_dependencies.xml to look like, but it doesn't work because the
installation of lxml fails, as it can't find the libxml2 headers. I am now
using this
https://gist.github.com/JanKanis/42b9cace27b9693a0677#file-workaround_tool_dependencies-xml
as a workaround. It works because lxml is installed from a shell_command
which does include the variables set from the set_environment_for_install
block. This is from the blast2html tool.

Jan


On 24 June 2014 15:30, Dave Bouvier d...@bx.psu.edu wrote:


Jan,

In order to help track down this issue, could you provide the
tool_dependencies.xml you're using?

   --Dave B.


On Tue 24 Jun 2014 05:40:30 AM EDT, Jan Kanis wrote:


In a tool_dependencies.xml file I want to install the Python package lxml in
a virtual environment, as a tool I'm building needs it. The Python
lxml package requires the libxml2 tool dependency. I have added a
set_environment_for_install action that refers to the libxml2
repository, but when python/pip tries to install lxml it fails,
apparently because it can't find the required headers. This appears to
be because the setup_virtualenv action does not include the install
environment variables.

It seems to me that install environment variables should be sourced
for every following action that can do nontrivial things, not just
shell commands.

Alternatively, am I trying to install lxml the wrong way, is there a
better way? (I'm running on python 2.6)

Jan




[galaxy-dev] Unable to finish job - metadata settings

2014-06-25 Thread Preussner, Jens
Dear all,

I noticed a strange behaviour in our local Galaxy installation. First of all, 
my universe_wsgi.ini contains "retry_metadata_internally = False" and 
"cleanup_job = always". The tool simply writes its output into the 
job_working_directory and we move it via "mv static_filename.txt $output" in 
the command tag. This works fine.

When restarting the Galaxy server and executing the tool after a fresh restart, 
everything is OK; there are no errors.
When executing the same tool a second time, Galaxy raises a "tool error" 
stating that it was unable to finish the job. Nevertheless, the output files 
are all correct (but marked as red, or failed).

The error report states:
Traceback (most recent call last):
  File /home/galaxy/galaxy-dist/lib/galaxy/jobs/runners/local.py, line 129, 
in queue_job
job_wrapper.finish( stdout, stderr, exit_code )
  File /home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py, line 997, in 
finish
if ( not self.external_output_metadata.external_metadata_set_successfully( 
dataset, self.sa_session ) and self.app.config.retry_metadata_internally ):
  File /home/galaxy/galaxy-dist/lib/galaxy/datatypes/metadata.py, line 731, 
in external_metadata_set_successfully
rval, rstring = json.load( open( metadata_files.filename_results_code ) )
IOError: [Errno 2] No such file or directory: 
u'/home/galaxy/galaxy-dist/database/job_working_directory/000/59/metadata_results_HistoryDatasetAssociation_281_oHFjx0'

And in the logfile you can find multiple entries like that:
galaxy.datatypes.metadata DEBUG 2014-06-25 14:29:35,466 Failed to cleanup 
external metadata file (filename_results_code) for 
HistoryDatasetAssociation_281: [Errno 2] No such file or directory: 
'/home/galaxy/galaxy-dist/database/job_working_directory/000/59/metadata_results_HistoryDatasetAssociation_281_oHFjx0'

The if-statement in /home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py, 
line 997 should evaluate to False, since 
self.app.config.retry_metadata_internally is set to False in 
universe_wsgi.ini, but it seems it doesn't in this case?
Has anyone experienced such a behavior? Any suggestions on how to go on and 
solve the issue?
Many thanks!

Jens


Re: [galaxy-dev] Unable to finish job - metadata settings

2014-06-25 Thread John Chilton
Hey Jens,

  I have tried a few different things and I have been unable to
replicate the behavior locally.

  Is this tool specific or configuration specific - i.e. do you see
this behavior only with a specific tool, or does the concatenate datasets
tool experience this as well, say?

  If it is tool specific - you (or the underlying application) may be
deleting metadata_results_XXX files in the working directory as part
of the job? If you are convinced the tool is not deleting these files
but it is a tool-specific problem, can you pass along the tool you are
using (or better, a minimal version of the tool that produces this
behavior)?

  If it is configuration specific and you can get it to happen with
many different tools - can you try to do it against a clean copy of
galaxy-dist or galaxy-central and pass along the exact updates
(universe_wsgi.ini, job_conf.xml, etc...) that you used to configure
Galaxy to cause it to produce this error.

  A couple more things - it hasn't gone into the if statement in
jobs/__init__.py - it is still evaluating that first check,
external_metadata_set_successfully, when the error occurs - so I don't
think it is a problem with the retry_metadata_internally configuration
option.

  Additionally, you are using the local job runner, so Galaxy will
always retry metadata internally - the local job runner doesn't try
to embed the metadata calculation into the job like the cluster job
runners do (so retry_metadata_internally doesn't really matter with the
local job runner... right now anyway). If you want the metadata
calculation to be embedded into the local job runner job the way
cluster job runners do it (so that retry_metadata_internally does in fact
matter), there was an option added to the local job runner last release
that I realized I hadn't documented - you can add <param
id="embed_metadata_in_job">True</param> to the local job destination
in job_conf.xml.

  More information about this new option here:
https://bitbucket.org/galaxy/galaxy-central/commits/75c63b579ccdd63e0558dd9aefce7786677dbacd
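For reference, a minimal sketch of where that param goes in job_conf.xml (the destination id here is illustrative):

```xml
<destinations default="local">
    <destination id="local" runner="local">
        <!-- collect metadata inside the job, as cluster runners do -->
        <param id="embed_metadata_in_job">True</param>
    </destination>
</destinations>
```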

-John


Re: [galaxy-dev] Run a tool's workflow one by one ?

2014-06-25 Thread Pat-74100
Thanks John, now it works!

You look like a Galaxy professional. Maybe you can help me again? I've made 
another topic about an FTP setting to upload large files. I've been to the Galaxy 
wiki tutorials but I don't understand a lot ...

Pat

 Date: Wed, 25 Jun 2014 07:49:36 -0500
 Subject: Re: [galaxy-dev] Run a tool's workflow one by one ?
 From: jmchil...@gmail.com
 To: leonardsqual...@hotmail.com
 CC: galaxy-dev@lists.bx.psu.edu
 
 There was a problem with the config I sent you - it defines two
 destinations for jobs but doesn't specify a default. I have updated
 the gist (and actually tried loading it in Galaxy this time):
 https://gist.github.com/jmchilton/ff186b01d51d401623be. Hope this
 helps you make progress on this issue.
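For anyone hitting the same exception: the fix is the default attribute on the destinations element, which must name one of the defined destinations - e.g. (a sketch; destination ids are illustrative):

```xml
<destinations default="local">
    <destination id="local" runner="local"/>
    <destination id="serial" runner="local"/>
</destinations>
```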
 
 -John
 

Re: [galaxy-dev] How does Galaxy differentiate users who have not logged on?

2014-06-25 Thread Daniel Blankenberg
Slightly off topic, but I would like to caution that if you want to ensure your 
workspace is still available across time periods and browser sessions on the 
public site, then you should most definitely use a registered account, as we 
will not be able to recover lost anonymous histories for anyone.


Thanks for using Galaxy,

Dan


[galaxy-dev] Updated Freebayes Wrapper - Bugs

2014-06-25 Thread Lance Parsons

Thanks for the major update of the Freebayes wrapper, excellent!

I've run into two issues, however.

1) When using "set allelic scope" I get the following error:

Fatal error: Exit code 1 ()
freebayes: unrecognized option `--min-repeat-length'
did you mean --min-repeat-size ?


2) When using a vcf file as input I get the following error:

Fatal error: Exit code 1 ()
open: No such file or directory
[bgzf_check_bgzf] failed to open the file: input_variant_vcf.vcf.gz
[tabix++] was bgzip used to compress this file? input_variant_vcf.vcf.gz


I'd be happy to help resolve these.  Is there a bitbucket repo somewhere 
to submit pull requests?


--
Lance Parsons - Scientific Programmer
134 Carl C. Icahn Laboratory
Lewis-Sigler Institute for Integrative Genomics
Princeton University



[galaxy-dev] Upload files to a data library - Invalid Paths

2014-06-25 Thread Dooley, Damion
Hi folks,
In Upload files to a data library form, seems like whatever valid path I 
enter for linking to data on the server, even with galaxy as user and r 
permissions on the folder and files - I get the invalid paths message after 
submitting?  The basic form info:

   ***
   Invalid paths:
   /projects/gia/gia_analysis/blast/homologs_index_Spade_genomes/xml_reports/

   Upload option: Upload files from filesystem paths
   File Format: blastxml

   Paths to upload
   /projects/gia/gia_analysis/blast/homologs_index_Spade_genomes/xml_reports/

   Preserve directory structure?
   [yes or no]

   Copy data into Galaxy?
   Link to files without copying into Galaxy
   ***

Does anyone know a trick for this?  It would be nice to have a more helpful error 
explanation here!

My Galaxy: 5e605ed6069f+ (stable), release_2014.02.10.  I think it is a pretty 
normal installation.

Thanks for any tips,

Regards,

Damion

Hsiao lab, BC Public Health Microbiology & Reference Laboratory, BC Centre for 
Disease Control
655 West 12th Avenue, Vancouver, British Columbia, V5Z 4R4 Canada


Re: [galaxy-dev] Upload files to a data library - Invalid Paths

2014-06-25 Thread Dooley, Damion
p.s. I do have allow_library_path_paste = True set in universe_wsgi.ini

Damion


Re: [galaxy-dev] Upload files to a data library - Invalid Paths - Solved

2014-06-25 Thread Dooley, Damion
Solved.  It was actually a file permissions issue on an ancestral folder.  I 
will add a Trello card to suggest slightly better error reporting on this, 
though.
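For others hitting this: a quick way to spot a missing execute bit on an ancestral directory is namei (from util-linux), which prints the mode of every component of a path. The path below is a stand-in for the real library directory:

```shell
# Every directory on the way down needs at least execute (x) permission
# for the user Galaxy runs as; namei -m shows each component's mode.
mkdir -p /tmp/demo_library/xml_reports
namei -m /tmp/demo_library/xml_reports
```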

Regards,

Damion

Hsiao lab, BC Public Health Microbiology & Reference Laboratory, BC Centre for 
Disease Control
655 West 12th Avenue, Vancouver, British Columbia, V5Z 4R4 Canada



[galaxy-dev] job_working_directory

2014-06-25 Thread Shrum, Donald C
I'd like to set the job_working_directory to the user's home directory.

I'm using an apache proxy with ldap authentication.  I'm assuming I need to 
strip the remote_user_maildomain off the user name to do something like...

job_working_directory = ~%user/galaxy_working_directory

Is this possible?




Re: [galaxy-dev] Proposed cd-hit tool dependency change in galaxy

2014-06-25 Thread Björn Grüning

Hi Qi,

thank you very much for your contribution.
I will try to get your patch to the correct person during the upcoming 
Galaxy Conference.


Cheers,
Bjoern


Am 13.06.2014 17:29, schrieb Qi, Chuyang:

Hi all,

When installing and testing the cd-hit tool from jjohnson from the main galaxy 
toolshed into my local instance of galaxy, I encountered a problem where galaxy 
cannot find the cd-hit program.

The error was:
/bin/sh: 1: cd-hit-est: not found

As I debugged the problem, I found that there is a bug in the tool dependency. 
It installs the cd-hit programs but never moves them into the 
configurable Galaxy install directory. This could cause many problems in 
situations where users don't have cd-hit already installed on their computer and 
must depend on the tool dependency to run the tool.

I propose we add the following code to the tool dependency:

  <action type="move_file">
      <source>cd-hit</source>
      <destination>$INSTALL_DIR</destination>
  </action>

  <action type="move_file">
      <source>cd-hit-est</source>
      <destination>$INSTALL_DIR</destination>
  </action>

After I made these changes and pushed them into a new repo on my local 
toolshed, I managed to successfully install cd-hit into a local and dev-test 
instance of Galaxy, and the cd-hit tool ran successfully.

Thanks for your understanding,
Charles Qi







Re: [galaxy-dev] job_working_directory

2014-06-25 Thread John Chilton
There is certainly no flag in Galaxy to allow this.

Even if you tried to hack it in - there would be some problems. Galaxy
writes out some files to that working directory before running the job
- so there would need to be more chown-ing and stuff happening at the
beginning of the job - probably before it even got to the runner
component. If Galaxy could assume these directories already existed
and didn't need to run the jobs as the real user - this would be a lot
easier (not easy, just easier) to hack into Galaxy - but I assume you
want to run these jobs as the real user.

I've created a Trello card for this request: https://trello.com/c/n1UitJTb.

Sorry I don't have better news.

-John
