[galaxy-dev] Error executing tool: 'hg19'
Hi, After a quick try at visualising a track in Trackster (importing one chromosome of hg19, which did not succeed, BTW), none of the tools in my local Galaxy appear to work. They all give this error message: Error executing tool: 'hg19' This bug has been reported before, but I was wondering if somebody could suggest a fix for this? Thanks! Joachim. ___ Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
Re: [galaxy-dev] LDAP authentication
Hello, I managed to connect Galaxy to LDAP ;-) Three points were blocking me:
* Being root on my virtual machine, so I could carry out tests
* I confused the login/password of two LDAP servers, so I thought my authentication method was wrong while I was simply using the wrong password ...
* It is better not to go through a proxy

1 - Set Galaxy's configuration file, universe_wsgi.ini, to delegate user authentication to an upstream Apache proxy:

# Users and Security
use_remote_user = True
remote_user_maildomain = toulouse.inra.fr

2 - Create an htaccess-style file named galaxy.conf in /etc/httpd/conf.d/. For performance and security reasons it is advisable not to use a .htaccess file but a galaxy.conf file in the main (Apache) server configuration, because the latter is loaded only once when the server starts, whereas a .htaccess file is read on every access.

RewriteEngine on
<Location /galaxy>
    # Define the authentication method
    AuthType Basic
    AuthName Galaxy
    AuthBasicProvider ldap
    AuthLDAPURL ldap://<server URL>:389/...
    AuthzLDAPAuthoritative off
    Require valid-user
    RequestHeader set REMOTE_USER %{AUTHENTICATE_uid}e
</Location>
RewriteRule ^/$ /galaxy/ [R]
RewriteRule ^/galaxy/static/style/(.*) /var/www/html/galaxy/static/june_2007_style/blue/$1 [L]
RewriteRule ^/galaxy/static/scripts/(.*) /var/www/html/galaxy/static/scripts/packed/$1 [L]
RewriteRule ^/galaxy/static/(.*) /var/www/html/galaxy/static/$1 [L]
RewriteRule ^/galaxy/favicon.ico /var/www/html/galaxy/static/favicon.ico [L]
RewriteRule ^/galaxy/robots.txt /var/www/html/galaxy/static/robots.txt [L]
RewriteRule ^/galaxy(.*) http://ip:port$1 [P]

As Galaxy is not installed in the root directory but in a galaxy directory (/var/www/html/galaxy/), the following changes are needed:
1 - Add a RewriteRule
2 - Do not go through a proxy
3 - The REMOTE_USER variable is AUTHENTICATE_uid (AUTHENTICATE_sAMAccountName for a Windows AD)
4 - To generate dynamic URLs, configure the prefix in universe_wsgi.ini:

[filter:proxy-prefix]
use = egg:PasteDeploy#prefix
prefix = /galaxy

[app:main]
filter-with = proxy-prefix
cookie_path = /galaxy

If you are not root on the virtual machine, create a symlink from /etc/httpd/conf.d/ to galaxy.conf.

3 - Some useful checks. Verify the Apache version and the Apache modules, because each directive must have an associated module (plus mod_ldap):
AuthType → mod_auth_basic.so
AuthBasicProvider → mod_authnz_ldap and mod_authz_ldap
Rewrite (for the proxy) → mod_rewrite.so
RequestHeader → mod_headers

Check that Galaxy can query the LDAP server with this command:
ldapsearch -x -h <LDAP URL> -p <port> -b <base dn>

When you make a modification in galaxy.conf, restart Apache (or do a graceful restart). In httpd.conf, this is the section that authorizes access management by file:
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
AccessFileName .htaccess

Check: chmod 777 galaxy.conf

4 - Finally, restart run.sh (sh run.sh). Thanks A LOT for your help, Sarah
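The ldapsearch connectivity check above can be scripted. A minimal sketch: the helper name, and the host/port/base DN values, are placeholders of mine, not values from this thread.

```python
# Build the ldapsearch command line used to verify LDAP connectivity.
# Host, port, and base DN below are placeholder examples, not real values.
def ldapsearch_command(host, port, base_dn):
    """Return the argv list for a simple anonymous ldapsearch probe."""
    return ["ldapsearch", "-x", "-h", host, "-p", str(port), "-b", base_dn]

cmd = ldapsearch_command("ldap.example.org", 389, "dc=example,dc=org")
print(" ".join(cmd))
```

Running the printed command by hand (outside Galaxy) confirms the LDAP server is reachable before touching the Apache configuration.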
Re: [galaxy-dev] Galaxy server configuration question
Hi Nate. I am OK now. The code I copied from the wiki was using galaxy_dist, while my folder name is galaxy-dist. After I changed the path in the code, the problem was solved. Best, Huayan On 13 Feb, 2012, at 11:40 AM, Huayan Gao wrote: Hi Nate, I removed the proxy section in the httpd file and got the following screenshot. It seems to be working, but not in the way we expected. I will keep looking for the solution, but do you know how to fix it? It seems to say the file .../static/welcome.html is missing, or something like that. Thanks, Huayan Screen Shot 2012-02-13 at 11.37.57 AM.png On 10 Feb, 2012, at 10:28 AM, Huayan Gao wrote: Hi Nate, Yes, I did follow the instructions. But I came to a question about the httpd.conf file. I put galaxy-dist under my document root, which is /var/www/html/. When my server is up, I can access my UCSC Genome Browser mirror site through my IP address, for example, http://61.244.xxx.xxx. Then how should I set things up in the httpd.conf file so I can access Galaxy using my IP address, for example, http://61.244.xxx.xxx/galaxy? Thanks, Best, Huayan On 10 Feb, 2012, at 1:17 AM, Nate Coraor wrote: On Feb 8, 2012, at 1:00 AM, Huayan Gao wrote: Dear Sir or Madam, I am installing a Galaxy server on CentOS with a UCSC Genome Browser mirror site. The mirror site works well. I installed Galaxy on the same server. Now my question is: how do I set up the httpd.conf file so I can access both websites (UCSC Genome Browser and Galaxy) remotely? Hi Huayan, Have you consulted the production server documentation? http://usegalaxy.org/production --nate Best, Huayan
Re: [galaxy-dev] generate dynamic select list based on other input dataset
Holger, Have you looked at how dynamic options work and whether they would be sufficient for your use? See the filter tag syntax for details: http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax#A.3Cfilter.3E_tag_set To specifically address your problem: can you determine the particular place where the syntax error is appearing? And can you provide a link to the thread that you're using as a starting point? Thanks, J. On Feb 10, 2012, at 3:43 PM, Holger Klein wrote: Dear all, I'm still stuck with the problem of dynamically generating an option list extracted from a user-selectable input dataset. Does anybody have experience here, or is this not possible at all? Have a nice weekend, Holger On 02/07/2012 09:58 PM, Holger Klein wrote: Dear all, I have a working module which generates wig files of genomic annotation from a single column of a bigger input data matrix (Input A). In its current state, the user has to input the column name (Input B) from which to calculate the values in the wig file. Now I'd like to modify the XML in such a way that, depending on the input dataset (Input A), a dynamic list for Input B is generated. I found Hans-Rudolf Hotz's hints from some time ago on this list and thought that the following would be a good start:

<param name="InputB" label="InputBName" format="data" type="select" help="Use tickboxes to select model" display="radio" dynamic_options="getInputBOptions($InputA)"/>
<code file="getInputBOptionsFromInputA.py"/>

getInputBOptionsFromInputA.py contains a single function:

def getInputBOptions($InputA):
    ## parse Input A
    ## create list InputBOptions
    return(InputBOptions)

Using this approach I get an invalid syntax message when trying to even open the module - in any case I have the feeling that something is still missing here. Did anybody solve a similar problem already and could give me a hint on how to solve it? Cheers, Holger -- Dr. Holger Klein Core Facility Bioinformatics Institute of Molecular Biology gGmbH (IMB) http://www.imb-mainz.de/ Tel: +49 (6131) 39 21511
Re: [galaxy-dev] Error executing tool: 'hg19'
Hello Joachim, After a quick try with visualising track in Trackster (importing one chromosome of hg19 - which did not succeed BTW), none of the tools in my local galaxy appear to work. What are the steps you're taking to produce this issue? They all send this error message: Error executing tool: 'hg19' Are you seeing this error in failed datasets? If not, where are you seeing this error? This bug has been reported before, but I was wondering if somebody suggest a fix for this? Can you provide a link to the thread/issue where this has been reported? Thanks, J.
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Feb 8, 2012, at 9:32 PM, Fields, Christopher J wrote: 'samtools sort' seems to be running on our server end as well (not on the cluster). I may look into it a bit more myself. Snapshot of top off our server (you can see our local runner as well):

 PID  USER    PR  NI  VIRT   RES   SHR  S %CPU %MEM   TIME+    COMMAND
3950  galaxy  20   0  1303m  1.2g   676 R 99.7 15.2  234:48.07 samtools sort /home/a-m/galaxy/dist-database/file/000/dataset_587.dat /home/a-m/galaxy/dist-database/tmp/tmp9tv6zc/sorted
5417  galaxy  20   0  1186m  104m  5384 S  0.3  1.3    0:15.08 python ./scripts/paster.py serve universe_wsgi.runner.ini --server-name=runner0 --pid-file=runner0.pid --log-file=runner0.log --daemon

Hi Chris, 'samtools sort' is run by groom_dataset_contents, which should only be called from within the upload tool, which should run on the cluster unless you still have the default local override for it in your job runner's config file. Ryan's instance is running 'samtools index', which is in set_meta, which is supposed to be run on the cluster if set_metadata_externally = True, but can be run locally under certain conditions. --nate chris On Jan 20, 2012, at 10:43 AM, Shantanu Pavgi wrote: Just wanted to add that we have consistently seen this issue of 'samtools index' running locally on our install. We are using the SGE scheduler. Thanks for pointing out details in the code Nate. -- Shantanu. On Jan 20, 2012, at 9:35 AM, Nate Coraor wrote: On Jan 18, 2012, at 11:54 AM, Ryan Golhar wrote: Nate - Is there a specific place in the Galaxy code that forks the samtools index on bam files on the cluster or the head node? I really need to track this down. Hey Ryan, Sorry it's taken so long, I've been pretty busy. The relevant code is in galaxy-dist/lib/galaxy/datatypes/binary.py, in the Bam class. When Galaxy runs a tool, it creates a Job, which is placed inside a JobWrapper in lib/galaxy/jobs/__init__.py.
After the job execution is complete, the JobWrapper.finish() method is called, which contains:

if not self.app.config.set_metadata_externally or \
   ( not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) \
     and self.app.config.retry_metadata_internally ):
    dataset.set_meta( overwrite = False )

Somehow, this conditional is being entered. Since set_metadata_externally is set to True, presumably the problem is that external_metadata_set_successfully() is returning False and retry_metadata_internally is set to True. If you leave behind the relevant job files (cleanup_job = never) and have a look at the PBS and metadata outputs you may be able to see what's happening. Also, you'll want to set retry_metadata_internally = False. --nate On Fri, Jan 13, 2012 at 12:54 PM, Ryan Golhar ngsbioinformat...@gmail.com wrote: I re-uploaded 3 BAM files using the Upload system file paths. runner0.log shows:

galaxy.jobs DEBUG 2012-01-13 12:50:08,442 dispatching job 76 to pbs runner
galaxy.jobs INFO 2012-01-13 12:50:08,555 job 76 dispatched
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:08,697 (76) submitting file /home/galaxy/galaxy-dist-9/database/pbs/76.sh
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:08,697 (76) command is: python /home/galaxy/galaxy-dist-9/tools/data_source/upload.py /home/galaxy/galaxy-dist-9 /home/galaxy/galaxy-dist-9/datatypes_conf.xml /home/galaxy/galaxy-dist-9/database/tmp/tmpqrVYY7 208:/home/galaxy/galaxy-dist-9/database/job_working_directory/76/dataset_208_files:None 209:/home/galaxy/galaxy-dist-9/database/job_working_directory/76/dataset_209_files:None 210:/home/galaxy/galaxy-dist-9/database/job_working_directory/76/dataset_210_files:None; cd /home/galaxy/galaxy-dist-9; /home/galaxy/galaxy-dist-9/set_metadata.sh ./database/files ./database/tmp .
datatypes_conf.xml ./database/job_working_directory/76/galaxy.json
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:08,699 (76) queued in default queue as 114.localhost.localdomain
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:09,037 (76/114.localhost.localdomain) PBS job state changed from N to R
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:51:09,205 (76/114.localhost.localdomain) PBS job state changed from R to E
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:51:10,206 (76/114.localhost.localdomain) PBS job state changed from E to C
galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:51:10,206 (76/114.localhost.localdomain) PBS job has completed successfully

76.sh shows:

[galaxy@bic pbs]$ more 76.sh
#!/bin/sh
GALAXY_LIB="/home/galaxy/galaxy-dist-9/lib"
if [ "$GALAXY_LIB" != "None" ]; then
    if [ -n "$PYTHONPATH" ]; then
        export PYTHONPATH="$GALAXY_LIB:$PYTHONPATH"
    else
        export PYTHONPATH="$GALAXY_LIB"
    fi
fi
cd
Re: [galaxy-dev] generate dynamic select list based on other input dataset
Hi Jeremy, I understood that the filter tags help if you want to filter input data based on options in a .loc file, right? My aim is to extract the column names of an input dataset, present them in a selection box or dropdown list, let the user choose one, and process the input set (with two inputs, the input set itself and the selection made based on the input set). Do you think that something like this is possible at all? I attach the source of my dummy module and the python library which I use via <code file=...>. The syntax error I get is: SyntaxError: invalid syntax (string, line 1) The complete traceback is below. The thread I was referring to in my mail can be found here: http://www.mail-archive.com/galaxy-dev@lists.bx.psu.edu/msg03666.html (Dec 12, 2011; Dynamic Tool Parameter Lists). Cheers, Holger

Module weberror.evalexception.middleware:364 in respond
  app_iter = self.application(environ, detect_start_response)
Module paste.debug.prints:98 in __call__
  environ, self.app)
Module paste.wsgilib:539 in intercept_output
  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
  return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__
  body = method( trans, **kwargs )
Module galaxy.web.controllers.tool_runner:68 in index
  template, vars = tool.handle_input( trans, params.__dict__ )
Module galaxy.tools:1147 in handle_input
  state = self.new_state( trans )
Module galaxy.tools:1075 in new_state
  self.fill_in_new_state( trans, inputs, state.inputs )
Module galaxy.tools:1084 in fill_in_new_state
  state[ input.name ] = input.get_initial_value( trans, context )
Module galaxy.tools.parameters.basic:788 in get_initial_value
  options = list( self.get_options( trans, context ) )
Module galaxy.tools.parameters.basic:641 in get_options
  return eval( self.dynamic_options, self.tool.code_namespace, other_values )
SyntaxError: invalid syntax (string, line 1)

On 02/13/2012 03:04 PM, Jeremy Goecks wrote: Holger, Have you looked at how dynamic options work and whether they would be sufficient for your use? See the filter tag syntax for details: http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax#A.3Cfilter.3E_tag_set To specifically address your problem: can you determine the particular place where the syntax error is appearing? And can you provide a link to the thread that you're using as a starting point? Thanks, J. On Feb 10, 2012, at 3:43 PM, Holger Klein wrote: Dear all, I'm still stuck with the problem of dynamically generating an option list extracted from a user-selectable input dataset. Does anybody have experience here, or is this not possible at all? Have a nice weekend, Holger On 02/07/2012 09:58 PM, Holger Klein wrote: Dear all, I have a working module which generates wig files of genomic annotation from a single column of a bigger input data matrix (Input A). In its current state, the user has to input the column name (Input B) from which to calculate the values in the wig file. Now I'd like to modify the XML in such a way that, depending on the input dataset (Input A), a dynamic list for Input B is generated.
I found Hans-Rudolf Hotz's hints from some time ago on this list and thought that the following would be a good start:

<param name="InputB" label="InputBName" format="data" type="select" help="Use tickboxes to select model" display="radio" dynamic_options="getInputBOptions($InputA)"/>
<code file="getInputBOptionsFromInputA.py"/>

getInputBOptionsFromInputA.py contains a single function:

def getInputBOptions($InputA):
    ## parse Input A
    ## create list InputBOptions
    return(InputBOptions)

Using this approach I get an invalid syntax message when trying to even open the module - in any case I have the feeling that something is still missing here. Did anybody solve a similar problem already and could give me a hint on how to solve it? Cheers, Holger -- Dr. Holger Klein Core Facility Bioinformatics Institute of Molecular Biology gGmbH (IMB) http://www.imb-mainz.de/ Tel: +49 (6131) 39 21511 -- Dr. Holger Klein Core Facility Bioinformatics Institute of Molecular Biology gGmbH (IMB) http://www.imb-mainz.de/ Tel: +49 (6131) 39 21511

The attached python file begins:

def getDynamicOptions(Outfile):
    MO = open(Outfile, r)
    header
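Galaxy evaluates the dynamic_options string with eval(), so the code-file function has to be valid Python: the $-prefixed parameter in the snippet above is exactly the kind of thing that produces a SyntaxError. As a hedged sketch, a syntactically valid version might look like the following; the dataset object's file_name attribute and the (name, value, selected) option-tuple format are my assumptions about Galaxy's tool API and should be verified against the wiki.

```python
# Hypothetical code-file function for dynamic_options. The parameter must be
# a plain Python name (no $ prefix), and each option is assumed to be a
# (name, value, selected) tuple as Galaxy's select parameters expect.
def getInputBOptions(InputA):
    """Build select-list options from the header line of the input dataset."""
    options = []
    try:
        # InputA is assumed to be a dataset object exposing .file_name
        with open(InputA.file_name) as handle:
            header = handle.readline().rstrip("\n").split("\t")
    except Exception:
        return options  # no readable input yet: empty option list
    for index, column in enumerate(header):
        options.append((column, column, index == 0))  # preselect first column
    return options
```

The try/except matters because Galaxy may call the function while the form is being rendered, before the user has picked an input dataset.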
Re: [galaxy-dev] Batch limit on Workflows
Sorry for the delay, but I am using a galaxy-dist clone, and have updated to revision 26920e20157f+ tip. I'm still seeing the same issue with that revision. Maybe I should try out galaxy-central? Thanks, Robert From: Dannon Baker [dannonba...@me.com] Sent: Friday, February 10, 2012 11:50 AM To: Petit III, Robert A. Cc: galaxy-dev@lists.bx.psu.edu Subject: Re: [galaxy-dev] Batch limit on Workflows What revision is your Galaxy instance at? I'm not seeing this behavior on tip with a simple test; it may have been something we've fixed in a more recent revision. -Dannon On Feb 10, 2012, at 11:14 AM, Petit III, Robert A. wrote: Hi there, I've run into an issue on my local Galaxy install. When I have 20 or more datasets in my history, I no longer get the option to 'Enable/Disable selection of multiple input files...' I instead get a broken drop-down list. I say broken because it's as though the drop-down list and input box have merged together. Is there a setting I need to change to correct this? Thanks Robert
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Feb 8, 2012, at 11:58 AM, Ryan Golhar wrote: Hi Nate - I finally got a chance to look at this briefly, but I must admit, my Python skills are lacking. In the Bam class in binary.py, all I see are calls to proc = subprocess.Popen( args=command, shell=True, cwd=tmp_dir, stderr=open( stderr_name, 'wb' ) ) which, to me, look like calls to execute a command. So maybe Galaxy is running samtools on the webserver because of this? This is indeed the place in the code where samtools is called, but that code can be called from within the external metadata setting tool or from the job runner. In your case, it's happening in the job runner despite having set_metadata_externally = True. Could you check the conditionals in the earlier email I sent: The relevant code is in galaxy-dist/lib/galaxy/datatypes/binary.py, in the Bam class. When Galaxy runs a tool, it creates a Job, which is placed inside a JobWrapper in lib/galaxy/jobs/__init__.py. After the job execution is complete, the JobWrapper.finish() method is called, which contains:

if not self.app.config.set_metadata_externally or \
   ( not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) \
     and self.app.config.retry_metadata_internally ):
    dataset.set_meta( overwrite = False )

Somehow, this conditional is being entered. Since set_metadata_externally is set to True, presumably the problem is that external_metadata_set_successfully() is returning False and retry_metadata_internally is set to True. If you leave behind the relevant job files (cleanup_job = never) and have a look at the PBS and metadata outputs you may be able to see what's happening. Also, you'll want to set retry_metadata_internally = False.
Namely, try adding the following right above that conditional:

log.debug(' %s: %s' % (type(self.app.config.set_metadata_externally), self.app.config.set_metadata_externally))
log.debug(' %s: %s' % (type(self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session )), self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session )))
log.debug(' %s: %s' % (type(self.app.config.retry_metadata_internally), self.app.config.retry_metadata_internally))

I am guessing self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) is returning False, and self.app.config.retry_metadata_internally is True, so then we'd need to determine why external metadata is failing for this job. --nate On Fri, Jan 20, 2012 at 11:43 AM, Shantanu Pavgi pa...@uab.edu wrote: Just wanted to add that we have consistently seen this issue of 'samtools index' running locally on our install. We are using the SGE scheduler. Thanks for pointing out details in the code Nate. -- Shantanu. On Jan 20, 2012, at 9:35 AM, Nate Coraor wrote: On Jan 18, 2012, at 11:54 AM, Ryan Golhar wrote: Nate - Is there a specific place in the Galaxy code that forks the samtools index on bam files on the cluster or the head node? I really need to track this down. Hey Ryan, Sorry it's taken so long, I've been pretty busy. The relevant code is in galaxy-dist/lib/galaxy/datatypes/binary.py, in the Bam class. When Galaxy runs a tool, it creates a Job, which is placed inside a JobWrapper in lib/galaxy/jobs/__init__.py. After the job execution is complete, the JobWrapper.finish() method is called, which contains:

if not self.app.config.set_metadata_externally or \
   ( not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) \
     and self.app.config.retry_metadata_internally ):
    dataset.set_meta( overwrite = False )

Somehow, this conditional is being entered.
Since set_metadata_externally is set to True, presumably the problem is external_metadata_set_successfully() is returning False and retry_metadata_internally is set to True. If you leave behind the relevant job files (cleanup_job = never) and have a look at the PBS and metadata outputs you may be able to see what's happening. Also, you'll want to set retry_metadata_internally = False. --nate On Fri, Jan 13, 2012 at 12:54 PM, Ryan Golhar ngsbioinformat...@gmail.com wrote: I re-uploaded 3 BAM files using the Upload system file paths. runner0.log shows: galaxy.jobs DEBUG 2012-01-13 12:50:08,442 dispatching job 76 to pbs runner galaxy.jobs INFO 2012-01-13 12:50:08,555 job 76 dispatched galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:08,697 (76) submitting file /home/galaxy/galaxy-dist-9/database/pbs/76.sh galaxy.jobs.runners.pbs DEBUG 2012-01-13 12:50:08,697 (76) command is: python /home/galaxy/galaxy-dist-9/tools/data_source/upload.py
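As an aside, the conditional quoted in this thread boils down to a small boolean rule. This standalone restatement (illustrative only, not actual Galaxy code; the function name is mine) shows why setting retry_metadata_internally = False keeps metadata work off the server:

```python
# Illustrative restatement of the JobWrapper.finish() conditional: metadata is
# set internally (on the Galaxy server itself) only if external setting is
# disabled, or if it failed and internal retry is allowed. Not Galaxy code.
def sets_metadata_internally(set_metadata_externally, external_set_ok, retry_internally):
    return (not set_metadata_externally) or (not external_set_ok and retry_internally)

# With set_metadata_externally = True, external metadata must be failing for
# samtools to run locally:
assert sets_metadata_internally(True, False, True) is True
# The recommended fix, retry_metadata_internally = False, closes that path:
assert sets_metadata_internally(True, False, False) is False
```

Of course this only suppresses the local fallback; the underlying question of why external metadata setting fails still has to be answered from the job files.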
Re: [galaxy-dev] history box odd behaviour
On Feb 13, 2012, at 11:22 AM, Bossers, Alex wrote: When I press the eye icon on the running blast history box, it drops an error in the mid panel that the XML is not there yet (XML parse error). But then the box changes to green while the blast is still running in the background (checked with htop). The subsequent parsers remain in the queue as well. Shouldn't the box remain yellow while processing? Or is something wrong with our datatype configurations... although I cannot remember that we have changed anything in there. Alex: I noticed this too but didn't take the time to file it as a bug - I think it is a bug. If something is wrong with your setup, it's also wrong with mine in the same way. brad Brad Langhorst langho...@neb.com 978-380-7564
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Feb 13, 2012, at 9:45 AM, Nate Coraor wrote: On Feb 8, 2012, at 9:32 PM, Fields, Christopher J wrote: 'samtools sort' seems to be running on our server end as well (not on the cluster). I may look into it a bit more myself. Snapshot of top off our server (you can see our local runner as well):

 PID  USER    PR  NI  VIRT   RES   SHR  S %CPU %MEM   TIME+    COMMAND
3950  galaxy  20   0  1303m  1.2g   676 R 99.7 15.2  234:48.07 samtools sort /home/a-m/galaxy/dist-database/file/000/dataset_587.dat /home/a-m/galaxy/dist-database/tmp/tmp9tv6zc/sorted
5417  galaxy  20   0  1186m  104m  5384 S  0.3  1.3    0:15.08 python ./scripts/paster.py serve universe_wsgi.runner.ini --server-name=runner0 --pid-file=runner0.pid --log-file=runner0.log --daemon

Hi Chris, 'samtools sort' is run by groom_dataset_contents, which should only be called from within the upload tool, which should run on the cluster unless you still have the default local override for it in your job runner's config file. Yes, that is likely the problem. Our cluster was running an old version of python (v2.4) that was also UCS2 (bx_python broke), so we were running locally. That was rectified this past week (the admins insisted on not installing a python version locally, so we insisted back they install something modern using UCS4). I tested a single upload with success off the cluster, so I would guess this is rectified (I'll confirm that). Is there any information on data grooming on the wiki? I only found info relevant to FASTQ grooming, not SAM/BAM. Ryan's instance is running 'samtools index', which is in set_meta, which is supposed to be run on the cluster if set_metadata_externally = True, but can be run locally under certain conditions. --nate Will have to check, but I believe we have not set that yet either. We are in the midst of moving all jobs to the cluster, just rectifying the various issues with disparate python versions, etc. which now seem to be rectified, so that will shortly be resolved as well.
chris
Re: [galaxy-dev] Uploading large file in browser
On Feb 9, 2012, at 7:05 PM, Nic Barker wrote: Hi All, Apologies for resurrecting an old thread, but an HTML5 chunk uploader, as Hyunsoo previously suggested, would actually be very useful for my organisation in terms of streamlining our workflow for less technically able users. I was wondering if there was any intention to include functionality like this at some point, although I'm perfectly happy to use other methods if there isn't. Cheers, -Nic Hi Nic, I started working on HTML5 uploading a while back but didn't get too far with it. We'll definitely get back to rewriting uploading at some point. --nate On 31/01/2012, at 7:12 AM, Nate Coraor wrote: On Jan 26, 2012, at 4:29 AM, Hans-Rudolf Hotz wrote: On 01/25/2012 08:20 PM, Kim, Hyunsoo wrote: Hi again, I'm kind of lost here. Does the Data Library allow regular users to upload files directly to Galaxy from their remote workstation, or does it allow users to use files that already exist in Galaxy? just to clarify: regular users can copy (i.e. using scp) their data to the galaxy server, and then use the advantages of Data Libraries; see the user_library_import_dir option in the 'universe_wsgi.ini' file. Regards, Hans It's also possible to use the FTP Upload functionality with a protocol other than standard FTP (i.e. with scp) so that users can upload scp'd files directly to a history. --nate My intention was to use an HTML5 file uploader which chunks a large file (> 2 GB) into smaller pieces so that regular users can upload large files through Galaxy's GUI without an external FTP tool. Thanks, Hyunsoo -Original Message- From: Hans-Rudolf Hotz [mailto:h...@fmi.ch] Sent: Wednesday, January 25, 2012 4:02 AM To: Kim, Hyunsoo Cc: galaxy-dev@lists.bx.psu.edu Subject: Re: [galaxy-dev] Uploading large file in browser On 01/24/2012 10:27 PM, Kim, Hyunsoo wrote: Hello, I have a local instance of Galaxy and wanted to modify the upload file so that I will be able to upload large files (> 2 GB).
The reason I am trying to do this in the browser is that extra tools for FTP do not really work in my environment because of all the constraints and firewalls. I came up with the jQuery file upload tool (http://blueimp.github.com/jQuery-File-Upload/), and the tool seems fine if it is possible for me to integrate it into my galaxy instance. My questions are:
- Is it too cumbersome to achieve this goal with external tools?
- How deep should I go into Galaxy (at the code level) to integrate this jQuery tool?
- Are there any alternatives for uploading large files in the browser without FTP?
as an alternative: have you looked into using Data Libraries, see: http://wiki.g2.bx.psu.edu/Admin/Data%20Libraries/Libraries http://wiki.g2.bx.psu.edu/Admin/Data%20Libraries/Uploading%20Library%20Files Regards, Hans Thanks, Daniel
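For readers following the chunked-upload idea discussed above: the client-side half just splits the byte stream into fixed-size pieces that can be sent one at a time and concatenated server-side. A minimal, library-free sketch of that splitting step; the 64 KiB chunk size and the helper name are arbitrary choices of mine:

```python
# Minimal sketch of the client-side chunking step for a large upload: split a
# byte stream into fixed-size pieces that could be POSTed one at a time and
# concatenated on the server. The 64 KiB chunk size is arbitrary.
import io

def iter_chunks(fileobj, chunk_size=64 * 1024):
    """Yield successive chunks of at most chunk_size bytes from fileobj."""
    while True:
        piece = fileobj.read(chunk_size)
        if not piece:
            break
        yield piece

data = b"x" * (150 * 1024)                    # stand-in for a large file
chunks = list(iter_chunks(io.BytesIO(data)))
assert b"".join(chunks) == data               # reassembly is lossless
print(len(chunks))                            # 3 chunks for 150 KiB at 64 KiB
```

The interesting engineering is on the server (tracking which chunks arrived, resuming after failure), which is presumably what made the Galaxy rewrite non-trivial.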
Re: [galaxy-dev] Stalled upload jobs under Admin, Manage jobs
On Feb 10, 2012, at 6:47 AM, Peter Cock wrote: Hello all, I've noticed we have about a dozen stalled upload jobs on our server from several users, e.g.:

Job ID  User  Last Update   Tool     State   Command Line  Job Runner  PID/Cluster ID
2352          21 hours ago  upload1  upload  None          None        None
...
2339          19 hours ago  upload1  upload  None          None        None

The job numbers are consecutive (2339 to 2352) and reflect a problem for a couple of hours yesterday morning. I believe this was due to the underlying file system being unmounted (without restarting Galaxy), and at the time restarting Galaxy fixed uploading files. Test jobs since then have completed normally - but these zombie jobs remain. Using the Stop jobs option does not clear these dead upload jobs. Restarting the Galaxy server does not clear them either. This is our production server and was running galaxy-dist, changeset 5743:720455407d1c - which I have now updated to the current release, 6621:26920e20157f - which makes no difference to these stalled jobs. Does anyone have any insight into what might be wrong, and how to get rid of these zombie tasks?

Hi Peter, Are you using the nginx upload module? There's no way to fix these from within Galaxy, unfortunately. You'll have to update them in the database. --nate

Thanks, Peter
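The database fix Nate alludes to would amount to setting the zombie jobs' state to a terminal value. The sketch below demonstrates the idea against an in-memory SQLite stand-in for Galaxy's `job` table; the table and column names are assumptions based on the job listing above, so verify them against your actual schema and back up the database before running anything similar against a real instance.

```python
import sqlite3

# Hypothetical sketch: Galaxy production databases are usually
# PostgreSQL/MySQL; an in-memory SQLite table stands in for the `job`
# table here so the UPDATE can be demonstrated safely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO job VALUES (?, ?)",
                 [(i, "running") for i in range(2339, 2353)])

# Mark the zombie upload jobs (ids 2339-2352 in Peter's case) as errored
# so they no longer show as running in Admin -> Manage jobs.
conn.execute("UPDATE job SET state = 'error' WHERE id BETWEEN 2339 AND 2352")
conn.commit()

states = [row[0] for row in conn.execute("SELECT DISTINCT state FROM job")]
print(states)  # prints ['error']
```

The same `UPDATE` statement, with the correct ids, is what you would issue through `psql`/`mysql` on the real database.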
Re: [galaxy-dev] Error executing tool: 'hg19'
This bug has been fixed in galaxy-central but, as per one of the threads you found, has not made it to galaxy-dist yet. Try this:

1. Assuming you haven't made any changes to datatypes_conf.xml, copy datatypes_conf.xml.sample to datatypes_conf.xml.
2. Visit the custom builds page (User tab -- Custom Builds) and see if the page loads properly; if it does, your custom build should work and everything should be fine.
3. This is a user-specific problem, so you can always create a new user and you should be good to go. If you want to continue to use the problematic account and the above doesn't work, you'll need to manually--via SQL--delete the user preferences for the problematic user.

Best, J.

On Feb 13, 2012, at 10:30 AM, Joachim Jacob wrote: For completeness: this is a local Galaxy instance running for a few months now, updated recently (27 Jan 'release').

Hello Joachim,

After a quick try at visualising a track in Trackster (importing one chromosome of hg19 - which did not succeed, BTW), none of the tools in my local Galaxy appear to work.

What are the steps you're taking to produce this issue?

I had a BAM file that I wanted to try viewing with Trackster. I clicked that icon in the dataset. In the next screen: save in a new visualisation, which I called 'test'. And I selected there 'Add a custom build'. Next I could select which FASTA from my history contained the reference: I selected the correct one and named it hg19_chrom21. But then, clicking OK, gave an error. From then on, all tools give the error: Error executing tool: 'hg19'. The point is, I cannot recreate that BAM file, since the required tools are not working anymore... Basically, my Galaxy has become useless. Before I dig up my backup, I hope somebody can help me?

They all send this error message: Error executing tool: 'hg19'

Are you seeing this error in failed datasets? If not, where are you seeing this error?

This error appears in the middle pane, after clicking execute.
This bug has been reported before, but I was wondering if somebody could suggest a fix for this?

Can you provide a link to the thread/issue where this has been reported?

http://www.mail-archive.com/galaxy-dev@lists.bx.psu.edu/msg04216.html http://www.mail-archive.com/galaxy-dev@lists.bx.psu.edu/msg03995.html

Thanks, J.

Thanks for looking into this.
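For readers who hit the same dead end, the manual SQL fix Jeremy mentions (deleting the problematic user's stored preferences) might look like the sketch below. The `user_preference` table and its columns are assumptions for illustration; check your actual Galaxy schema and back up the database first. An in-memory SQLite table stands in for the real database.

```python
import sqlite3

# Hypothetical sketch of the manual fix: remove the stored preferences for
# the affected user so the broken custom-build entry is gone. Table and
# column names are assumptions -- verify against your Galaxy schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_preference "
             "(id INTEGER PRIMARY KEY, user_id INTEGER, name TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO user_preference (user_id, name, value) VALUES (?, ?, ?)",
    [(42, "dbkeys", "broken custom build entry"),   # the problematic user
     (7, "dbkeys", "a healthy entry")])             # an unaffected user

# Delete the preferences for the problematic user (id 42 is made up here).
conn.execute("DELETE FROM user_preference WHERE user_id = 42")
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM user_preference").fetchone()[0]
print(remaining)  # prints 1
```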
Re: [galaxy-dev] Problem with self.object_store.exists in set_dataset_sizes.py
On Feb 10, 2012, at 7:06 AM, Peter Cock wrote: Hi all, I've just updated our Galaxy to the current release 26920e20157f which now includes quota support. This means I need to run set_dataset_sizes.py in order to record the current usage, as explained here: http://wiki.g2.bx.psu.edu/Admin/Disk%20Quotas However, I have run into a problem:

$ sudo -u galaxy python2.6 scripts/set_dataset_sizes.py
Loading Galaxy model...
Processing 3011 datasets...
Completed 0%
Traceback (most recent call last):
  File "scripts/set_dataset_sizes.py", line 45, in <module>
    dataset.set_total_size()
  File "lib/galaxy/model/__init__.py", line 702, in set_total_size
    if self.object_store.exists(self, extra_dir=self._extra_files_path or "dataset_%d_files" % self.id, dir_only=True):
AttributeError: 'NoneType' object has no attribute 'exists'

According to the comments in that file, self.object_store should be initialized in mapping.py (method init) by app.py - apparently that isn't happening. Has anyone else seen this?

Yup, fixed in da8591377954. --nate

Thanks, Peter
Re: [galaxy-dev] Galaxy server configuration question
On Feb 13, 2012, at 3:08 AM, Huayan Gao wrote: Hi Nate. I am OK now. The code I copied from the wiki was using galaxy_dist, while my folder name is galaxy-dist. After I changed the path in the code, the problem was solved.

Hi Huayan, I've updated the wiki to refer to galaxy-dist rather than galaxy_dist. Sorry for the confusion. I would suggest moving Galaxy out of /var/www/html. From the documentation: Please note that Galaxy should never be located on disk inside Apache's DocumentRoot. By default, this would expose all of Galaxy (including datasets) to anyone on the web. Galaxy is a proxied application and as such, only the static content like javascript and images are served directly by Apache (and this is set up with the RewriteRules); everything else is passed through to the Galaxy application via a proxied http connection. Right now I could presumably use the URL http://server/galaxy/galaxy-dist/database/files/000/dataset_1.dat to view a dataset directly. --nate

Best, Huayan

On 13 Feb, 2012, at 11:40 AM, Huayan Gao wrote: Hi Nate, I removed the proxy section in the httpd file and got the following screenshot. It seems to be working, but not in the way we expected. I will keep looking for the solution, but do you know how to fix it? It seems to say the file .../static/welcome.html is missing, or something like that. Thanks, Huayan [Attachment: Screen Shot 2012-02-13 at 11.37.57 AM.png]

On 10 Feb, 2012, at 10:28 AM, Huayan Gao wrote: Hi Nate, Yes, I did follow the instructions. But I came to a question about the httpd.conf file. I put galaxy-dist under my document root, which is /var/www/html/. When my server is up, I can access my UCSC Genome Browser mirror site through my IP address, for example, http://61.244.xxx.xxx. Then how should I set things up in the httpd.conf file so I can access Galaxy using my IP address, for example, http://61.244.xxx.xxx/galaxy?
Thanks, Best, Huayan

On 10 Feb, 2012, at 1:17 AM, Nate Coraor wrote: On Feb 8, 2012, at 1:00 AM, Huayan Gao wrote: Dear Sir or Madam, I am installing a Galaxy server on CentOS with a UCSC Genome Browser mirror site. The mirror site works well. I installed Galaxy on the same server. Now my question is: how do I set up the httpd.conf file so I can access both websites (UCSC Genome Browser and Galaxy) remotely?

Hi Huayan, Have you consulted the production server documentation? http://usegalaxy.org/production --nate

Best, Huayan
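For readers with the same two-site setup, the general shape of an httpd.conf fragment is sketched below. The paths and port are assumptions (a Galaxy checkout at /home/galaxy/galaxy-dist, Galaxy listening on port 8080); adapt them to your install, and keep Galaxy outside the DocumentRoot as Nate advises, so the UCSC mirror keeps serving from the DocumentRoot while /galaxy is handled separately.

```apache
# Sketch only -- paths and port are assumptions, not from the thread.
# Static content is served directly from the Galaxy checkout; everything
# else under /galaxy is proxied to the Galaxy application.
RewriteEngine on
RewriteRule ^/galaxy/static/style/(.*) /home/galaxy/galaxy-dist/static/june_2007_style/blue/$1 [L]
RewriteRule ^/galaxy/static/(.*) /home/galaxy/galaxy-dist/static/$1 [L]
RewriteRule ^/galaxy(.*) http://localhost:8080$1 [P]
```

When Galaxy is served under a URL prefix like /galaxy, the proxy-prefix filter also needs to be enabled in universe_wsgi.ini (prefix = /galaxy), as described in the production server documentation.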
Re: [galaxy-dev] LDAP authentification
On Feb 13, 2012, at 7:38 AM, Sarah Maman wrote: Hello, I managed to connect Galaxy to LDAP ;-) Three points were blocking me: * being root on my virtual machine so that I could carry out tests * I confused the logins/passwords of two LDAP servers, so I thought my authentication method was wrong while I was actually using the wrong password ... * it is better not to go through a proxy

Hi Sarah, Thanks very much for reporting back with your findings. This should be very helpful for people who stumble onto similar problems in the future.

1 - Set Galaxy's configuration file, universe_wsgi.ini, to delegate user authentication to an upstream Apache proxy:

# Users and Security
use_remote_user = True
remote_user_maildomain = toulouse.inra.fr

2 - Create an htaccess-type file named galaxy.conf (in /etc/httpd/conf.d/). For reasons of performance and safety, it is advisable not to use a .htaccess file but a galaxy.conf file in the main (Apache) server configuration, because the latter is loaded once when the server starts, whereas a .htaccess file is loaded on every access.

RewriteEngine on
<Location /galaxy>
    # Define the authentication method
    AuthType Basic
    AuthName Galaxy
    AuthBasicProvider ldap
    AuthLDAPURL ldap://<server URL>:389/...
    AuthzLDAPAuthoritative off
    Require valid-user
    RequestHeader set REMOTE_USER %{AUTHENTICATE_uid}e
</Location>
RewriteRule ^/$ /galaxy/ [R]
RewriteRule ^/galaxy/static/style/(.*) /var/www/html/galaxy/static/june_2007_style/blue/$1 [L]
RewriteRule ^/galaxy/static/scripts/(.*) /var/www/html/galaxy/static/scripts/packed/$1 [L]
RewriteRule ^/galaxy/static/(.*) /var/www/html/galaxy/static/$1 [L]
RewriteRule ^/galaxy/favicon.ico /var/www/html/galaxy/static/favicon.ico [L]
RewriteRule ^/galaxy/robots.txt /var/www/html/galaxy/static/robots.txt [L]
RewriteRule ^/galaxy(.*) http://<ip>:<port>$1 [P]

As Galaxy is not installed in the root directory but in a galaxy directory (/var/www/html/galaxy/), the following changes are needed:

This is probably not a good idea. From the documentation: Please note that Galaxy should never be located on disk inside Apache's DocumentRoot. By default, this would expose all of Galaxy (including datasets) to anyone on the web. Galaxy is a proxied application and as such, only the static content like javascript and images are served directly by Apache (and this is set up with the RewriteRules); everything else is passed through to the Galaxy application via a proxied http connection. Right now I could presumably use the URL http://server/galaxy/galaxy-dist/database/files/000/dataset_1.dat to view a dataset directly.

1 - Add a RewriteRule
2 - Do not go through a proxy

Can you clarify this? I'm a bit confused, since if you are connecting to Apache to access Galaxy, you are going through a proxy.

3 - The REMOTE_USER variable is AUTHENTICATE_uid (AUTHENTICATE_sAMAccountName for Windows AD)

I've added this to the wiki page, thanks! --nate

4 - To generate dynamic URLs, it is necessary to configure the prefix in universe_wsgi.ini:

[filter:proxy-prefix]
use = egg:PasteDeploy#prefix
prefix = /galaxy

[app:main]
filter-with = proxy-prefix
cookie_path = /galaxy

If you are not root on the virtual machine, create a symlink from /etc/httpd/conf.d/ to galaxy.conf.

3 - Some useful checks: Verify the Apache version and the Apache modules, because each directive must have an associated module:

AuthType → mod_auth_basic.so
AuthBasicProvider → mod_authnz_ldap and mod_authz_ldap (which need mod_ldap)
Rewrite (for the proxy) → mod_rewrite.so
RequestHeader → mod_headers

Check the connection to the LDAP server using this command: ldapsearch -x -h <LDAP URL>:<port> -b <dc>

When you make a modification in galaxy.conf, restart Apache (or do a graceful restart). In httpd.conf, so that access management is authorized by the file:

# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
AccessFileName .htaccess

Check: chmod 777 galaxy.conf

4 - Finally, restart run.sh (sh run.sh)

Thanks A LOT for your help, Sarah
Re: [galaxy-dev] cufflinks could not move file as directed by from_work_dir
On Feb 13, 2012, at 11:07 AM, p.baska...@leeds.ac.uk wrote: Hi Nate, I got the following error in the log file after running cufflinks on the local instance of Galaxy:

Hi Praveen, Please use the galaxy-dev mailing list (CC'd) for these types of questions so that you can receive a more timely and accurate response.

galaxy.jobs.runners.drmaa DEBUG 2012-02-13 16:03:26,703 (340/2364848) state change: job is running
10.12.152.44 - - [13/Feb/2012:16:03:29 +0100] POST /root/history_item_updates HTTP/1.1 200 - http://localhost:8181/history; Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.46 Safari/535.11
galaxy.jobs.runners.drmaa DEBUG 2012-02-13 16:03:33,081 (340/2364848) state change: job finished normally
galaxy.jobs DEBUG 2012-02-13 16:03:33,203 finish(): Could not move /nobackup/galaxy/database/job_working_directory/340/cufflinks_out/genes.fpkm_tracking to /nobackup/galaxy/database/files/000/dataset_397.dat as directed by from_work_dir
galaxy.jobs DEBUG 2012-02-13 16:03:33,236 finish(): Could not move /nobackup/galaxy/database/job_working_directory/340/cufflinks_out/isoforms.fpkm_tracking to /nobackup/galaxy/database/files/000/dataset_398.dat as directed by from_work_dir
10.12.152.44 - - [13/Feb/2012:16:03:34 +0100] POST /root/history_item_updates HTTP/1.1 200 - http://localhost:8181/history; Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.46 Safari/535.11
10.12.152.44 - - [13/Feb/2012:16:03:34 +0100] POST /root/history_get_disk_size HTTP/1.1 200 - http://localhost:8181/history; Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.46 Safari/535.11
galaxy.jobs DEBUG 2012-02-13 16:03:36,271 job 340 ended
galaxy.datatypes.metadata DEBUG 2012-02-13 16:03:36,272 Cleaning up external metadata files
galaxy.datatypes.metadata DEBUG 2012-02-13 16:03:36,489 Failed to cleanup MetadataTempFile temp files from ../../../../../../nobackup/galaxy/database/tmp/metadata_out_HistoryDatasetAssociation_529_nHaLSL: No JSON object could be decoded: line 1 column 0 (char 0)

It looks like the job has failed and what you're seeing here is the failure of the job finish/cleanup process, since the job failed to produce the expected outputs. Try setting cleanup_job = never in your Galaxy config file and then have a look at the DRM's output and error files to see if this reveals anything about why the job failed. --nate

Can you please help me to solve this? Thanks, Praveen
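Nate's debugging suggestion amounts to a one-line change in the Galaxy configuration; a sketch of the relevant universe_wsgi.ini fragment is below (restart Galaxy after editing, and remember to revert it once the problem is found, since job files will otherwise accumulate).

```ini
# universe_wsgi.ini -- keep job files for inspection instead of deleting them
[app:main]
# accepted values include always / onsuccess / never; 'never' leaves the job
# working directory and the DRM stdout/stderr files in place after a failure
cleanup_job = never
```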
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Feb 13, 2012, at 11:52 AM, Fields, Christopher J wrote: On Feb 13, 2012, at 9:45 AM, Nate Coraor wrote: On Feb 8, 2012, at 9:32 PM, Fields, Christopher J wrote: 'samtools sort' seems to be running on our server end as well (not on the cluster). I may look into it a bit more myself. Snapshot of top on our server (you can see our local runner as well):

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
 3950 galaxy 20  0 1303m 1.2g  676 R 99.7 15.2 234:48.07 samtools sort /home/a-m/galaxy/dist-database/file/000/dataset_587.dat /home/a-m/galaxy/dist-database/tmp/tmp9tv6zc/sorted
 5417 galaxy 20  0 1186m 104m 5384 S  0.3  1.3   0:15.08 python ./scripts/paster.py serve universe_wsgi.runner.ini --server-name=runner0 --pid-file=runner0.pid --log-file=runner0.log --daemon

Hi Chris, 'samtools sort' is run by groom_dataset_contents, which should only be called from within the upload tool, which should run on the cluster unless you still have the default local override for it in your job runner's config file.

Yes, that is likely the problem. Our cluster was running an old version of Python (v2.4) that was also UCS2 (bx_python broke), so we were running locally. That was rectified this past week (the admins insisted on not installing a Python version locally, so we insisted back that they install something modern using UCS4). I tested a single upload with success off the cluster, so I would guess this is rectified (I'll confirm that). Is there any information on data grooming on the wiki? I only found info relevant to FASTQ grooming, not SAM/BAM.

FASTQ grooming runs voluntarily as a tool. The datatype grooming method is only called at the end of the upload tool, and is only defined for the Bam datatype (although other datatypes could define it). I believe it's implemented this way because it was deemed inefficient to force FASTQ grooming when the FASTQ may already be in an acceptable format.
I am not sure why the same determination was not made for BAM, so perhaps one of my colleagues will clarify that. Ryan's instance is running 'samtools index', which is in set_meta, which is supposed to be run on the cluster if set_metadata_externally = True, but can be run locally under certain conditions. --nate

I will have to check, but I believe we have not set that yet either. We are in the midst of moving all jobs to the cluster, just rectifying the various issues with disparate Python versions, etc., which now seem to be resolved, so that will shortly be sorted out as well.

set_metadata_externally = True should just work and will significantly decrease the performance penalty taken on the server and by the (effectively single-threaded) Galaxy process. --nate

chris
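For reference, the setting Nate recommends is a single line in the Galaxy config; a sketch of the relevant universe_wsgi.ini fragment is below (restart Galaxy after changing it).

```ini
# universe_wsgi.ini -- run metadata detection (e.g. 'samtools index') in a
# separate process, on the cluster, instead of inside the Galaxy server
[app:main]
set_metadata_externally = True
```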
[galaxy-dev] 2012 Galaxy Community Conference (GCC2012): Now Accepting Abstracts
Hello all, Abstracts (http://wiki.g2.bx.psu.edu/Events/GCC2012/Abstracts) are now being accepted for oral presentations at the 2012 Galaxy Community Conference (GCC2012, http://wiki.g2.bx.psu.edu/Events/GCC2012). Submissions on any topics of interest to the Galaxy community are encouraged. Areas of interest include, but are not limited to:

- Best practices for local Galaxy installation and management
- Integrating tools and/or data sources into the Galaxy framework
- Deploying Galaxy on different infrastructures
- Compelling or novel uses of Galaxy for biomedical analysis

See the GCC2011 program (http://wiki.g2.bx.psu.edu/Events/GCC2011) for an idea of the breadth of topics that can be covered. Oral presentations will be approximately 15-20 minutes long, including time for questions and answers. There will also be an opportunity for lightning talks, which will be solicited at the meeting. The submission deadline is April 16. See the GCC2012 Abstracts page (http://wiki.g2.bx.psu.edu/Events/GCC2012/Abstracts) for more details and how to submit. GCC2012 (http://wiki.g2.bx.psu.edu/Events/GCC2012) will be held July 25-27 in Chicago, Illinois, United States. The main meeting will run for two full days (http://wiki.g2.bx.psu.edu/Events/GCC2012/Program), and will be preceded by a full day of training workshops (http://wiki.g2.bx.psu.edu/Events/GCC2012/Program). If you are a bioinformatics tool developer, data provider, workflow developer, power bioinformatics user, sequencing or bioinformatics core staff, or a data and analysis archival specialist, then GCC2012 is relevant to you. Registration will open in March. GCC2012 is hosted by the University of Illinois at Chicago (http://uic.edu/), the University of Illinois at Urbana-Champaign (http://illinois.edu/), and the Computation Institute (http://www.ci.anl.gov/).
Links: http://galaxyproject.org/GCC2012 http://galaxyproject.org/wiki/Events/GCC2012/Abstracts Thanks, Dave Clements -- http://galaxyproject.org/ http://getgalaxy.org/ http://usegalaxy.org/ http://galaxyproject.org/wiki/
Re: [galaxy-dev] Batch limit on Wokflows
OK, I've tried the latest revision of galaxy-central and am still running into the same issue when I try to run a workflow with 20+ datasets in my history. I might try setting it up on another machine to see if I run into the same issue. Thanks, Robert

From: Petit III, Robert A. Sent: Monday, February 13, 2012 11:05 AM To: galaxy-dev@lists.bx.psu.edu Subject: RE: [galaxy-dev] Batch limit on Wokflows

Sorry for the delay, but I am using a galaxy-dist clone, and have updated to revision 26920e20157f+ tip. I'm still seeing the same issue with that revision. Maybe I should try out galaxy-central? Thanks, Robert

From: Dannon Baker [dannonba...@me.com] Sent: Friday, February 10, 2012 11:50 AM To: Petit III, Robert A. Cc: galaxy-dev@lists.bx.psu.edu Subject: Re: [galaxy-dev] Batch limit on Wokflows

What revision is your Galaxy instance at? I'm not seeing this behavior on tip with a simple test; it may have been something we've fixed in a more recent revision. -Dannon

On Feb 10, 2012, at 11:14 AM, Petit III, Robert A. wrote: Hi there, I've run into an issue on my local Galaxy install. When I have 20 or more datasets in my history, I no longer get the option to 'Enable/Disable selection of multiple input files...' I instead get a broken drop-down list. I say broken because it's as though the drop-down list and input box have merged together. Is there a setting I need to change to correct this? Thanks, Robert

This e-mail message (including any attachments) is for the sole use of the intended recipient(s) and may contain confidential and privileged information. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this message (including any attachments) is strictly prohibited. If you have received this message in error, please contact the sender by reply e-mail message and destroy all copies of the original message (including attachments).
[galaxy-dev] Running workflows on mulitple paired-end data sets
Hi, Is there an intelligent way to run a workflow that starts with more than one pair of paired-end datasets? I can run multiple workflows when there is a SINGLE input file, but in the case of many paired-end workflows, you need to provide two paired samples, and just adding another input workflow step does not work, since you can only select a single file in the second input workflow set. I realize that there is an issue with cycling through the datasets, since they need to be paired up and it would be very easy to mess this up if you allow two (or more) input datasets, but since paired-end data is going to be very common, I would assume a special paired-end input workflow step that is intelligent and can pair up the input datasets before handing them off to the workflow would be very useful. I can think of workarounds of course, such as creating an interlaced file containing both pairs and then just de-interlacing them in the workflow, but that is kludgy and will result in a lot of data duplication for no reason... I have to run 88 samples, which will soon grow to over 200 samples, so running each step manually is not really an option and I would hate to have to program the workflow steps myself... Any ideas? Thanks, Thon
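The interlacing workaround Thon mentions can be sketched in a few lines of Python: merge two paired-end FASTQ files into one interleaved file (read 1, its mate, read 2, its mate, ...). File names here are hypothetical, and the sketch assumes both files list their mates in the same order, as paired-end FASTQ files normally do.

```python
# Sketch of the interlacing workaround: interleave two paired-end FASTQ
# files into a single file. Assumes mates appear in the same order in
# both inputs; each FASTQ record is exactly four lines.
def interleave_fastq(path_r1, path_r2, path_out):
    with open(path_r1) as r1, open(path_r2) as r2, open(path_out, "w") as out:
        while True:
            rec1 = [r1.readline() for _ in range(4)]  # forward read record
            rec2 = [r2.readline() for _ in range(4)]  # its reverse mate
            if not rec1[0] or not rec2[0]:
                break  # one of the files is exhausted
            out.writelines(rec1)
            out.writelines(rec2)
```

A matching de-interlacing step inside the workflow would write alternate records back out to two files, which is exactly the data duplication Thon objects to.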
Re: [galaxy-dev] history box odd behaviour
OK. Just filed a bug report. Alex

-----Original Message----- From: Langhorst, Brad [mailto:langho...@neb.com] Sent: Monday, February 13, 2012 17:38 To: Bossers, Alex Cc: galaxy-dev@lists.bx.psu.edu Subject: Re: [galaxy-dev] history box odd behaviour

On Feb 13, 2012, at 11:22 AM, Bossers, Alex wrote: When I press the eye icon on the running BLAST history box, it drops an error in the mid panel that the XML is not there yet (XML parse error). But then the box changes to green while the BLAST is still running in the background (checked with htop). The subsequent parsers remain in the queue as well. Shouldn't the box remain yellow while processing? Or is something wrong with our datatype configurations... although I cannot remember that we have changed anything in there. Alex

Alex: I noticed this too but didn't take the time to file it as a bug - I think it is a bug. If something is wrong with your setup, it's also wrong with mine in the same way. brad

Brad Langhorst langho...@neb.com 978-380-7564