[galaxy-dev] Fwd: tool restrict access
Hi, in order to restrict access to a tool to logged-in users, I'm trying to use Cheetah to edit the tool's XML config file. I was wondering whether a solution like the one below should work. Although no errors are reported while loading the tool, it does not perform the check on the email address and anonymous users still see the tool. Any ideas what's wrong with this solution? Cheers, I.

<?xml version="1.0"?>
<tool name="RSS site" id="rss1" tool_type="data_source">
    <description>RSS site</description>
#if $__user_email__ == ""
    <display>You are not authorized to use this tool</display>
#else
    <command interpreter="python">data_source.py $output $__app__.config.output_size_limit</command>
    <options sanitize="False" refresh="True"/>
#end if
</tool>

-------- Original Message --------
Subject: [galaxy-dev] tool restrict access
Date: Mon, 02 Jan 2012 18:36:53 +0100
From: Ivan Merelli ivan.mere...@itb.cnr.it
To: galaxy-dev@lists.bx.psu.edu

Hi, how can I restrict access to a Galaxy tool to a specific user in a login-free instance of Galaxy? I saw a suggestion in this post: http://gmod.827538.n3.nabble.com/Galaxy-Tool-permission-Access-td3348890.html but it's really a workaround; I was looking for a cleaner solution... Thanks, Ivan

___
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
Re: [galaxy-dev] Best practices with data on clusters
Hi Nate, On Jan 3, 2012, at 10:15 PM, Nate Coraor wrote: That said, if you have a lot of interim steps that produce large data that then get merged via some process back into final outputs, it absolutely makes sense to use local disk for those steps (assuming the local disk is large enough - another problem that we sometimes encounter). Wouldn't that mean that most workflows dealing with NGS data should run on local disks? d
/* Davide Cittaro, PhD
Head of Bioinformatics Core
Center for Translational Genomics and Bioinformatics
San Raffaele Scientific Institute
Via Olgettina 58, 20132 Milano, Italy
Office: +39 02 26439140
Mail: cittaro.dav...@hsr.it
Skype: daweonline */
Re: [galaxy-dev] tool_type=data_source_async
Does this functionality exist? If so, how do we get it working? Sorry to bump! On 1/3/12 8:50 AM, Matt Vincent matt.vinc...@jax.org wrote: ...I am using the most recent version of Galaxy.
Re: [galaxy-dev] Galaxy tool's error report
On Wed, Jan 4, 2012 at 5:24 AM, Timothy Wu 2hug...@gmail.com wrote: Hi, I'm executing an R script via Python's os.system() (using the Rscript executable, which allows running R scripts from the command line). The script uses a library that attempts to load a Tcl/Tk interface. Although I don't see anything when running the commands in Windows' interactive R console, during Galaxy's execution it attempts to connect to the display server (and fails because $DISPLAY is not set). The program runs just fine, since I get the output I wanted. But Galaxy sees an error, and I suspect it's because of this DISPLAY thing. I don't understand how Galaxy detects that something went wrong. If anything, I thought it's my Python script's return code that should matter. But how does Galaxy know? And how do I fix it? Timothy

Currently you must suppress any warning messages to stderr, see https://bitbucket.org/galaxy/galaxy-central/issue/325/ Peter
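A common workaround (until the linked issue is resolved) is to wrap the external command so that stderr is captured and only forwarded when the command actually fails. The wrapper below is an illustrative sketch, not part of Galaxy; the name `run_quiet` and the example Rscript invocation are assumptions:

```python
import subprocess
import sys

def run_quiet(cmd):
    """Run cmd, swallowing stderr unless the command fails.

    Galaxy currently treats any output on stderr as a tool error, so
    harmless warnings (like R's Tcl/Tk $DISPLAY complaints) must not
    reach stderr when the exit code is 0.
    """
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
    _, err = proc.communicate()
    if proc.returncode != 0:
        # Real failure: forward the captured stderr so Galaxy (and
        # the user) can see what went wrong.
        sys.stderr.write(err.decode())
    return proc.returncode

# e.g. run_quiet(["Rscript", "my_script.R", "in.txt", "out.txt"])
```

Using subprocess instead of os.system() also makes the exit code easy to propagate back to Galaxy with sys.exit().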
Re: [galaxy-dev] tool_type=data_source_async
Hi Matt, The asynchronous protocol should be working fine. Can you confirm that your flow is similar to:
- User starts at Galaxy and is sent to the external resource with a 'GALAXY_URL' parameter.
- User browses the external site and selects options, then sends data to Galaxy using the GALAXY_URL parameter, providing with it a 'URL' parameter that tells Galaxy where to inform the external site of the final GALAXY_URL.
- Galaxy contacts 'URL' with a new GALAXY_URL (the page content returned by accessing 'URL' should end with 'OK').
- When the data is ready, the external site contacts the new GALAXY_URL, providing 'URL' (which points to the final data) and 'STATUS' (which should be 'OK' when successful).
- The data is loaded into the Galaxy history.
If your tool flow follows this template but is not working, or if there are other problems, can you provide the log output from trying to use the tool and any other information that might be helpful? Please let us know if we can provide additional information. Thanks for using Galaxy, Dan

On Jan 3, 2012, at 8:50 AM, Matt Vincent wrote: Hello all, I am trying to configure an asynchronous tool (I can get it to work synchronously). My configuration looks something like this:

<?xml version="1.0"?>
<tool name="mytoolname" id="myunique_tool_id_1" tool_type="data_source_async">
    <description>mytool description</description>
    <command interpreter="python">data_source.py $output $__app__.config.output_size_limit</command>
    <inputs action="http://myurl" check_values="false" method="post">
        <display>Go to MyTool $GALAXY_URL</display>
    </inputs>
    <request_param_translation>
        <request_param galaxy_name="URL_method" remote_name="URL_method" missing="post" />
        <request_param galaxy_name="URL" remote_name="URL" missing="" />
        <request_param galaxy_name="jobname" remote_name="jobname" missing="N/A" />
    </request_param_translation>
    <uihints minwidth="800"/>
    <outputs>
        <data name="output" format="zip" />
    </outputs>
    <options sanitize="False" refresh="True"/>
</tool>

This works fine and downloads the data, but I was expecting Galaxy to post another GALAXY_URL parameter for me to generate the data and then post back to Galaxy once done. This is described here: http://wiki.g2.bx.psu.edu/Admin/Internals/Data%20Sources However, I never receive "another" GALAXY_URL as described in Step 1 of the "Asynchronous data depositing" section. I am using the most recent version of Galaxy. Can someone please show an example? Matt
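From the external site's point of view, the exchange Dan outlines can be sketched roughly like this (pseudocode; parameter names follow the wiki page cited above):

```
# Step 1: user arrives from Galaxy; queue the job, then respond to
#         Galaxy's request to your 'URL' endpoint (which carries a
#         *new* GALAXY_URL parameter) with page content ending in "OK".
# Step 2: when the data is ready, contact that new GALAXY_URL,
#         providing URL=<location of the final data> and STATUS=OK.
# Step 3: Galaxy fetches the data from URL and loads it into the
#         user's history.
```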
Re: [galaxy-dev] Problem with uploading SMALL file to Galaxy on EC2 and get client intended to send too large body
Thon, This is a problem with the default client_max_body_size option in nginx being set far too small in the nginx.conf on the cloud AMI. It'll be fixed with our next AMI update, but you could also SSH in to your instance, edit the nginx.conf to change client_max_body_size to something more appropriate for your needs, and restart the nginx process. If you'd rather not do that (you'd have to do it for every instance, unfortunately, since that volume is not persisted after shutdown), as a workaround the URL upload will work correctly with any size file if you're able to host the file you want to upload somewhere web-accessible. Thanks! Dannon

On Dec 9, 2011, at 3:08 PM, Thon deBoer wrote: I am trying to set up an instance of Galaxy on the EC2 cloud and everything seems to be going OK. Everything installs correctly and I can start Galaxy with no problems. But when I try to upload a SMALL BAM file (~5 MB), it just hangs there and never completes... When I look at error.log, I see the following error message:

2011/12/09 19:27:32 [error] 929#0: *1204 client intended to send too large body: 4719398 bytes, client: 173.195.189.92, server: localhost, request: "POST /tool_runner/index HTTP/1.1", host: "ec2-50-17-113-5.compute-1.amazonaws.com", referrer: "http://ec2-50-17-113-5.compute-1.amazonaws.com/tool_runner?tool_id=upload1"

It seems to have a problem with the size of the body... Anyone have an idea how to fix this? I am using ami-da58aab3. Regards, Thon de Boer, Ph.D. Bioinformatics Guru +1-650-799-6839 thondeb...@me.com LinkedIn Profile
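For anyone hitting this before the AMI update, the nginx directive in question looks like this (the 50m value is an illustrative choice, not the AMI's default; set it to whatever suits your uploads):

```
# in nginx.conf, inside the http { } (or server { }) block:
client_max_body_size 50m;
```

After editing, restart or reload the nginx process so the new limit takes effect.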
[galaxy-dev] How to define comment character for output files
Hello,
1. How do I define the character used at the beginning of a comment in an output file? I.e., if # is used at the start of a comment, Galaxy recognises this, but what if I want to use some other character? Is it possible to define this inside <outputs>?

<outputs>
    <data name="output" format="custom_format" label="${input.name}_mappedreads.yyy"></data>
    <!-- The_comment_character = @ -->
</outputs>

2. How does the "Number of comment lines" field in the file properties get populated? (Where does Galaxy get this information from when the auto-detect button is pressed?)
Regards, Sabry.
Re: [galaxy-dev] Problem password authentication failed for user postgres
On Jan 3, 2012, at 8:35 PM, Huayan Gao wrote: Thanks Nate! I also want to set up a mirror site of the Galaxy server in Asia, but found the instructions are not very clear/detailed. Do we have a more detailed one?

Hi Huayan, Please keep replies on the mailing list for the benefit and collaboration of the entire Galaxy community. Have you seen the documentation at: http://usegalaxy.org/production If so, and this is unclear, can you provide specifics about what you need to do that is not covered by the documentation? Thanks, --nate

Best, Huayan

On 4 Jan, 2012, at 4:16 AM, Nate Coraor wrote: On Dec 8, 2011, at 10:12 PM, Huayan Gao wrote: Dear all, I am trying to install Galaxy on my Mac and got the following error: "password authentication failed for user postgres". I installed Postgres using the dmg file and created a database called galaxydb under the /Library/PostgreSQL/9.1 folder. I am able to run the psql command. I also created a postgres user for it. I modified the universe_wsgi.ini file for the database connection. Could you help me figure out what I should do next to fix the problem?

Hi Huayan, Sorry for the delay in response. It probably isn't a good idea to use the 'postgres' superuser to run Galaxy. I'd suggest creating a new Postgres user (role) for Galaxy. The simplest thing to do if you're connecting with a unix socket (i.e. you're not specifying a hostname or IP address in database_connection) is to make the Postgres username match the username of the user running the Galaxy process, and to not require a password for that user. You may need the following in pg_hba.conf, if it's not already there:
local   all   all   ident
So, if I have a local system username of 'nate', I would create a postgres user named 'nate' with access to a database 'galaxy', and set database_connection:
database_connection = postgres:///galaxy
--nate

Here is part of the output when I run run.sh:
galaxy.model.migrate.check DEBUG 2011-12-09 11:02:30,748 psycopg2 egg successfully loaded for postgres dialect
Traceback (most recent call last):
  File "/Users/huayangao/galaxy-dist/lib/galaxy/web/buildapp.py", line 82, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/Users/huayangao/galaxy-dist/lib/galaxy/app.py", line 39, in __init__
    create_or_verify_database( db_url, kwargs.get( 'global_conf', {} ).get( '__file__', None ), self.config.database_engine_options )
  File "/Users/huayangao/galaxy-dist/lib/galaxy/model/migrate/check.py", line 54, in create_or_verify_database
    dataset_table = Table( "dataset", meta, autoload=True )
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/schema.py", line 108, in __call__
    return type.__call__(self, name, metadata, *args, **kwargs)
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/schema.py", line 236, in __init__
    _bind_or_error(metadata).reflecttable(self, include_columns=include_columns)
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py", line 1261, in reflecttable
    conn = self.contextual_connect()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/base.py", line 1229, in contextual_connect
    return self.Connection(self, self.pool.connect(), close_with_result=close_with_result, **kwargs)
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 142, in connect
    return _ConnectionFairy(self).checkout()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 304, in __init__
    rec = self._connection_record = pool.get()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 161, in get
    return self.do_get()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 639, in do_get
    con = self.create_connection()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 122, in create_connection
    return _ConnectionRecord(self)
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 198, in __init__
    self.connection = self.__connect()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/pool.py", line 261, in __connect
    connection = self.__pool._creator()
  File "/Users/huayangao/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.7.egg/sqlalchemy/engine/strategies.py", line 80, in connect
    raise exc.DBAPIError.instance(None, None, e)
OperationalError: (OperationalError) FATAL: password authentication failed for user "postgres" None None

Best, Huayan
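The setup Nate describes can be summarized in a few illustrative commands (the username 'nate' is his example; substitute the user that actually runs Galaxy):

```
# as the postgres superuser, create a matching role and database:
createuser nate
createdb -O nate galaxy

# in universe_wsgi.ini, connect over the unix socket (no host, no password):
database_connection = postgres:///galaxy

# in pg_hba.conf, allow ident-based local connections if not already present:
local   all   all   ident
```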
Re: [galaxy-dev] [galaxy-user] (OperationalError) unable to open database file
On Jan 4, 2012, at 4:40 AM, Cai Shaojiang wrote: Thanks, Nate, I have now switched to MySQL and the problem disappeared. The manual highly recommends PostgreSQL; is there any critical reason to use it instead of MySQL? I am just more familiar with MySQL. Thanks.

PostgreSQL support receives the most attention from the development team because it's what we use, and we've found that SQLAlchemy (the database abstraction layer used by Galaxy) seems to work better with PostgreSQL. --nate

On Wed, Jan 4, 2012 at 1:58 AM, Nate Coraor n...@bx.psu.edu wrote: On Dec 29, 2011, at 12:15 AM, Cai Shaojiang wrote: Dear friends, We are trying to install Galaxy on a server (Ubuntu 11), following the steps on the "Get Galaxy: Galaxy Download and Installation" page of the Galaxy wiki. But when we start it, it shows the following error message:

OperationalError: (OperationalError) unable to open database file u'INSERT INTO galaxy_session (create_time, update_time, user_id, remote_host, remote_addr, referer, current_history_id, session_key, is_valid, prev_session_id, disk_usage) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' ['2011-12-29 04:52:46.575702', '2011-12-29 04:52:46.575719', None, '10.50.70.45', '10.50.70.45', None, None, '5a38b2a7e6d77a7145726cb0881eadf6', 1, None, None]

It seems something is wrong with the write permission. Could you give any hint where the problem could be? Thanks.

Hi Cai, I've moved this over to the galaxy-dev list since it pertains to a local installation. Please make sure that the galaxy-dist/database/ directory is writable by the user running the Galaxy server. Also, if you plan to use this server for anything other than single-user development, I would suggest switching to a PostgreSQL server. This is trivial on Ubuntu (apt-get install postgresql, createuser/createdb, then edit universe_wsgi.ini as described at http://usegalaxy.org/production ). --nate

Best regards. Yours: Cai
--
Cai Shaojiang
Department of Information Systems, School of Computing, National University of Singapore
Tel: +65-65167355
Re: [galaxy-dev] Best practices with data on clusters
On Jan 4, 2012, at 6:03 AM, Cittaro Davide wrote: Hi Nate, On Jan 3, 2012, at 10:15 PM, Nate Coraor wrote: That said, if you have a lot of interim steps that produce large data that then get merged via some process back into final outputs, it absolutely makes sense to use local disk for those steps (assuming the local disk is large enough - another problem that we sometimes encounter). Wouldn't that mean that most workflows dealing with NGS data should run on local disks?

It depends on the location and ordering of the steps - if you're parallelizing single steps across multiple nodes, it wouldn't make sense. If you run multiple steps serially on a single node, then you could work locally between those steps. --nate

d
/* Davide Cittaro, PhD
Head of Bioinformatics Core
Center for Translational Genomics and Bioinformatics
San Raffaele Scientific Institute
Via Olgettina 58, 20132 Milano, Italy
Office: +39 02 26439140
Mail: cittaro.dav...@hsr.it
Skype: daweonline */
[galaxy-dev] load balancing and proxy settings with Apache
I'm at the final stage of deploying our Galaxy instance. I'm implementing the proxy server and load balancing with 1 job runner and 5 web runners using Apache, following the guide at: http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scaling All the Galaxy universe_wsgi.[webapp|runner].ini changes are fine. When editing the Apache configuration, I discovered a slight problem. First off, instead of modifying /etc/httpd/conf/httpd.conf, I created /etc/httpd/conf.d/galaxy.conf; httpd.conf reads /etc/httpd/conf.d/*.conf for additional settings, and I thought it best to put changes there instead of polluting httpd.conf. One of the last changes the docs make to Apache's rewriting rules is to change
RewriteRule ^/galaxy(.*) http://localhost:8080$1 [P]
to
RewriteRule ^(.*) balancer://galaxy$1 [P]
The problem with this, however, is that http://machine/galaxy no longer works - only http://machine/galaxy/ (note the trailing slash). Without the trailing slash, HTTP requests never get past Apache's rewrite rules. I *think* the correct change should be:
RewriteRule ^/galaxy(.*) balancer://galaxy$1 [P]
This seems to work for me (so far) and allows initial requests both with and without the trailing slash. Can I recommend this as a change to the docs? Also, can I recommend documenting a separate galaxy.conf for Apache? I'd be happy to provide mine as a model, if you'd like. Ryan
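For context, a separate galaxy.conf for a balanced setup along these lines might look something like the sketch below; the ports and number of balancer members are illustrative assumptions, not taken from the message above:

```
# /etc/httpd/conf.d/galaxy.conf (illustrative)
<Proxy balancer://galaxy>
    BalancerMember http://localhost:8080
    BalancerMember http://localhost:8081
    BalancerMember http://localhost:8082
</Proxy>

RewriteEngine on
# Proxy /galaxy requests (with or without the trailing slash)
# to the balancer pool:
RewriteRule ^/galaxy(.*) balancer://galaxy$1 [P]
```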
Re: [galaxy-dev] galaxy command line
Usha: Galaxy is essentially a wrapper around other command line tools. So, while you could probably extract some of its pieces to run on the command line, I don't know why you would want to. brad
--
Brad Langhorst
New England Biolabs
langho...@neb.com
From: Usha Reddy usha.reddy...@gmail.com
Date: Tue, 3 Jan 2012 13:07:25 -0500
To: galaxy-...@bx.psu.edu
Subject: [galaxy-dev] galaxy command line
Galaxy is a web based platform. Can Galaxy be run as a command line tool? Thanks, Usha
[galaxy-dev] Empty files when importing with no copy from NFS source
Hi, While trying to import some fastq files into a data library using "Upload directory of files" and "Link to files without copying into Galaxy", I end up with empty datasets. My data files are accessed through NFS and I'm using the recommended option -noac: 10.2.90.89:/projects on /local/projects type nfs (rw,noac,sloppy,addr=10.2.90.89) To import the files, I created links to them in the directory configured in universe_wsgi.ini. If I try to import the same files with the option "Copy files into Galaxy", the files are imported correctly. Any advice on what to look for? Thanks, Carlos
Re: [galaxy-dev] galaxy command line
On Wed, Jan 4, 2012 at 12:04 PM, Langhorst, Brad langho...@neb.com wrote: Usha: Galaxy is essentially a wrapper around other command line tools. So, while you could probably extract some of its pieces to run on the command line, I don't know why you would want to. I would argue it would be beneficial to invoke pipelines from the command line instead of being forced to use the web-based interface. The command line is beneficial for large numbers of datasets that need to be analyzed. Ryan
Re: [galaxy-dev] galaxy command line
On Wed, Jan 4, 2012 at 2:25 PM, Ryan ngsbioinformat...@gmail.com wrote: On Wed, Jan 4, 2012 at 12:04 PM, Langhorst, Brad langho...@neb.com wrote: Usha: Galaxy is essentially a wrapper around other command line tools. So, while you could probably extract some of its pieces to run on the command line, I don't know why you would want to. I would argue it would be beneficial to invoke pipelines from the command line instead of being forced to use the web-based interface. The command line is beneficial for large numbers of datasets that need to be analyzed. Ryan

I think Ryan is right, and I think that's exactly the niche for the API. Usha, you could take a look at: http://wiki.g2.bx.psu.edu/Learn/API It seems the documentation is very limited, but it might help you see whether what you want is already possible. Regards, Carlos
Re: [galaxy-dev] galaxy command line
Ryan: I didn't understand Usha's question the same way you did. I agree that it's useful to run workflows in a more automated way. I have not run workflows using the Galaxy API yet, but I did see some documentation on it and plan to try it soon. http://wiki.g2.bx.psu.edu/Learn/API/Examples Maybe someone with more expertise has something more to say about this. Brad
--
Brad Langhorst
New England Biolabs
langho...@neb.com
From: Ryan ngsbioinformat...@gmail.com
Date: Wed, 4 Jan 2012 14:25:44 -0500
To: Brad Langhorst langho...@neb.com
Cc: Usha Reddy usha.reddy...@gmail.com, galaxy-...@bx.psu.edu
Subject: Re: [galaxy-dev] galaxy command line
On Wed, Jan 4, 2012 at 12:04 PM, Langhorst, Brad langho...@neb.com wrote: Usha: Galaxy is essentially a wrapper around other command line tools. So, while you could probably extract some of its pieces to run on the command line, I don't know why you would want to. I would argue it would be beneficial to invoke pipelines from the command line instead of being forced to use the web-based interface. The command line is beneficial for large numbers of datasets that need to be analyzed. Ryan
[galaxy-dev] How to import data from a non-local source using SCP?
I have a cluster (cluster A) set up with Galaxy. Our sequencing data gets mapped to hg19, then the resulting BAM files are placed on a SAN connected to a different cluster (cluster B) that cluster A does not have NFS access to. We cannot install an FTP server on cluster B either. The only way to get data from cluster B to cluster A is to use scp. Is there a way to set up a Data Library in Galaxy on cluster A that refers to non-local data and transfers the data from cluster B when needed? Or is it possible to have a Galaxy instance on cluster B share data with a Galaxy instance on cluster A?
Re: [galaxy-dev] Galaxy Server on Bio-Linux
On Tue, Jan 3, 2012 at 11:36 AM, Nate Coraor n...@bx.psu.edu wrote: On Dec 20, 2011, at 1:09 PM, Bicak, Mesude wrote: Dear Galaxy Developers, We work in Professor Dawn Field's group (Molecular Evolution and Bioinformatics Research Group) at the NERC Environmental Bioinformatics Centre (NEBC) of the Centre for Ecology and Hydrology (CEH) Research Institute, based in Oxford. We develop and distribute Bio-Linux (http://nebc.nerc.ac.uk/tools/bio-linux), a customised Ubuntu distribution that comes with 500+ bioinformatics packages. Within our research group we also provide bioinformatics analysis for NERC-funded researchers, and recently started looking into Galaxy as well. It didn't take us long to discover its power, and we would like to enable Bio-Linux users to install, run and maintain the Galaxy server with minimal effort, also with the aim of spreading the word on Galaxy in Europe! Recently we took on a project, with Dr. Casey Bergman from the University of Manchester as the Principal Investigator, to package all the necessary Galaxy dependencies for Ubuntu/Bio-Linux. As many prerequisites are already included with Bio-Linux, we are already some way down this path. New packages that we create will appear in a Launchpad PPA (https://launchpad.net/~nebc/+archive/galaxy). We will be happy to hear any comments regarding these efforts, and we hope that this will be a useful resource for all Galaxy users. Once the initial packaging is done, we hope to collaborate with the Galaxy team in maintaining and improving this resource. Best wishes, Tim, Soon and Mesude

Hi Mesude, This is fantastic, thanks for letting us know, and please do post up if there is anything we can help with. Also, if you're not aware, Galaxy's cloud offering is built on CloudBioLinux, which itself is built on Bio-Linux.
--nate

This is very good news. I'm personally a big fan of avoiding, as much as possible, installing from source on production systems, as things can easily get out of hand when trying to keep everything updated. Mesude, I recently joined Debian Med (http://www.debian.org/devel/debian-med/ and https://launchpad.net/~debian-med), which has the exact same goal of getting as many tools as possible packaged for Debian/Ubuntu. I see Tim Booth is an active member in Debian Med, and I was wondering if your group is planning to keep the development of these packages in the git/svn repositories of Debian Med. I wouldn't want to duplicate your efforts. I like very much the use of PPAs while waiting for packages to be officially included in Debian and trickle down to Ubuntu, a process that can be somewhat slow at times. Just today I uploaded Tophat's package, being developed at Debian Med, to the PPA. I'm pretty sure it needs more work, but you might be interested in taking a look. Kind regards, Carlos
Re: [galaxy-dev] galaxy command line
On Jan 4, 2012, at 2:34 PM, Langhorst, Brad wrote: Ryan: I didn't understand Usha's question the same way you did. I agree that it's useful to run workflows in a more automated way. I have not run workflows using the Galaxy API yet, but I did see some documentation on it and plan to try it soon. http://wiki.g2.bx.psu.edu/Learn/API/Examples Maybe someone with more expertise has something more to say about this. Brad

Hi All, It is indeed possible to run workflows from the command line via the API. Have a look at the sample script galaxy-dist/scripts/api/workflow_execute.py to see how it's done. Sorry for the lack of documentation, although this is finally in progress. --nate
--
Brad Langhorst
New England Biolabs
langho...@neb.com
From: Ryan ngsbioinformat...@gmail.com
Date: Wed, 4 Jan 2012 14:25:44 -0500
To: Brad Langhorst langho...@neb.com
Cc: Usha Reddy usha.reddy...@gmail.com, galaxy-...@bx.psu.edu
Subject: Re: [galaxy-dev] galaxy command line
On Wed, Jan 4, 2012 at 12:04 PM, Langhorst, Brad langho...@neb.com wrote: Usha: Galaxy is essentially a wrapper around other command line tools. So, while you could probably extract some of its pieces to run on the command line, I don't know why you would want to. I would argue it would be beneficial to invoke pipelines from the command line instead of being forced to use the web-based interface. The command line is beneficial for large numbers of datasets that need to be analyzed. Ryan
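For reference, an invocation of that sample script might look something like the sketch below; the exact positional arguments are documented in the script itself, and the key, URL, and IDs here are placeholders:

```
python scripts/api/workflow_execute.py <api_key> \
    http://localhost:8080/api/workflows \
    <workflow_id> '<history name>' '<step_id>=ldda=<dataset_id>'
```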
[galaxy-dev] Status on importing BAM file into Library does not update
I'm adding Data Libraries to my local Galaxy instance. I'm doing this by importing directories that contain bam and bai files. I see the bam/bai files get added on the admin page, and the message is "This job is running." qstat shows the job run and complete. I checked my runner0.log and it registers that the PBS job completed successfully. But the web page never updates. I tried to refresh the page by navigating away from it and then back to it, but it still reads "This job is running." How do I fix this?
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Wed, Jan 4, 2012 at 5:17 PM, Ryan Golhar ngsbioinformat...@gmail.com wrote:

> I'm adding Data Libraries to my local Galaxy instance by importing directories that contain bam and bai files. I see the bam/bai files get added on the admin page and the message reads "This job is running." qstat shows the job run and complete. I checked my runner0.log and it registers that the PBS job completed successfully. But the web page never updates. I tried to refresh the page by navigating away from it and back, but it still reads "This job is running." How do I fix this?

Some more information... I checked my head node and I see samtools running there. It's running 'samtools index'. So, two problems: 1) samtools is not using the cluster; I assume this is a configuration setting somewhere. 2) Why is Galaxy trying to index the BAM files if the .bai files already exist in the same directory as the BAM files? The BAM files are sorted and have 'SO:coordinate'. I also have samtools-0.1.18 installed.
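On problem 1, per-tool job runners in this era of Galaxy were set in universe_wsgi.ini. The sketch below is a hypothetical excerpt: the section name `[galaxy:tool_runners]` is real, but the idea that the `upload1` tool id covers library imports (and thus the metadata/indexing step that invokes samtools) is an assumption to verify against your own config and tool ids.

```ini
# Hypothetical universe_wsgi.ini excerpt -- verify tool ids before use.
[galaxy:tool_runners]
# Send tools without an explicit entry to the cluster by default.
default = pbs:///
# The upload tool performs the post-upload metadata step (samtools index);
# routing it to PBS would keep that work off the head node.
upload1 = pbs:///
```

If `upload1` jobs still run locally after a restart, the indexing may be happening in Galaxy's separate set-metadata step rather than the tool runner, which would explain samtools appearing on the head node regardless.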
Re: [galaxy-dev] Empty files when importing with no copy from NFS source
On Wed, Jan 4, 2012 at 12:59 PM, Carlos Borroto carlos.borr...@gmail.com wrote:

> Hi, while trying to import some fastq files into a data library, using "Upload directory of files" and "Link to files without copying into Galaxy", I end up with empty datasets. My data files are accessed through NFS and I'm using the recommended option -noac: 10.2.90.89:/projects on /local/projects type nfs (rw,noac,sloppy,addr=10.2.90.89) To import the files, I created links to them in the directory configured in universe_wsgi.ini. If I try to import the same files with the option "Copy files into Galaxy", the files are imported correctly.

I kept testing to find the reason for this issue, and although I still can't find a solution, I can say that under the exact same conditions galaxy-dist imports the files correctly while galaxy-central does not. Thanks, Carlos
Re: [galaxy-dev] Status on importing BAM file into Library does not update
On Wed, Jan 4, 2012 at 5:17 PM, Ryan Golhar ngsbioinformat...@gmail.com wrote:

> I'm adding Data Libraries to my local Galaxy instance by importing directories that contain bam and bai files. [...] Some more information... I checked my head node and I see samtools running 'samtools index' there. So, two problems: 1) samtools is not using the cluster; I assume this is a configuration setting somewhere. 2) Why is Galaxy trying to index the BAM files if the .bai files already exist in the same directory as the BAM files? The BAM files are sorted and have 'SO:coordinate'. I also have samtools-0.1.18 installed.

It also appears: 3) Galaxy is unable to import .bai files. It says there was an error importing these files: "The uploaded binary file contains inappropriate content". 4) Galaxy is trying to change the permissions on the files I'm importing (as links). Thankfully the data tree is read-only. If I'm linking Galaxy to my data, why does Galaxy want to change the permissions? This seems like something it shouldn't be doing, i.e. Galaxy should leave external data alone.
[galaxy-dev] Galaxy Proxy by an External Apache Server
Hi guys, I am trying to set up a new Galaxy instance for my institute. Galaxy itself runs fine on our private network; now I am trying to configure the proxy so that it can be accessed publicly. The official guide, http://wiki.g2.bx.psu.edu/Admin/Config/Apache%20Proxy, seems to assume Galaxy and Apache run on the same server, but that is not the case for us: we have a dedicated Apache server running all public sites. I still tried to follow the recommended rewrite rules, modifying the URL paths. We installed Galaxy in an NFS location which can be accessed from the Apache server:

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^/galaxy$ /galaxy/ [R]
        RewriteRule ^/galaxy/static/style/(.*) /NFS/PATH/TO/galaxy_dist/static/june_2007_style/blue/$1 [L]
        RewriteRule ^/galaxy/static/scripts/(.*) /NFS/PATH/TO/galaxy_dist/static/scripts/packed/$1 [L]
        RewriteRule ^/galaxy/static/(.*) /NFS/PATH/TO/galaxy_dist/static/$1 [L]
        RewriteRule ^/galaxy/favicon.ico /NFS/PATH/TO/galaxy_dist/static/favicon.ico [L]
        RewriteRule ^/galaxy/robots.txt /NFS/PATH/TO/galaxy_dist/static/robots.txt [L]
        RewriteRule ^/galaxy(.*) http://galaxy.privatenet.org:8080$1 [P]
    </IfModule>

When I tested http://www.publicnet.org/galaxy, it gave a 404 error; the log shows the error was caused by /var/www/galaxy not being found. I then created a symlink /var/www/galaxy pointing to /NFS/PATH/TO/galaxy_dist; this time the URL showed the contents of the entire galaxy_dist directory. I would very much appreciate it if anyone could point out which part went wrong. Kind regards, Derrick
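One thing worth checking in setups like Derrick's: the static-file rules substitute filesystem paths, so Apache must be able to map and serve those NFS paths itself (DocumentRoot mapping plus a permitting Directory block), which is fragile when Apache and Galaxy live on different hosts. A simpler sketch, under the assumption that you can tolerate Galaxy serving its own static content through the proxy, is to drop the filesystem rules entirely and proxy everything. Hostnames below are the placeholders from the original post, and mod_proxy must be enabled for the [P] flag to work:

```apache
# Hypothetical minimal alternative: proxy all /galaxy traffic to the
# backend host and let Galaxy serve its own static files. Requires
# mod_rewrite and mod_proxy; hostnames are placeholders.
<IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteRule ^/galaxy$ /galaxy/ [R]
    RewriteRule ^/galaxy(.*) http://galaxy.privatenet.org:8080$1 [P]
</IfModule>
```

This trades a little static-file performance for not needing the NFS mount or the /var/www/galaxy symlink on the Apache host at all; once it works, the per-path static rules can be reintroduced one at a time as an optimization.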