Re: [galaxy-dev] [galaxy-user] Why SGE needed for galaxy ?
Hi, Thanks for the answers. Actually I tried to understand CloudMan's role in Galaxy; do you have an architecture paper which I should read? For example, what I understood is that CloudMan runs a Python server and manages SGE through it via some Python scripts (maybe I am wrong). Our proposed installation is like this: users launch CloudMan from an OpenNebula cloud; when CloudMan is running they can add more nodes, which end up in the same private cloud. As you suggested, in this particular case I should have the same image both for the master (CloudMan) and the workers, am I right? Actually I built one image via https://bitbucket.org/galaxy/cloudman . Do you suggest that I should build the image from the CBL tree you mentioned below, which would have both CloudMan and Galaxy together? BR Zeeshan

On Feb 8, 2013, at 10:21 PM, Enis Afgan wrote: As far as actually building the image, the recommended method is to use the CloudBioLinux build scripts: https://github.com/chapmanb/cloudbiolinux There is a CloudMan flavor of CBL that allows you to build only the CloudMan- and Galaxy-required parts: https://github.com/chapmanb/cloudbiolinux/tree/master/contrib/flavor/cloudman

On Sat, Feb 9, 2013 at 12:24 AM, Dannon Baker dannonba...@me.com wrote: The workers don't need their own copy of Galaxy installed, but a shared filesystem is a requirement for Galaxy in any cluster environment -- see the Galaxy wiki for more: http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster . CloudMan handles managing NFS for you and sharing the galaxy/tools/index/data volumes. In order for workers to communicate with the master instance, they'll need the CloudMan installation as well, so you should use the same image. Now that I've answered that, I'm not sure I totally understand your proposed installation yet, but if you're suggesting bypassing CloudMan for installation on a private cloud, it should be possible.
You'd want the master instance up full time running as the Galaxy front end, dispatching jobs to a separate cluster managed by SGE/PBS/whatever. Basically the standard cluster configuration outlined in the wiki above, but you'd want your worker nodes automatically configured to mount the shared directories and join the PBS/SGE queue so they could handle jobs. Depending on what type of private cloud you're working with, it might be easier to just see if you can get CloudMan to work :) Lastly, I swapped this message to galaxy-dev since it's about installation nuts and bolts. -Dannon

On Feb 8, 2013, at 3:02 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Dear Enis, thanks for the reply; as a CloudMan developer it is good to see you on the list. Q2: On worker nodes do we need Galaxy installed with its shared directories, like galaxyindices and galaxydata? Q3: For a private cloud setup do you prefer to have a master image with CloudMan and Galaxy and use the same image for workers as well, or can worker images be a vanilla OS? BR Zeeshan

On Feb 7, 2013, at 11:50 PM, Enis Afgan wrote: Hi Zeeshan, In order to gain from the scalability of the cloud, SGE does need to run. However, CloudMan sets all that up and manages it going forward. Enis

On Fri, Feb 8, 2013 at 8:59 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Hi, It seems that CloudMan needs SGE for scaling. Is SGE also needed when running on a private cloud? Zeeshan

___ The Galaxy User list should be used for the discussion of Galaxy analysis and other features on the public server at usegalaxy.org. Please keep all replies on the list by using reply all in your mail client.
For discussion of local Galaxy instances and the Galaxy source code, please use the Galaxy Development list: http://lists.bx.psu.edu/listinfo/galaxy-dev To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
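Dannon's point about the shared filesystem can be made concrete with a minimal NFS sketch. The paths, subnet, and hostname below are illustrative placeholders, not what CloudMan actually configures -- CloudMan sets all of this up for you.

```shell
# /etc/exports on the master -- export the shared Galaxy tree
# (path and subnet are hypothetical)
/mnt/galaxy 10.0.0.0/24(rw,sync,no_root_squash)

# /etc/fstab on each worker -- mount the same tree at the same path,
# so paths written into job scripts by the master resolve identically
# on every node ("master" is a placeholder hostname)
master:/mnt/galaxy /mnt/galaxy nfs rw,hard,intr 0 0
```

The key constraint is only that the Galaxy application, tool, index, and data directories appear at identical paths on master and workers; how you achieve that (NFS, or something else) is up to the site.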
Re: [galaxy-dev] psiblast
On Tue, Feb 5, 2013 at 4:28 PM, Luobin Yang yangl...@isu.edu wrote: Hi, Peter, You are right, I thought there was already a datatype defined for PSSM, but it turns out there isn't one yet. I attached a draft tool configuration file for psiblast, based on the blastp configuration file; please revise as you like. The options that are specific to psiblast are the following:

1. num_iterations
2. pseudocount
3. inclusion_ethresh

Other options should be the same as for the blastp program, so a typical command line would be the same as blastp, except adding the above options if non-default values are used. I can see there are two optional output files: one is the PSSM and the other is an ASCII PSSM. Besides a typical query input, psiblast can take a PSSM file or a multiple sequence alignment file as the input. Luobin

Thanks Luobin, I've checked that into my development repository (on my tools branch): https://bitbucket.org/peterjc/galaxy-central/commits/f8f43f8494abdd228998d4e9fe67b0f2378494e0 That will allow me to track changes etc. as we work on the datatypes. Note I am not yet including ncbi_psiblast_wrapper.xml on the Galaxy Tool Shed. Regards, Peter
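For reference, the psiblast-specific options Luobin lists map onto a command line like this. This is only an illustrative sketch of a BLAST+ psiblast invocation -- the file names and option values are placeholders, not what the draft wrapper actually emits:

```shell
# Hypothetical psiblast run mirroring a blastp command line, plus the
# PSI-BLAST-specific options (filenames and values are placeholders)
psiblast -query proteins.fasta -db nr -evalue 0.001 -outfmt 6 \
         -num_iterations 3 -pseudocount 0 -inclusion_ethresh 0.002 \
         -out psiblast_hits.tabular \
         -out_pssm checkpoint.pssm -out_ascii_pssm checkpoint.ascii_pssm

# A later run can restart from the PSSM checkpoint instead of a query:
# psiblast -in_pssm checkpoint.pssm -db nr ...
```

The two optional outputs mentioned above correspond to -out_pssm and -out_ascii_pssm, and the alternative inputs to -in_pssm (PSSM) and -in_msa (multiple sequence alignment).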
Re: [galaxy-dev] run_functional_tests.sh -sid option
On Wed, Nov 3, 2010 at 11:48 AM, Peter pe...@maubp.freeserve.co.uk wrote: On Tue, Nov 2, 2010 at 1:57 PM, Nate Coraor n...@bx.psu.edu wrote: Peter wrote: Hi, I'm trying to use run_functional_tests.sh to run all the tests for a section (group of tools). I've read the output from:

./run_functional_tests.sh help

For example, from tool_conf.xml.sample we have:

<section name="ENCODE Tools" id="EncodeTools">
    <tool file="encode/gencode_partition.xml" />
    <tool file="encode/random_intervals.xml" />
</section>

And looking at these tools' XML files:

<tool id="gencode_partition1" name="Gencode Partition">

and:

<tool id="random_intervals1" name="Random Intervals">

I'd like to run the functional tests for the ENCODE tools. Using the switch for an individual tool id (-id) works:

./run_functional_tests.sh -id gencode_partition1
...
Ran 1 test in 22.302s
...

and so does this (well, it says the tool doesn't have any tests, which is true in this example):

./run_functional_tests.sh -id random_intervals1
...
Ran 1 test in 0.027s
...

However, I also tried using the section id switch (-sid):

./run_functional_tests.sh -sid EncodeTools
...
Ran 0 tests in 0.000s
...

I also tried using the section name:

./run_functional_tests.sh -sid ENCODE Tools

Is this (-sid not working) a known issue, or am I using it wrong?

As presently written it expects the name concatenated with the id, with spaces converted to dashes. You can see the list of valid sections by running 'python tool_list.py'. Everything after section:: is what would follow the '-sid' flag. --nate

Ah - that worked.
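The mangling Nate describes can be reproduced by hand. Judging from the working example later in this thread, spaces in the section name appear to become underscores, with the section id appended after a dash -- this is a sketch of the observed behaviour, not the actual tool_list.py code:

```shell
# Build the -sid value from a tool_conf section's name and id.
# Observed format: name (spaces replaced), then '-', then the id.
name="NCBI BLAST+"
id="ncbi_blast_plus_tools"
sid="$(printf '%s' "$name" | tr ' ' '_')-$id"
echo "$sid"   # NCBI_BLAST+-ncbi_blast_plus_tools
```

Running 'python tool_list.py' with no arguments remains the authoritative way to see the exact section strings.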
I don't really understand that design choice when you have a section id, but how about a clarification along these lines, since the current text doesn't make sense to me:

diff -r ae6c56ad5a49 run_functional_tests.sh
--- a/run_functional_tests.sh Wed Nov 03 11:42:24 2010 +
+++ b/run_functional_tests.sh Wed Nov 03 11:46:02 2010 +
@@ -10,7 +10,8 @@
     echo 'run_functional_tests.sh' for testing all the tools in functional directory
     echo 'run_functional_tests.sh aaa' for testing one test case of 'aaa' ('aaa' is the file name with path)
     echo 'run_functional_tests.sh -id bbb' for testing one tool with id 'bbb' ('bbb' is the tool id)
-    echo 'run_functional_tests.sh -sid ccc' for testing one section with sid 'ccc' ('ccc' is the string after 'section::')
+    echo 'run_functional_tests.sh -sid ccc' for testing one section with sid 'ccc' ('ccc' is the string after
+    echo 'section::' in the --list output, not just the id from tool_conf XML)
     echo 'run_functional_tests.sh -list' for listing all the tool ids
 elif [ $1 = '-id' ]; then
     python ./scripts/functional_tests.py -v functional.test_toolbox:TestForTool_$2 --with-nosehtml --html-report-file run_functional_tests.html

Thanks, Peter

Hi all, I'm running into a little trouble with the --sid option, something seems to have broken:

$ ./run_functional_tests.sh --sid NCBI_BLAST+-ncbi_blast_plus_tools
...
Usage: functional_tests.py [options]

functional_tests.py: error: no such option: --sid
functional_tests.py ERROR 2013-02-11 11:34:42,959 Failure running tests
Traceback (most recent call last):
  File "./scripts/functional_tests.py", line 430, in main
    test_config.configure( sys.argv )
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 249, in configure
    options, args = self._parseArgs(argv, cfg_files)
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 237, in _parseArgs
    return parser.parseArgsAndConfigFiles(argv[1:], cfg_files)
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 133, in parseArgsAndConfigFiles
    return self._parser.parse_args(args, values)
  File "/usr/lib64/python2.6/optparse.py", line 1396, in parse_args
    self.error(str(err))
  File "/usr/lib64/python2.6/optparse.py", line 1578, in error
    self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
  File "/usr/lib64/python2.6/optparse.py", line 1568, in exit
    sys.exit(status)
SystemExit: 2
...

Reading run_functional_tests.sh, it handles the --sid argument via tool_list.py; it calls this command, which seems to work:

$ python tool_list.py NCBI_BLAST+-ncbi_blast_plus_tools
functional.test_toolbox:TestForTool_ncbi_blastn_wrapper
functional.test_toolbox:TestForTool_ncbi_blastp_wrapper
functional.test_toolbox:TestForTool_ncbi_blastx_wrapper
functional.test_toolbox:TestForTool_ncbi_tblastn_wrapper
functional.test_toolbox:TestForTool_ncbi_tblastx_wrapper
functional.test_toolbox:TestForTool_blastxml_to_tabular

Before digging much further, does the --sid argument still work for others, or might it be just something on my setup which has gone awry? Thanks, Peter
[galaxy-dev] Bug Re: Data library not properly showing up
Hi all, For the problem below, it appears that the permissions were not set correctly. I have found that, in the Admin section, selecting multiple folders in a data library and trying to change their permissions does not work. I select the folders, and in the dropdown box below select 'Edit permissions'. Next, the folders get unchecked and a message appears on top: You must select at least one dataset. I can reproduce it every time. Galaxy changeset: 8525:a4113cc1cb5e

This is related to an enhancement request on Trello: Show roles associated with data libraries https://trello.com/card/show-roles-associated-with-data-libraries/506338ce32ae458f6d15e4b3/229

cheers Joachim

Joachim Jacob Rijvisschestraat 120, 9052 Zwijnaarde Tel: +32 9 244.66.34 Bioinformatics Training and Services (BITS) http://www.bits.vib.be @bitsatvib

On 02/08/2013 04:29 PM, Joachim Jacob |VIB| wrote: Hi all, The data library I have created is only showing up partially. I made a data library, via the Admin menu, with one folder in it. I changed permissions, because I wanted it to be visible only to me. That worked fine. Next I created plenty more folders in that data library. Now, the folders created afterwards are not visible in the data library when accessed via 'Shared data'. I have played a lot with changing the permissions of those libraries, but to no avail. Cheers, Joachim
[galaxy-dev] Environment variables reset after manually restarting Galaxy
Hello all, After a *reboot* of our Galaxy server, the environment variables are set correctly. However, after *restarting* the Galaxy process on a running server (by logging in as the galaxy user and running the init script on CentOS as service galaxyd restart), the environment variables seem to be messed up. After this manual Galaxy restart, some tools are not found, apparently caused by a modification of the environment variable PATH. Can somebody perhaps provide me with insight into what is causing this, and how to avoid it? My environment variables are set in /etc/profile.d/galaxy_environment_setup (which is a symbolic link to /home/galaxy/environment_setup).

Thanks, Joachim

-- Joachim Jacob Rijvisschestraat 120, 9052 Zwijnaarde Tel: +32 9 244.66.34 Bioinformatics Training and Services (BITS) http://www.bits.vib.be @bitsatvib
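A likely culprit worth checking: scripts in /etc/profile.d/ are only sourced by login shells, and an init script run via "service ... restart" typically runs in a non-login shell, so anything exported there never reaches the Galaxy process. A minimal, self-contained demonstration of that distinction (the file and variable names here are made up):

```shell
# Simulate a profile.d-style script that sets up the environment
profile_script=$(mktemp)
echo 'export DEMO_TOOL_DIR=/opt/tools/bin' > "$profile_script"

# A plain (non-login) shell never sources it -- analogous to an init
# script started by 'service galaxyd restart'
bash -c 'echo "${DEMO_TOOL_DIR:-unset}"'                   # prints: unset

# Sourcing it explicitly -- what a login shell does with /etc/profile.d
bash -c ". $profile_script && echo \"\$DEMO_TOOL_DIR\""    # prints: /opt/tools/bin

rm -f "$profile_script"
```

If this is the cause, one common fix is to source the environment file explicitly at the top of the init script, so the restart path no longer depends on a login shell.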
Re: [galaxy-dev] DustMasker tool for ncbi_blast_plus
On Fri, Feb 8, 2013 at 4:30 PM, Nicola Soranzo sora...@crs4.it wrote: On Wed, 06/02/2013 at 20.01 +0100, Nicola Soranzo wrote: Hi Peter, I added these file formats mostly as placeholders for a future implementation. Now I have changed the tool a bit by removing the acclist and seqloc_xml formats, since they are not recognized by the latest versions of dustmasker (I also sent an email to blast-h...@ncbi.nlm.nih.gov to inform them of this bug). As before, you can find the new version at: https://bitbucket.org/nsoranzo/ncbi_blast_plus I stripped the old commit and did a new one; not very good practice, sorry about that.

It seems to have confused the Bitbucket page a little, but I have checked your initial wrapper into my development repository (I use the tools branch): https://bitbucket.org/peterjc/galaxy-central/commits/2284d485e36f74f19b0dbe78709b098d9eba4ef6 Note I'm not going to include this in the Tool Shed release yet; we need to sort out the file format definitions first.

Hi Peter, I've added a new commit to this repo which updates the test output files to (recommended) BLAST 2.2.26+, since the functional tests were returning errors. Hope you find it useful.

Also applied to my branch, thank you - I'd forgotten to update that (but I intend at some point to refresh the test files and dependency install to use BLAST 2.2.27+ instead): https://bitbucket.org/peterjc/galaxy-central/commits/f1f912f63bb4174f434e3f47eac58f2cfa3753e6 Sadly I've not actually got the unit tests to run at all yet, see: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-February/013245.html Regards, Peter
Re: [galaxy-dev] run_functional_tests.sh -sid option
Peter, It looks like you have two dashes on your -sid option, whereas the parser expects just one. --Dave B.

On 2/11/13 06:56:03.000, Peter wrote: On Wed, Nov 3, 2010 at 11:48 AM, Peter pe...@maubp.freeserve.co.uk wrote: On Tue, Nov 2, 2010 at 1:57 PM, Nate Coraor n...@bx.psu.edu wrote: Peter wrote: Hi, I'm trying to use run_functional_tests.sh to run all the tests for a section (group of tools). I've read the output from:

./run_functional_tests.sh help

For example, from tool_conf.xml.sample we have:

<section name="ENCODE Tools" id="EncodeTools">
    <tool file="encode/gencode_partition.xml" />
    <tool file="encode/random_intervals.xml" />
</section>

And looking at these tools' XML files:

<tool id="gencode_partition1" name="Gencode Partition">

and:

<tool id="random_intervals1" name="Random Intervals">

I'd like to run the functional tests for the ENCODE tools. Using the switch for an individual tool id (-id) works:

./run_functional_tests.sh -id gencode_partition1
...
Ran 1 test in 22.302s
...

and so does this (well, it says the tool doesn't have any tests, which is true in this example):

./run_functional_tests.sh -id random_intervals1
...
Ran 1 test in 0.027s
...

However, I also tried using the section id switch (-sid):

./run_functional_tests.sh -sid EncodeTools
...
Ran 0 tests in 0.000s
...

I also tried using the section name:

./run_functional_tests.sh -sid ENCODE Tools

Is this (-sid not working) a known issue, or am I using it wrong?

As presently written it expects the name concatenated with the id, with spaces converted to dashes. You can see the list of valid sections by running 'python tool_list.py'. Everything after section:: is what would follow the '-sid' flag. --nate

Ah - that worked.
I don't really understand that design choice when you have a section id, but how about a clarification along these lines, since the current text doesn't make sense to me:

diff -r ae6c56ad5a49 run_functional_tests.sh
--- a/run_functional_tests.sh Wed Nov 03 11:42:24 2010 +
+++ b/run_functional_tests.sh Wed Nov 03 11:46:02 2010 +
@@ -10,7 +10,8 @@
     echo 'run_functional_tests.sh' for testing all the tools in functional directory
     echo 'run_functional_tests.sh aaa' for testing one test case of 'aaa' ('aaa' is the file name with path)
     echo 'run_functional_tests.sh -id bbb' for testing one tool with id 'bbb' ('bbb' is the tool id)
-    echo 'run_functional_tests.sh -sid ccc' for testing one section with sid 'ccc' ('ccc' is the string after 'section::')
+    echo 'run_functional_tests.sh -sid ccc' for testing one section with sid 'ccc' ('ccc' is the string after
+    echo 'section::' in the --list output, not just the id from tool_conf XML)
     echo 'run_functional_tests.sh -list' for listing all the tool ids
 elif [ $1 = '-id' ]; then
     python ./scripts/functional_tests.py -v functional.test_toolbox:TestForTool_$2 --with-nosehtml --html-report-file run_functional_tests.html

Thanks, Peter

Hi all, I'm running into a little trouble with the --sid option, something seems to have broken:

$ ./run_functional_tests.sh --sid NCBI_BLAST+-ncbi_blast_plus_tools
...
Usage: functional_tests.py [options]

functional_tests.py: error: no such option: --sid
functional_tests.py ERROR 2013-02-11 11:34:42,959 Failure running tests
Traceback (most recent call last):
  File "./scripts/functional_tests.py", line 430, in main
    test_config.configure( sys.argv )
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 249, in configure
    options, args = self._parseArgs(argv, cfg_files)
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 237, in _parseArgs
    return parser.parseArgsAndConfigFiles(argv[1:], cfg_files)
  File "/mnt/galaxy/galaxy-central/eggs/nose-0.11.1-py2.6.egg/nose/config.py", line 133, in parseArgsAndConfigFiles
    return self._parser.parse_args(args, values)
  File "/usr/lib64/python2.6/optparse.py", line 1396, in parse_args
    self.error(str(err))
  File "/usr/lib64/python2.6/optparse.py", line 1578, in error
    self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
  File "/usr/lib64/python2.6/optparse.py", line 1568, in exit
    sys.exit(status)
SystemExit: 2
...

Reading run_functional_tests.sh, it handles the --sid argument via tool_list.py; it calls this command, which seems to work:

$ python tool_list.py NCBI_BLAST+-ncbi_blast_plus_tools
functional.test_toolbox:TestForTool_ncbi_blastn_wrapper
functional.test_toolbox:TestForTool_ncbi_blastp_wrapper
functional.test_toolbox:TestForTool_ncbi_blastx_wrapper
functional.test_toolbox:TestForTool_ncbi_tblastn_wrapper
functional.test_toolbox:TestForTool_ncbi_tblastx_wrapper
functional.test_toolbox:TestForTool_blastxml_to_tabular

Before digging much further, does the --sid argument still work for others, or might it be just something on my setup which has gone awry? Thanks, Peter
Re: [galaxy-dev] run_functional_tests.sh -sid option
On Mon, Feb 11, 2013 at 1:54 PM, Dave Bouvier d...@bx.psu.edu wrote: Peter, It looks like you have two dashes on your -sid option, whereas the parser expects just one. --Dave B.

Doh, thank you!

$ ./run_functional_tests.sh -sid NCBI_BLAST+-ncbi_blast_plus_tools
...
FAILED (errors=2, failures=4)
...

Much obliged; the tests seem to run now (and I can sort out why some are failing for me). Peter
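For the archive, the failure mode is easy to reproduce in miniature: the wrapper script dispatches on an exact string match against its first argument, so --sid misses the -sid branch and everything is handed straight to functional_tests.py, whose optparse then rejects the unknown flag. A toy sketch of that dispatch pattern (hypothetical, not the actual run_functional_tests.sh code):

```shell
# Toy model of first-argument dispatch in a wrapper script
dispatch() {
    case "$1" in
        -id)  echo "tool mode: $2" ;;
        -sid) echo "section mode: $2" ;;
        *)    echo "fallthrough: args handed to functional_tests.py" ;;
    esac
}

dispatch -sid  NCBI_BLAST+-ncbi_blast_plus_tools   # matches the -sid branch
dispatch --sid NCBI_BLAST+-ncbi_blast_plus_tools   # falls through -> optparse error
```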
[galaxy-dev] Public ToolShed Problem - Latest Version isn't the tip repo
Hi Galaxy-dev team, I have a question with regard to the public Tool Shed. For some reason, the latest version of my tools didn't get picked as the tip of the repository when I was trying to install it on my cloud instance. I have read about some unexpected behaviours of the public Tool Shed if you use it like a git repo. Without knowing that in the first place, I had been modifying my tools on the public Tool Shed and had multiple uploads of some files. Is it possible that this is the reason why the latest version of my upload doesn't get picked as the tip? If so, how can I go about fixing it? (I was looking for a way to delete repositories, but there didn't seem to be one.) Thanks, Fei-Yang (Arthur) Jen

Student, Ontario Institute for Cancer Research MaRS Centre, South Tower 101 College Street, Suite 800 Toronto, Ontario, Canada M5G 0A3 Toll-free: 1-866-678-6427 Twitter: @OICR_news www.oicr.on.ca
Re: [galaxy-dev] Nice 'citable' URLs for Galaxy Tool Shed repositories
Hello Peter, In addition to the fixes I've commented on inline below, I've also added a new route for specified repository revisions. So the following citable URLs are now supported in the test tool shed. These fixes and enhancements will not be available on the main Galaxy tool shed until the next Galaxy release.

http://testtoolshed.g2.bx.psu.edu/view/peterjc/
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fasta_filter_by_id
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fasta_filter_by_id/66e2e0f16c36

On Feb 8, 2013, at 5:26 AM, Peter Cock wrote: On Sat, Feb 2, 2013 at 1:56 PM, Peter Cock p.j.a.c...@googlemail.com wrote: I've noticed one oddity, which is that if I go to one of the citable URLs like http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler and then browse away to another section/repository/etc., the URL in the browser's address bar does not update. You can be looking at repository B, but the address bar URL still says repository A. (This was one reason I stuck a redirect in my prototype.) Do you think this is going to be easy to fix, or should we revert to the redirect trick to avoid this 'stale' URL problem?

This behavior has been fixed in changeset revision 8802:7ccddea80a25, which is currently running on the test Galaxy tool shed.

Separately, but perhaps related, it would be nice if, via the search or otherwise, the new URLs were automatically used - that would be slightly easier than copying them from the text of the page.

This one is tricky and may have to wait until we eliminate the Galaxy iframes. If I can figure out a way to make this work before that, I certainly will. I've created a separate Trello card for this: https://trello.com/card/toolshed-nice-citable-urls-for-galaxy-tool-shed-repositories/506338ce32ae458f6d15e4b3/603 However, this is already functional enough to start sharing direct links.
Once this goes live, you'll have to brief whoever writes the new tool alerts for Twitter to use it :)

I see the new citable URLs are already in use on the wiki (but not yet working, as the live Tool Shed doesn't have this update yet): http://wiki.galaxyproject.org/ToolShedToolFeatures#Example_repositories_in_the_main_Galaxy_Tool_Shed_that_define_tool_dependencies

These are now working since the Galaxy release last Friday. Thanks! Greg Von Kuster
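The three route forms Greg lists share one pattern, so constructing a citable URL is just string assembly. A small sketch using the owner/repository/revision values from his examples:

```shell
# Assemble the three citable URL forms from their components
base="http://testtoolshed.g2.bx.psu.edu"
owner="peterjc"
repo="fasta_filter_by_id"
rev="66e2e0f16c36"

echo "$base/view/$owner"              # all of an owner's repositories
echo "$base/view/$owner/$repo"        # one repository (latest revision)
echo "$base/view/$owner/$repo/$rev"   # a specific changeset revision
```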
Re: [galaxy-dev] Nice 'citable' URLs for Galaxy Tool Shed repositories
On Mon, Feb 11, 2013 at 5:01 PM, Greg Von Kuster g...@bx.psu.edu wrote: Hello Peter, In addition to the fixes I've commented on inline below, I've also added a new route for specified repository revisions. So the following citable URLs are now supported in the test tool shed. These fixes and enhancements will not be available on the main Galaxy tool shed until the next Galaxy release.

http://testtoolshed.g2.bx.psu.edu/view/peterjc/
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fasta_filter_by_id
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fasta_filter_by_id/66e2e0f16c36

Lovely - by the way, the fasta_filter_by_id tool was replaced by this more general tool: http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id or in the main shed: http://toolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id

I've noticed one oddity, which is that if I go to one of the citable URLs like http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler and then browse away to another section/repository/etc., the URL in the browser's address bar does not update. You can be looking at repository B, but the address bar URL still says repository A. (This was one reason I stuck a redirect in my prototype.) Do you think this is going to be easy to fix, or should we revert to the redirect trick to avoid this 'stale' URL problem?

This behavior has been fixed in changeset revision 8802:7ccddea80a25, which is currently running on the test Galaxy tool shed.

Thanks

Separately, but perhaps related, it would be nice if, via the search or otherwise, the new URLs were automatically used - that would be slightly easier than copying them from the text of the page.

This one is tricky and may have to wait until we eliminate the Galaxy iframes. If I can figure out a way to make this work before that, I certainly will. I've created a separate Trello card for this: https://trello.com/card/toolshed-nice-citable-urls-for-galaxy-tool-shed-repositories/506338ce32ae458f6d15e4b3/603

Great.
However, this is already functional enough to start sharing direct links. Once this goes live, you'll have to brief whoever writes the new tool alerts for Twitter to use it :)

I look forward to seeing the first tweets linking straight to a tool shed repository :) In fact we can start using these URLs in links to dependencies on other repositories as well. Thanks Greg for doing this so promptly, Peter
Re: [galaxy-dev] Nice 'citable' URLs for Galaxy Tool Shed repositories
On Mon, Feb 11, 2013 at 5:16 PM, Peter Cock p.j.a.c...@googlemail.com wrote: However, this is already functional enough to start sharing direct links. Once this goes live, you'll have to brief whoever writes the new tool alerts for Twitter to use it :) I look forward to seeing the first tweets linking straight to a tool shed repository :)

I should have checked Twitter while writing that email; the first examples are out (note I have expanded the Twitter short URLs):

https://twitter.com/galaxyproject/status/301011276711211008 Now in Galaxy Tool Shed: blastxml_to_top_descr: Make table of top BLAST match descriptions http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr #usegalaxy

https://twitter.com/galaxyproject/status/301011636578316290 Galaxy Tool Shed now supports direct linking to tools, e.g. http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr http://toolshed.g2.bx.psu.edu/ #usegalaxy

Nice :) Peter
Re: [galaxy-dev] job output not returned from cluster
Hi all, I am using Galaxy on the cloud and I keep getting the following error: An error occurred running this job: Job output not returned from cluster. Any clues? thanks Alfonso
Re: [galaxy-dev] job output not returned from cluster
Hi Alfonso, Is this any particular tool that's failing? What does the state of your cloud cluster look like; are there any failures in the log (in the CloudMan interface)? And lastly, when writing about a new issue to the mailing list, please create a new email instead of replying to an unrelated thread. This will help us assist you and keep track of your individual issue instead of associating it with someone else's. -Dannon

On Feb 11, 2013, at 12:40 PM, Alfonso Garrido-Lecca alfonso.garrido-le...@colorado.edu wrote: Hi all, I am using Galaxy on the cloud and I keep getting the following error: An error occurred running this job: Job output not returned from cluster. Any clues? thanks Alfonso
Re: [galaxy-dev] [galaxy-user] Why SGE needed for galaxy ?
Here's a link to the architecture paper: http://onlinelibrary.wiley.com/doi/10.1002/cpe.1836/full

Building a CloudMan image from the repo you mention will not work - that repo is for CloudMan itself, while you need an image capable of running CloudMan. For that, you should use CBL. Also, here are some instructions about setting up CloudMan and Galaxy on an OpenNebula cloud: https://www.cloud.sara.nl/projects/mattiasdehollander-project/wiki (note that this mentions use of the mi-deployment set of scripts; since then, mi-deployment has been merged into CBL).

On Mon, Feb 11, 2013 at 8:21 PM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Hi, Thanks for the answers. Actually I tried to understand CloudMan's role in Galaxy; do you have an architecture paper which I should read? For example, what I understood is that CloudMan runs a Python server and manages SGE through it via some Python scripts (maybe I am wrong). Our proposed installation is like this: users launch CloudMan from an OpenNebula cloud; when CloudMan is running they can add more nodes, which end up in the same private cloud. As you suggested, in this particular case I should have the same image both for the master (CloudMan) and the workers, am I right? Actually I built one image via https://bitbucket.org/galaxy/cloudman . Do you suggest that I should build the image from the CBL tree you mentioned below, which would have both CloudMan and Galaxy together?
BR Zeeshan

On Feb 8, 2013, at 10:21 PM, Enis Afgan wrote: As far as actually building the image, the recommended method is to use the CloudBioLinux build scripts: https://github.com/chapmanb/cloudbiolinux There is a CloudMan flavor of CBL that allows you to build only the CloudMan- and Galaxy-required parts: https://github.com/chapmanb/cloudbiolinux/tree/master/contrib/flavor/cloudman

On Sat, Feb 9, 2013 at 12:24 AM, Dannon Baker dannonba...@me.com wrote: The workers don't need their own copy of Galaxy installed, but a shared filesystem is a requirement for Galaxy in any cluster environment -- see the Galaxy wiki for more: http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster . CloudMan handles managing NFS for you and sharing the galaxy/tools/index/data volumes. In order for workers to communicate with the master instance, they'll need the CloudMan installation as well, so you should use the same image. Now that I've answered that, I'm not sure I totally understand your proposed installation yet, but if you're suggesting bypassing CloudMan for installation on a private cloud, it should be possible. You'd want the master instance up full time running as the Galaxy front end, dispatching jobs to a separate cluster managed by SGE/PBS/whatever. Basically the standard cluster configuration outlined in the wiki above, but you'd want your worker nodes automatically configured to mount the shared directories and join the PBS/SGE queue so they could handle jobs. Depending on what type of private cloud you're working with, it might be easier to just see if you can get CloudMan to work :) Lastly, I swapped this message to galaxy-dev since it's about installation nuts and bolts. -Dannon

On Feb 8, 2013, at 3:02 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Dear Enis, thanks for the reply; as a CloudMan developer it is good to see you on the list. Q2: On worker nodes do we need Galaxy installed with its shared directories,
like galaxyindices , galaxydata Q3: For a private cloud setup do you prefare to have a master image with cloudman and galaxy and use the same image for workers as well ? or worker images can be vanilla OS ? BR Zeeshan On W6-Feb 7, 2013, at 11:50 PM, Enis Afgan wrote: Hi Zeeshan, In order to gain from the scalability of the cloud, SGE does need to run. However, CloudMan sets all that up and manages it going forward. Enis On Fri, Feb 8, 2013 at 8:59 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Hi, It seems that cloud man need SGE for scaling . Does SGE need also when run cloud on private cloud ? Zeeshan ___ The Galaxy User list should be used for the discussion of Galaxy analysis and other features on the public server at usegalaxy.org. Please keep all replies on the list by using reply all in your mail client. For discussion of local Galaxy instances and the Galaxy source code, please use the Galaxy Development list: http://lists.bx.psu.edu/listinfo/galaxy-dev To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/ ___ The Galaxy User list should be used for the discussion of Galaxy analysis and other features on the public server at usegalaxy.org. Please keep all replies on the list by using reply all in your mail client. For discussion of local Galaxy instances and the Galaxy
[galaxy-dev] Search for toolname from toolshed in workflow editor does not work?
Hi, It seems that the workflow editor does not know how to search for tools that are in the Tool Shed. The regular tool search in the main Analysis window has no problem finding "reorder Sam/BAM" in Picard if I search there, but the search in the workflow editor only seems to find tools that were installed outside the Tool Shed. Do I need to turn this on somewhere, or is this just a bug? Thanks Thon
Re: [galaxy-dev] Search for toolname from toolshed in workflow editor does not work?
This is indeed a bug, but fortunately there's a pull request that Björn Grüning submitted just recently that should fix it. It's on my list of things to test and incorporate soon. -Dannon
[galaxy-dev] report server failing - lib/galaxy/webapps/reports/config.py does not have attribute sentry_dsn
File '/home/galaxy/gx/prod/galaxy/database/compiled_templates/reports/base/base_panels.mako.py', line 68 in render_body
    __M_writer(unicode(self.javascripts()))
File '/home/galaxy/gx/prod/galaxy/database/compiled_templates/reports/webapps/reports/index.mako.py', line 104 in render_javascripts
    __M_writer(unicode(parent.javascripts()))
File '/home/galaxy/gx/prod/galaxy/database/compiled_templates/reports/base/base_panels.mako.py', line 301 in render_javascripts
    if app.config.sentry_dsn:
AttributeError: 'Configuration' object has no attribute 'sentry_dsn'

I patched in this as a temporary workaround:

$ hg diff -r 8182 lib/galaxy/webapps/reports/config.py
diff -r ec51a727a497 lib/galaxy/webapps/reports/config.py
--- a/lib/galaxy/webapps/reports/config.py      Thu Nov 01 23:25:39 2012 -0700
+++ b/lib/galaxy/webapps/reports/config.py      Mon Feb 11 19:06:30 2013 -0600
@@ -45,6 +45,8 @@
         global_conf_parser = ConfigParser.ConfigParser()
         if global_conf and __file__ in global_conf:
             global_conf_parser.read(global_conf['__file__'])
+        # Error logging with sentry
+        self.sentry_dsn = kwargs.get( 'sentry_dsn', None )
     def get( self, key, default ):
         return self.config_dict.get( key, default )
     def check( self ):
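For readers outside the Galaxy codebase, the essence of the patch above is the `kwargs.get()` defaulting pattern: an optional setting read this way yields None when the key is absent, so a later `if app.config.sentry_dsn:` check cannot raise AttributeError. A minimal, self-contained sketch (the class name mirrors the traceback; it is an illustrative stand-in, not Galaxy's actual Configuration class):

```python
# Minimal sketch of the defaulting pattern in the workaround above.
# Illustrative stand-in, not Galaxy's real Configuration class.
class Configuration(object):
    def __init__(self, **kwargs):
        # Optional Sentry DSN; a missing key defaults to None, so
        # later truthiness checks never raise AttributeError.
        self.sentry_dsn = kwargs.get('sentry_dsn', None)

cfg = Configuration()
print(cfg.sentry_dsn)  # None when the setting is absent
```

The same pattern applies to any config attribute that may be missing from the ini file: default it at construction time rather than guarding every access site.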
[galaxy-dev] Upload file : auto-detect based on file extension ?
Hi, We are using a proprietary file format in some of our tools. I successfully added a new data type, but what I would like to do is have auto-detect work when uploading a file, based only on the file's extension. My guess is that I have to override sniff() in the datatype class and test for the extension? Something like this: if file.endswith('.extension123'): ... But how do I get the original filename as it was uploaded? Thanks, -- David
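As a plain-Python illustration of the check David sketches (this is deliberately not tied to Galaxy's sniff() API - sniff() is typically handed a path to a temporary copy of the data, which is exactly why the original upload name is the open question here):

```python
# Illustrative extension check as proposed in the question above.
# '.extension123' is the placeholder extension from the original message.
def has_custom_extension(filename):
    """Return True if the given (original) upload name carries our extension."""
    return filename.lower().endswith('.extension123')

print(has_custom_extension('sample.extension123'))  # True
print(has_custom_extension('sample.bam'))           # False
```

Lower-casing first makes the check robust to uploads named with capitalized extensions.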
Re: [galaxy-dev] report server failing - lib/galaxy/webapps/reports/config.py does not have attribute sentry_dsn
Jim, Thanks for spotting that, I've pushed the fix in 8816:e5dcefc328bb. --Dave B.