Re: [galaxy-dev] BLAST only using single core
Thanks Peter, that was the hint I needed. It looks like the solution for my LSF cluster was to use drmaa://-n 8/ in the universe_wsgi.ini, and now I can see BLAST using 8 cores. It doesn't scale perfectly, so going higher isn't sensible, but turnaround time for my test job has improved markedly.

Shane

Dr. Shane Sturrock
NZGL BioIT Admin
n...@biomatters.com

___
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at: http://galaxyproject.org/search/mailinglists/
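[Editor's note: a minimal sketch of the relevant legacy config line, assuming the pre-job_conf.xml runner-URL syntax in which everything between drmaa:// and the trailing slash is passed to the DRM as the native specification; section name and surrounding lines are illustrative, not a complete file.]

```ini
# universe_wsgi.ini (legacy Galaxy config) -- sketch only
[app:main]
# For LSF, -n 8 requests 8 slots for each job submitted via DRMAA;
# the text between drmaa:// and the final / is the native specification.
default_cluster_job_runner = drmaa://-n 8/
```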
Re: [galaxy-dev] BLAST only using single core
Hi Peter,

> On 21/07/2016, at 8:50 PM, Peter Cock <p.j.a.c...@googlemail.com> wrote:
>
> Hi Shane,
>
> We've not touched anything on the BLAST+ wrapper here for
> a while - the command line is always built using:
>
> -num_threads "\${GALAXY_SLOTS:-8}"
>
> That means use the environment variable $GALAXY_SLOTS
> if set, defaulting to 8 threads if not. See:
>
> https://github.com/peterjc/galaxy_blast/blob/master/tools/ncbi_blast_plus/ncbi_macros.xml#L352
>
> Galaxy itself should be setting $GALAXY_SLOTS via your
> cluster configuration - and from your description this seems
> to be set to 1 thread/slot only.

I suspect that since I'm running an old configuration, my cluster setup isn't optimal. I didn't really want my users going nuts since our cluster is small (6 nodes, 16 cores each), so I wasn't keen to give them too much power and was happy for tools to run on single cores. I set things up as per the docs way back when, so it uses multiple handlers and so on, with default_cluster_job_runner = drmaa:/// in the galaxy.ini (universe_wsgi.ini actually).

> Can you tell us more about how your Galaxy LSF is setup?
> Have you got BLAST specific cluster job settings?

Nothing specific, but it would seem that the GALAXY_SLOTS variable has gone from being unset to set with the install I've got now, and that would explain why BLAST has slowed down a lot.

> (I don't use LSF so don't have the details to hand)

I'm just using the drmaa plugin, so I'm guessing I need to specify a job runner for BLAST which sets GALAXY_SLOTS. The documentation isn't entirely clear, although it seems I need to create a job_conf.xml which sets local_slots to a value for the drmaa runner, unless there's a way to set that in the galaxy.ini file instead.
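[Editor's note: the construct Peter quotes is plain POSIX shell parameter expansion with a default, easy to check locally; nothing below is Galaxy-specific.]

```shell
# Shell parameter expansion with a default, as used by the BLAST+ wrapper.
# ${VAR:-fallback} expands to $VAR if it is set and non-empty,
# otherwise to the fallback value.

unset GALAXY_SLOTS
echo "-num_threads ${GALAXY_SLOTS:-8}"    # prints: -num_threads 8

export GALAXY_SLOTS=16
echo "-num_threads ${GALAXY_SLOTS:-8}"    # prints: -num_threads 16
```

So a job that runs without the cluster setting GALAXY_SLOTS silently falls back to the wrapper's default, while a cluster that sets it to 1 forces single-threaded BLAST.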
Shane

> Peter
>
> On Wed, Jul 20, 2016 at 10:03 PM, Shane Sturrock <sh...@biomatters.com> wrote:
>> Previously, the BLAST wrapper was able to use multiple cores, but recently
>> users have started complaining it has got really slow, and when I look at the
>> cluster a job is only using a single core. I don't want jobs split across
>> multiple cluster nodes, but each node has 16 cores so it would be good if
>> they could be used. I'm still using the older 16.01 release and this has
>> been upgraded repeatedly over the last few years, so I'm still using the
>> universe_wsgi.ini file (symlinked to config/galaxy.ini) and I don't have a
>> job_conf.xml set up. I'm using drmaa to drive an LSF cluster. BLAST is the
>> main issue here, so I was wondering if there's a way to pass the -num_threads
>> flag without breaking everything until I can build up a new server?
>>
>> Dr. Shane Sturrock
>> NZGL BioIT Admin
>> n...@biomatters.com

Dr. Shane Sturrock
NZGL BioIT Admin
n...@biomatters.com
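[Editor's note: a sketch of the job_conf.xml route Shane describes, routing the BLAST+ tools to a multi-slot DRMAA destination. The destination ids, the exact LSF nativeSpecification, and the tool ids are assumptions to check against your own cluster and installed tool suite; Galaxy derives GALAXY_SLOTS from the slot allocation the DRM grants.]

```xml
<?xml version="1.0"?>
<!-- Sketch of a config/job_conf.xml for the setup in this thread. -->
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <destinations default="lsf_default">
        <destination id="lsf_default" runner="drmaa"/>
        <destination id="lsf_8core" runner="drmaa">
            <!-- -n 8 asks LSF for 8 slots per job -->
            <param id="nativeSpecification">-n 8</param>
        </destination>
    </destinations>
    <tools>
        <!-- route the BLAST+ wrappers to the multi-slot destination -->
        <tool id="ncbi_blastn_wrapper" destination="lsf_8core"/>
        <tool id="ncbi_blastp_wrapper" destination="lsf_8core"/>
    </tools>
</job_conf>
```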
[galaxy-dev] BLAST only using single core
Previously, the BLAST wrapper was able to use multiple cores, but recently users have started complaining it has got really slow, and when I look at the cluster a job is only using a single core. I don't want jobs split across multiple cluster nodes, but each node has 16 cores so it would be good if they could be used. I'm still using the older 16.01 release and this has been upgraded repeatedly over the last few years, so I'm still using the universe_wsgi.ini file (symlinked to config/galaxy.ini) and I don't have a job_conf.xml set up. I'm using drmaa to drive an LSF cluster. BLAST is the main issue here, so I was wondering if there's a way to pass the -num_threads flag without breaking everything until I can build up a new server?

Dr. Shane Sturrock
NZGL BioIT Admin
n...@biomatters.com
[galaxy-dev] Mothur count.seqs
I've had a report from my users that the count.seqs function in Mothur doesn't work and just produces an empty table. This is also what I'm seeing when testing it, both on the current July 2015 distribution and on my backup server still running the May 2015 version. I've attached a test set; using the latest Mothur installation and running count.seqs on this seems to work according to the logs, but the output file is empty. Datatype needs to be set to names when this is imported, of course. I would like to get this working because at the moment my users have to go back to the CLI version to do their work.

Shane

Dr. Shane Sturrock
NZGL BioIT Admin
n...@biomatters.com

[Attachment: smallSet.names (binary data)]
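[Editor's note: for anyone verifying results by hand, a Mothur names file maps each representative sequence to a comma-separated list of the identical reads it stands for (including itself), and count.seqs essentially tallies those lists into a count table. A minimal sketch of that tally, as an illustration of the expected output, not Mothur's actual code:]

```python
# Tally a Mothur-style names file into a simple {representative: total} table.
# Names file format per line: representative name, a tab, then a
# comma-separated list of all reads that representative stands for.

def names_to_counts(lines):
    counts = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        rep, dups = line.split("\t")
        counts[rep] = len(dups.split(","))
    return counts

example = [
    "seqA\tseqA,seq1,seq2",   # seqA represents 3 reads
    "seqB\tseqB",             # seqB is unique
]
print(names_to_counts(example))  # {'seqA': 3, 'seqB': 1}
```

If the wrapper's output is empty while the logs look clean, comparing against a hand tally like this on the small test set quickly shows whether the tool or the datatype handling is at fault.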
[galaxy-dev] Problem with set_user_disk_usage.py script
I recently updated our server to the latest release and have just had a user who ran up a lot of space and isn't seeing it come back when she purges her data. I've tried using the set_user_disk_usage.py script as I've previously done to resolve this, but now it is failing with the following:

(galaxy_env)[galaxy@galaxy scripts]$ ./set_user_disk_usage.py
Loading Galaxy model...
Traceback (most recent call last):
  File "./set_user_disk_usage.py", line 85, in <module>
    model, object_store, engine = init()
  File "./set_user_disk_usage.py", line 43, in init
    for key, value in config_parser.items( "app:main" ):
  File "/usr/lib64/python2.6/ConfigParser.py", line 565, in items
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'app:main'

Do I need a newer python or is this a bug/regression?

Shane

Dr. Shane Sturrock
Senior Scientist
shane.sturr...@biomatters.com | P: +64 9 379 5064
BIOMATTERS
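[Editor's note: the traceback is just ConfigParser reporting that the ini file it read contains no [app:main] section. That is easy to reproduce outside Galaxy; the sketch below uses Python 3's configparser, which behaves the same as the Python 2.6 ConfigParser in the traceback for this case.]

```python
# Reproduce ConfigParser's NoSectionError with an ini file that lacks
# the [app:main] section the caller asks for.
import configparser

parser = configparser.ConfigParser()
parser.read_string("[server:main]\nhost = 127.0.0.1\n")  # no [app:main]

try:
    parser.items("app:main")
except configparser.NoSectionError as err:
    print(err)   # No section: 'app:main'
```

So the error means the script read a config file without that section (or no file at all), not that the Python version is too old.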
Re: [galaxy-dev] Problem with set_user_disk_usage.py script
Thanks Martin, that solves it. What I ended up doing (since my install is pretty old now) was to symlink the galaxy-dist/universe_wsgi.ini to galaxy-dist/config/galaxy.ini, and then the script started working.

Shane

> On 9/06/2015, at 3:15 pm, Martin Čech <mar...@bx.psu.edu> wrote:
>
> The script by default expects the Galaxy config to be at config/galaxy.ini. If you have the Galaxy config in a different place, you can use the --config flag to specify the path to it. The config location was changed recently (from the Galaxy root folder, with a different filename) - that is why the defaults differ from your setup. Sorry for the inconvenience.
>
> Martin
>
> On Mon, Jun 8, 2015 at 9:43 PM Shane Sturrock <sh...@biomatters.com> wrote:
>> I recently updated our server to the latest release and have just had a user who ran up a lot of space and isn't seeing it come back when she purges her data. I've tried using the set_user_disk_usage.py script as I've previously done to resolve this, but now it is failing with the following:
>>
>> (galaxy_env)[galaxy@galaxy scripts]$ ./set_user_disk_usage.py
>> Loading Galaxy model...
>> Traceback (most recent call last):
>>   File "./set_user_disk_usage.py", line 85, in <module>
>>     model, object_store, engine = init()
>>   File "./set_user_disk_usage.py", line 43, in init
>>     for key, value in config_parser.items( "app:main" ):
>>   File "/usr/lib64/python2.6/ConfigParser.py", line 565, in items
>>     raise NoSectionError(section)
>> ConfigParser.NoSectionError: No section: 'app:main'
>>
>> Do I need a newer python or is this a bug/regression?
>>
>> Shane

Dr. Shane Sturrock
Senior Scientist
shane.sturr...@biomatters.com | P: +64 9 379 5064
BIOMATTERS
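[Editor's note: both fixes discussed in this thread can be sketched in a couple of shell lines; /path/to/galaxy-dist is a placeholder for the legacy layout described above.]

```shell
# Option 1: symlink the legacy config into the location the script expects.
# Use an absolute target so the link still resolves from inside config/.
cd /path/to/galaxy-dist
ln -s "$(pwd)/universe_wsgi.ini" config/galaxy.ini

# Option 2: point the script at the old config file directly,
# using the --config flag Martin mentions.
python scripts/set_user_disk_usage.py --config universe_wsgi.ini
```

A relative target of plain universe_wsgi.ini would create a dangling link, since symlink targets resolve relative to the link's own directory (config/), not the directory you ran ln from.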