Re: [galaxy-dev] Is the new tool repositories summary in the monthly newsletter useful?
On Wed, Oct 8, 2014 at 12:49 AM, Dave Clements <cleme...@galaxyproject.org> wrote:

> Hi All,
>
> The October Galaxy newsletter went out a week ago. Buried at the bottom is "36 new ToolShed repos" -- https://wiki.galaxyproject.org/GalaxyUpdates/2014_10#ToolShed_Contributions -- which lists repositories that were published in the Galaxy Project ToolShed in the previous month. I have two questions about this:
>
> 1. How useful is this summary? Compiling it is a manual process and it's kind of mind-numbing. Most months it takes around two hours (I think). I find it moderately useful, so if most Galaxy admins think the same, it is probably a good time investment overall.
>
> 2. If we keep the summary, should we put it in the Dev News Briefs instead? I'm kinda thinking this summary is a better match for the Dev News Briefs (every release) than it is for the general newsletter (every month).

I would suggest both (easy if it is just a link, a tiny bit of copy and paste if not), but that wasn't an option on the Google form.

Peter

___________________________________________________________
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at: http://galaxyproject.org/search/mailinglists/
[galaxy-dev] ToolShed tool preview broken (TestToolShed too)
Hi all,

From the new-tools information Dave compiled for the last Galaxy Update (https://wiki.galaxyproject.org/GalaxyUpdates/2014_10#ToolShed_Contributions) I had a look at galaxyp's filter_by_fasta_ids ("Extract sequences from a FASTA file based on a list of IDs") tool: https://toolshed.g2.bx.psu.edu/view/galaxyp/filter_by_fasta_ids

I wanted to see how it compared to my own similar tools (which handle FASTA, FASTQ, SFF and could cover more -- they replaced my older single-format filter tools):

https://toolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id
https://toolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id

Now for the bug report: clicking on the button (under "valid tools") which would normally give a preview of the tool form is failing, giving just "Internal Server Error". I have tried a random selection of other tools and this seems to be universal; moreover, the TestToolShed also seems to have the same problem.

Regards,

Peter
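For readers unfamiliar with what these tools do, here is a minimal stand-alone sketch of the core operation: keeping only the FASTA records whose identifier appears in a given ID list. The function name and the pure-Python approach are illustrative only, not the code of either tool (which also handle other formats and edge cases):

```python
def filter_fasta_by_ids(fasta_lines, wanted_ids):
    """Yield only the lines of FASTA records whose ID is in wanted_ids.

    The record ID is taken to be the first whitespace-separated word
    after the '>' on a header line.
    """
    keep = False
    for line in fasta_lines:
        if line.startswith(">"):
            keep = line[1:].split(None, 1)[0] in wanted_ids
        if keep:
            yield line

# Tiny worked example:
records = [">seq1 desc", "ACGT", ">seq2", "TTTT", ">seq3 x", "GGGG"]
kept = list(filter_fasta_by_ids(records, {"seq1", "seq3"}))
# kept contains the seq1 and seq3 records only
```

In practice one would stream the file line by line rather than hold it in a list, which is why the sketch is written as a generator.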
Re: [galaxy-dev] SQLalchemy InvalidRequestError
Hi all again,

Seems I am not so fortunate that this would just go away. It appears to be happening sometimes at start-up time for one of the handler processes. The first thing that appears to go wrong is this, just after starting the job handler queue:

---
galaxy.jobs.handler INFO 2014-10-06 14:37:51,220 job handler queue started
galaxy.sample_tracking.external_service_types DEBUG 2014-10-06 14:37:51,246 Loaded external_service_type: Simple unknown sequencer 1.0.0
galaxy.sample_tracking.external_service_types DEBUG 2014-10-06 14:37:51,253 Loaded external_service_type: Applied Biosystems SOLiD 1.0.0
galaxy.queue_worker INFO 2014-10-06 14:37:51,254 Initalizing Galaxy Queue Worker on sqlalchemy+postgres://galaxy:xxx@158.119.147.86:5432/galaxyprod
galaxy.jobs DEBUG 2014-10-06 14:37:51,416 (78355) Working directory for job is: /phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/database/job_working_directory/078/78355
galaxy.web.framework.base DEBUG 2014-10-06 14:37:51,454 Enabling 'data_admin' controller, class: DataAdmin
galaxy.jobs.handler ERROR 2014-10-06 14:37:51,464 failure running job 78355
Traceback (most recent call last):
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/lib/galaxy/jobs/handler.py", line 243, in __monitor_step
    job_state = self.__check_if_ready_to_run( job )
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/lib/galaxy/jobs/handler.py", line 333, in __check_if_ready_to_run
    state = self.__check_user_jobs( job, self.job_wrappers[job.id] )
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/lib/galaxy/jobs/handler.py", line 417, in __check_user_jobs
    if job.user:
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/attributes.py", line 168, in __get__
    return self.impl.get(instance_state(instance),dict_)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/attributes.py", line 453, in get
    value = self.callable_(state, passive)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/strategies.py", line 508, in _load_for_state
    return self._emit_lazyload(session, state, ident_key)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/strategies.py", line 552, in _emit_lazyload
    return q._load_on_ident(ident_key)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2512, in _load_on_ident
    return q.one()
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2184, in one
    ret = list(self)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2227, in __iter__
    return self._execute_and_instances(context)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2240, in _execute_and_instances
    close_with_result=True)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2231, in _connection_from_session
    **kw)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 774, in connection
    bind = self.get_bind(mapper, clause=clause, **kw)
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 1052, in get_bind
    c_mapper = mapper is not None and _class_to_mapper(mapper) or None
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/util.py", line 680, in _class_to_mapper
    mapperlib.configure_mappers()
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/mapper.py", line 2263, in configure_mappers
    mapper._post_configure_properties()
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/mapper.py", line 1172, in _post_configure_properties
    prop.init()
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/interfaces.py", line 128, in init
    self.do_init()
  File "/phengs/hpc_storage/home/galaxy_hpc/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/properties.py", line 910, in do_init
    self._process_dependent_arguments()
  File
[galaxy-dev] testtoolshed internal server error
Anyone else getting this when trying to upload to a TestToolShed repository? I'm using the "upload files to repository" function under "repository actions" and get a blank page with "internal server error". Worked fine yesterday.

Ciao,
Stef
Re: [galaxy-dev] SQLalchemy InvalidRequestError
Hi again Ulf,

Thanks for the info. A few questions to help me track this down: does the postgres database reside on a remote box from Galaxy? And is it very large?

Running the latest Galaxy may not change anything related to this particular issue, but you could always try it. SQLAlchemy is fixed at the latest version we can currently support without reworking how the migration scripts function (which we will do, moving to Alembic, in the future), and I do suspect that this is actually a bug in SQLAlchemy mapper initialization, but we should be able to come up with an interim workaround.

Finally, if this is a blocker for you: while it's not trivial (and I am still going to fix this bug), setting up an AMQP (RabbitMQ) server and configuring your Galaxy instances to communicate using that is a workaround.

On Oct 8, 2014 10:45 AM, Ulf Schaefer <ulf.schae...@phe.gov.uk> wrote:

> Hi all again
>
> Seems I am not so fortunate that this would just go away. It appears to be happening sometimes at start-up time for one of the handler processes.
> [quoted log and traceback snipped; identical to the earlier message in this thread]
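The traceback in this thread ends inside SQLAlchemy's `mapperlib.configure_mappers()`, which normally runs lazily on the first mapped-attribute access. A stdlib-only illustration of the suspected failure mode and of the eager-initialization workaround idea follows. None of this is Galaxy or SQLAlchemy code; the names are made up, and the real-world analogue of `_configure()` would be calling `sqlalchemy.orm.configure_mappers()` once at startup, before handler threads begin touching model attributes:

```python
import threading

# A registry that is configured lazily on first use, loosely analogous
# to SQLAlchemy's mapper registry.
_registry = {}
_configured = False
_lock = threading.Lock()

def _configure():
    """One-time configuration step (stands in for configure_mappers())."""
    global _configured
    with _lock:
        if not _configured:
            _registry["user"] = "mapped"
            _configured = True

def get_user():
    # Lazy path: the first attribute access triggers configuration.
    # Without the lock, two threads hitting this at once could observe
    # a half-configured registry -- the shape of the error above.
    if not _configured:
        _configure()
    return _registry["user"]

# Eager workaround idea: configure once at startup, before any worker
# threads exist, so no thread ever races the configuration step.
_configure()

results = []

def worker():
    results.append(get_user())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the eager call in place, every worker sees a fully configured registry regardless of scheduling.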
Re: [galaxy-dev] testtoolshed internal server error
On Wed, Oct 8, 2014 at 11:20 AM, Stef van Lieshout <stefvanliesh...@fastmail.fm> wrote:

> Anyone else getting this when trying to upload to a TestToolShed repository? I'm using the "upload files to repository" function under "repository actions" and get a blank page with "internal server error". Worked fine yesterday.
>
> Ciao,
> Stef

There's a chance it is the same root problem as this issue, which I hit a couple of hours ago (again an internal server error): http://lists.bx.psu.edu/pipermail/galaxy-dev/2014-October/020614.html

Peter
Re: [galaxy-dev] testtoolshed internal server error
Ok, works for me again. Just a little hiccup I guess...

Stef

----- Original message -----
From: Peter Cock <p.j.a.c...@googlemail.com>
To: Stef van Lieshout <stefvanliesh...@fastmail.fm>
Cc: Galaxy Dev <galaxy-...@bx.psu.edu>
Subject: Re: [galaxy-dev] testtoolshed internal server error
Date: Wed, 8 Oct 2014 11:33:36 +0100

> There's a chance it is the same root problem as this issue, which I hit a couple of hours ago (again an internal server error): http://lists.bx.psu.edu/pipermail/galaxy-dev/2014-October/020614.html
>
> Peter
>
> [earlier quoting snipped]
Re: [galaxy-dev] testtoolshed internal server error
OK good -- my issue with the ToolShed works now too :)

On Wed, Oct 8, 2014 at 11:44 AM, Stef van Lieshout <stefvanliesh...@fastmail.fm> wrote:

> Ok, works for me again. Just a little hiccup I guess...
>
> Stef
>
> [earlier quoting snipped]
Re: [galaxy-dev] ToolShed tool preview broken (TestToolShed too)
On Wed, Oct 8, 2014 at 9:22 AM, Peter Cock <p.j.a.c...@googlemail.com> wrote:

> Hi all,
>
> [original bug report snipped; see the first message in this thread]

This is working again now. See also another possibly related "Internal Server Error" on upload, which is also working again now: http://lists.bx.psu.edu/pipermail/galaxy-dev/2014-October/020619.html

Peter
Re: [galaxy-dev] SQLalchemy InvalidRequestError
Hi Dannon,

Yes, the database is running on a different server from the one running Galaxy. They are both VMs running CentOS (6.5 on the Galaxy server, 6.2 on the database server). The postgres version is 8.4.9 and the database size is 712,161,040. I suspect that is not very large compared to some others. There are a number of other databases running on the same server; the one most frequently used belongs to our test Galaxy server, which runs on yet another VM. That one is much smaller (25,319,184). Both servers are on the same subnet. The problem is with our production Galaxy (of course).

Are there any instructions around on how to set up RabbitMQ for my Galaxy?

Thanks for looking into this.

Ulf

On 08/10/14 11:26, Dannon Baker wrote:

> Hi again Ulf,
>
> Thanks for the info. A few questions to help me track this down: does the postgres database reside on a remote box from Galaxy? And is it very large?
>
> Running the latest Galaxy may not change anything related to this particular issue, but you could always try it. SQLAlchemy is fixed at the latest version we can currently support without reworking how the migration scripts function (which we will do, moving to Alembic, in the future), and I do suspect that this is actually a bug in SQLAlchemy mapper initialization, but we should be able to come up with an interim workaround.
>
> Finally, if this is a blocker for you: while it's not trivial (and I am still going to fix this bug), setting up an AMQP (RabbitMQ) server and configuring your Galaxy instances to communicate using that is a workaround.
>
> On Oct 8, 2014 10:45 AM, Ulf Schaefer <ulf.schae...@phe.gov.uk> wrote:
>
>> Hi all again
>>
>> Seems I am not so fortunate that this would just go away. It appears to be happening sometimes at start-up time for one of the handler processes.
>> [quoted log and traceback snipped; identical to the earlier message in this thread]
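On the RabbitMQ question: a hedged sketch of what the configuration change Dannon describes might look like. The queue-worker log line above ("Initalizing Galaxy Queue Worker on sqlalchemy+postgres://...") suggests the broker URL is configurable; the option name `amqp_internal_connection`, the section, and the example URL below are assumptions to verify against the sample config shipped with your Galaxy release:

```ini
# Sketch only -- check the option name and URL format against your
# release's universe_wsgi.ini / galaxy.ini sample.
[app:main]
# Database-backed default, matching the log line above:
#amqp_internal_connection = sqlalchemy+postgres://galaxy:password@dbhost:5432/galaxyprod
# Point the queue worker at a RabbitMQ broker instead; "galaxy_internal"
# is an example vhost you would create yourself on the broker:
amqp_internal_connection = amqp://galaxy:password@rabbit-host:5672/galaxy_internal
```

All Galaxy processes (web and handlers) would need the same setting so they communicate through the same broker.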
Re: [galaxy-dev] Is the new tool repositories summary in the monthly newsletter useful?
Hi Peter, all,

I've added "post in both places" as an option. So far we only have two responses...

Thanks,

Dave C

On Wed, Oct 8, 2014 at 1:16 AM, Peter Cock <p.j.a.c...@googlemail.com> wrote:

> [quoted thread snipped; see the earlier messages in this thread]

--
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
https://wiki.galaxyproject.org/
[galaxy-dev] CloudMan: Autoscaling => "Unable to run this job due to a cluster error"
Hello Galaxy Team!

I've set up a vanilla CloudMan instance using AWS. I've set the head node to not handle jobs and have set autoscaling to a minimum of 0 and a maximum of 4 worker nodes. Upon submitting a job, it fails with the following error:

Unable to run this job due to a cluster error, please retry it later

Now if I set autoscaling to a minimum of 1 worker node it works fine. But I would rather not always have 1 worker node up. Any recommendations?

Thank you very much,

Robert