Re: [galaxy-dev] data collections - workflow - bug?

2015-03-30 Thread John Chilton
Torsten - thanks for reporting this. This bug was reported a few more
times - and after several false starts I think we have finally found a
fix (with a lot of help from Marius van den Beek, Philip Mabon, Aaron
Petkau, and Franklin Bristow - big thanks to each of them).

Pull Request with Fix: https://github.com/galaxyproject/galaxy/pull/52
Trello Card Tracking Progress: https://trello.com/c/I0n23JEP

Thanks for your patience, all,
-John

On Tue, Jan 27, 2015 at 12:35 PM, Torsten Houwaart wrote:
> I will try to build a smaller test case that fails and talk about the job
> handlers with Björn tomorrow. I tried this with quite a big workflow that I
> built from scratch (in the editor). I'll get back to you.
>
> Best,
> Torsten

Re: [galaxy-dev] data collections - workflow - bug?

2015-01-28 Thread Torsten Houwaart
I will try to build a smaller test case that fails and talk about the job
handlers with Björn tomorrow. I tried this with quite a big workflow that I
built from scratch (in the editor). I'll get back to you.


Best,
Torsten


Re: [galaxy-dev] data collections - workflow - bug?

2015-01-27 Thread John Chilton
Sorry again about this mixup (bringing the conversation back on the list).
Does this happen every time you run a workflow with collections? Do you
know how many job handlers Galaxy is configured with?

-John

On Mon, Jan 26, 2015 at 11:24 AM, John Chilton wrote:
> Ugh, misread sqlalchemy as sqlite. I will work on this.
>
> -John
>
> On Mon, Jan 26, 2015 at 11:16 AM, Torsten Houwaart wrote:
>> Hi John,
>>
>> I ran this on the Galaxy server in Freiburg: galaxy.uni-freiburg.de
>> Björn (who sits directly opposite my desk :) ) told me it's a Postgres
>> database, so that shouldn't be the problem.
>>
>> Best,
>> Torsten

Re: [galaxy-dev] data collections - workflow - bug?

2015-01-26 Thread John Chilton
Hey,

Really intensive database operations - dataset collections, but also
things like multi-running tools or running workflows over many individual
datasets - can very easily overwhelm the default SQLite database. This is
frustrating and shouldn't happen, but unfortunately it does. I would
recommend using a Postgres database when testing out dataset collections.
The good news is that it is easier than ever to get a fully fledged,
production-quality server thanks to Bjoern's Docker Galaxy image
(https://github.com/bgruening/docker-galaxy-stable) - it comes bundled
with Postgres and Slurm, so it should be able to handle the collection
operations. If you need to run Galaxy on a non-containerized server (for
instance because that is where the software is), more information on
setting up Galaxy can be found here:
https://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer
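
As a rough sketch of what that switch amounts to (the connection URLs
below are placeholders for illustration, not anyone's real settings):
Galaxy talks to its database through SQLAlchemy, so moving off the
bundled SQLite file is essentially a matter of pointing the same code at
a Postgres URL, which copes with concurrent writes from multiple job
handlers far better than a single SQLite file can.

# Illustration only -- placeholder URLs, not real credentials or an
# actual Galaxy configuration value.
from sqlalchemy import create_engine

# File-backed SQLite: effectively one writer at a time, so several job
# handler processes flushing at once contend for the database lock.
sqlite_engine = create_engine("sqlite:///database/universe.sqlite")

# Server-backed Postgres: built for many concurrent writers, which is
# what collection-heavy workflows generate.
postgres_engine = create_engine("postgresql://galaxy_user:secret@localhost:5432/galaxy_db")

In Galaxy itself this comes down to setting the database connection
option in the server config to a Postgres URL like the second one; the
production server page linked above walks through that setup.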

Here is a Trello card tracking progress on the database optimization
efforts, if you are interested: https://trello.com/c/UPLsMKQI

Very sorry.

-John

On Mon, Jan 26, 2015 at 9:35 AM, Torsten Houwaart wrote:
> Hello Galaxy Devs,
>
> I was using data collections (for the first time) for a new workflow of ours
> and I ran into this problem. There was no complaint from the workflow editor
> and I could start the workflow, but then the following error happened.
> If you need more information about the workflow or anything else, let me know.
>
> Best,
> Torsten H.

[galaxy-dev] data collections - workflow - bug?

2015-01-26 Thread Torsten Houwaart

Hello Galaxy Devs,

I was using data collections (for the first time) for a new workflow of ours and I
ran into this problem. There was no complaint from the workflow editor and I could
start the workflow, but then the following error happened.
If you need more information about the workflow or anything else, let me know.

Best,
Torsten H.


job traceback:
Traceback (most recent call last):
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 565, in finish_job
    job_state.job_wrapper.finish( stdout, stderr, exit_code )
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1250, in finish
    self.sa_session.flush()
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/scoping.py", line 114, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/session.py", line 1718, in flush
    self._flush(objects)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/session.py", line 1789, in _flush
    flush_context.execute()
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/unitofwork.py", line 331, in execute
    rec.execute(self)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/unitofwork.py", line 475, in execute
    uow
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/persistence.py", line 59, in save_obj
    mapper, table, update)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/persistence.py", line 485, in _emit_update_statements
    execute(statement, params)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py", line 1449, in execute
    params)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context
    context)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/usr/local/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/default.py", line 331, in do_execute
    cursor.execute(statement, parameters)
DBAPIError: (TransactionRollbackError) deadlock detected
DETAIL:  Process 3144 waits for ShareLock on transaction 2517124; blocked by process 3143.
Process 3143 waits for ShareLock on transaction 2517123; blocked by process 3144.
HINT:  See server log for query details.
 'UPDATE workflow_invocation SET update_time=%(update_time)s WHERE workflow_invocation.id = %(workflow_invocation_id)s' {'update_time': datetime.datetime(2015, 1, 26, 14, 20, 4, 155440), 'workflow_invocation_id': 5454}
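
For anyone who hits the same message: a TransactionRollbackError here is
Postgres aborting one of two transactions that are each waiting on a row
lock the other holds - in this case two handler processes updating the
same workflow_invocation row. The usual application-level response to
this class of error is to roll back and re-run the whole unit of work.
The sketch below shows only that generic pattern (the function and its
names are made up for illustration); it is not the fix that was
eventually merged - see the pull request linked at the top of this thread.

# Generic retry-on-deadlock sketch, illustration only (not Galaxy code).
# Assumes a psycopg2-backed SQLAlchemy session; "40P01" is the Postgres
# SQLSTATE for deadlock_detected.
import time

from sqlalchemy.exc import DBAPIError


def run_with_deadlock_retry(session, unit_of_work, attempts=3, delay=0.5):
    """Run unit_of_work(session) and commit, redoing the whole unit if
    Postgres aborts it with a deadlock."""
    for attempt in range(attempts):
        try:
            unit_of_work(session)
            session.commit()
            return
        except DBAPIError as exc:
            # The aborted transaction must be rolled back before the
            # session can be used again.
            session.rollback()
            deadlocked = getattr(exc.orig, "pgcode", None) == "40P01"
            if deadlocked and attempt < attempts - 1:
                time.sleep(delay)
                continue
            raise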

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/