[galaxy-dev] Error executing workflow via API

2014-09-08 Thread Neil.Burdett
Hi,
 I execute a number of workflows via the API, which all work fine. However, the longest one I use (it consists of three linked tasks) returns an error. I call the workflow from a Python script using popen, in the following format:

/home/galaxy/milxcloud-new/scripts/api/workflow_execute.py 
d2fcd3feb4c6318c496d55fa8869b67c http://barium-rbh/new/api/workflows 
f597429621d6eb2b hist_id=f597429621d6eb2b 4=hda=c8c00aa41dc69085

The task appears in the history panel and all tasks in the workflow eventually complete; however, the task that invokes the workflow via the popen call above returns the following error:

Traceback (most recent call last):
  File "/home/galaxy/milxcloud-new/scripts/api/workflow_execute.py", line 31, in <module>
    main()
  File "/home/galaxy/milxcloud-new/scripts/api/workflow_execute.py", line 28, in main
    submit( sys.argv[1], sys.argv[2], data )
  File "/home/galaxy/milxcloud-new/scripts/api/common.py", line 117, in submit
    r = post( api_key, url, data )
  File "/home/galaxy/milxcloud-new/scripts/api/common.py", line 51, in post
    return json.loads( urllib2.urlopen( req ).read() )
  File "/usr/lib/python2.7/socket.py", line 351, in read
    data = self._sock.recv(rbufsize)
  File "/usr/lib/python2.7/httplib.py", line 541, in read
    return self._read_chunked(amt)
  File "/usr/lib/python2.7/httplib.py", line 586, in _read_chunked
    raise IncompleteRead(''.join(value))
httplib.IncompleteRead: IncompleteRead(147 bytes read)

This worked on older versions of Galaxy. I need it to work so that I can monitor the workflow and know when all tasks are complete.

Has anyone seen this error, or got an idea how to solve it?
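In case a client-side workaround helps while the server side is investigated: the partial body that IncompleteRead carries around is often the complete JSON reply, so a tolerant read can salvage it. A minimal sketch (modern Python naming; Python 2's httplib.IncompleteRead exposes the same `partial` attribute, and `read_json_tolerant` is a hypothetical helper, not part of common.py):

```python
import http.client
import json

def read_json_tolerant(response):
    """Read an HTTP response body, salvaging whatever bytes arrived
    if the server drops the connection mid-chunk (IncompleteRead)."""
    try:
        body = response.read()
    except http.client.IncompleteRead as exc:
        # exc.partial holds the bytes received before the drop; for a
        # short JSON reply this is frequently the whole payload.
        body = exc.partial
    return json.loads(body)
```

Whether the salvaged bytes parse depends on where the connection dropped, so this only masks the symptom; the root cause still needs chasing on the server or proxy side.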

hg summary reports:
% hg summary
parent: 13771:7a4d321c0e38 tip
 Updated tag latest_2014.06.02 for changeset 8c30e91bc9ae
branch: stable
commit: (clean)
update: (current)

Thanks
Neil
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Error executing workflow via API

2014-09-08 Thread John Chilton
There are definitely limitations to the size of workflows that can be
executed right now, but I feel like that problem should be getting
better, not worse, so this is a little confusing. Did something besides
the Galaxy version change (proxy settings, timeouts, etc.)?

The client-side error is interesting, but I feel like a server-side error
would reveal more - is there a server-side error in the Galaxy logs
for this?

-John


Re: [galaxy-dev] Error executing workflow via API

2014-09-08 Thread Neil.Burdett
Hi John,
   Thanks for the quick response. There seems to be nothing obvious in
the server-side log. This is all I get:

galaxy.jobs.runners DEBUG 2014-09-09 11:43:23,850 (322) command is: python 
/home/galaxy/milxcloud-new/tools/cte/cte_process_input_data.py 
'/home/galaxy/milxcloud-new/tools/cte/cte_process_input_data.py' 
'/home/galaxy/milxcloud-new/database/files/000/dataset_447.dat' 'Input data 
(141-S-0851-MRI-T1-Screening)' 
'/home/galaxy/milxcloud-new/database/files/000/dataset_447_files' 
d2fcd3feb4c6318c496d55fa8869b67c 
'/home/galaxy/milxcloud-new/database/files/000/dataset_454.dat' 
'/home/galaxy/milxcloud-new/database/job_working_directory/000/322/dataset_454_files'
galaxy.jobs.runners.local DEBUG 2014-09-09 11:43:23,852 (322) executing job 
script: 
/home/galaxy/milxcloud-new/database/job_working_directory/000/322/galaxy_322.sh
galaxy.jobs DEBUG 2014-09-09 11:43:23,882 (322) Persisting job destination 
(destination id: local:///)
127.0.0.1 - - [09/Sep/2014:11:43:23 +1000] POST 
/new/api/workflows?key=d2fcd3feb4c6318c496d55fa8869b67c HTTP/1.1 200 - - 
Python-urllib/2.7

I still have an 'old' version of Galaxy, so I can confirm that this workflow
works fine on older versions of Galaxy.

With this 'latest' version of Galaxy I am using the default values in
universe_wsgi.ini. I am happy to modify values to see if I can resolve this
issue if you can suggest what to modify.

Also, in older versions of Galaxy I get a response almost immediately after
executing the popen command (and I still do with the other workflows I use).
However, in this particular case I don't get a response for five minutes.
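For the monitoring requirement, one stopgap is not to rely on the blocking POST at all and instead poll the history state until it is terminal. A sketch under assumptions: `fetch_state` is a hypothetical zero-argument callable standing in for a GET against the histories API, injected so the loop itself is self-contained:

```python
import time

def wait_for_history(fetch_state, timeout=600, interval=5, sleep=time.sleep):
    """Poll until the history reaches a terminal state ('ok' or 'error').

    fetch_state is a zero-argument callable returning the current state
    string; in practice it would wrap a request to the histories API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state()
        if state in ("ok", "error"):
            return state
        sleep(interval)
    raise TimeoutError("history did not reach a terminal state in time")
```

The sleep function is injectable only so the loop can be exercised without real delays; the defaults are arbitrary.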

Thanks
Neil


Re: [galaxy-dev] Error executing workflow via API

2014-09-08 Thread Neil.Burdett
Hi,
It seems all my other workflows that work have a txt output feeding into an
input that accepts text. This particular workflow (the one that's failing)
feeds an html output into an html input. Could this be the reason for the
failure? Has anyone else got html outputs feeding inputs in their workflows?
Or am I on the wrong track again? The workflows are the same on this version
of Galaxy as they are on the older version that works.

Everything else remains the same, i.e. same desktop etc.

Neil


[galaxy-dev] Error running workflow via API

2014-04-09 Thread John Marmaduke Eppley
The API interface is translating my parameters into Python repr strings instead
of actual strings.

A command that should look like this:
createPrimerFile.py /minilims/galaxy-data/primerTemplates/nextera.2x.primers.fasta NN NN > /slipstream/galaxy/production/data/files/020/dataset_20664.dat

is getting run as this:
createPrimerFile.py /minilims/galaxy-data/primerTemplates/nextera.2x.primers.fasta <galaxy.tools.parameters.basic.RuntimeValue object at 0x792be10> NN > /slipstream/galaxy/production/data/files/020/dataset_20728.dat

The input table on the tool info page is:

Input Parameter              Value                                                                   Note for rerun
chemistry type:              <galaxy.tools.parameters.basic.RuntimeValue object at 0x7fee4425ca10>
First barcode                NN
Second (optional) barcode    NN


I’m running a slightly out-of-date version of Galaxy (I think it is changeset
11080:87b586afb054) because this is BioTeam’s Slipstream appliance. Is there
any chance this is a known bug? I didn’t see anything in Trello, but I’m really
not sure I’ve got the hang of Trello yet.

Thanks
-john




Re: [galaxy-dev] Error running workflow via API

2014-04-09 Thread John Marmaduke Eppley
I’ve figured this one out. The tool id changed on me. Sorry for the spam.
-j



Re: [galaxy-dev] error in workflow

2014-03-12 Thread Hans-Rudolf Hotz

Hi Milad

The error message "database is locked" might be a hint.

Are you using a PostgreSQL database or just SQLite? If the latter, you
will often run into trouble when running more than one job at a time, due
to file locking.
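The failure mode is easy to reproduce outside Galaxy. A small sketch: two connections to the same SQLite file, the first holding a write transaction while the second tries to start one (the file path and table name are illustrative):

```python
import os
import sqlite3
import tempfile

# Two connections to one SQLite file. Connection A takes the write lock;
# connection B then fails with "database is locked" once its (short)
# busy timeout expires -- the same error Galaxy hits when several jobs
# try to update an SQLite backend at once.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE jobs (id INTEGER)")
a.execute("BEGIN IMMEDIATE")                 # A acquires the write lock
a.execute("INSERT INTO jobs VALUES (1)")
try:
    b.execute("BEGIN IMMEDIATE")             # B cannot get the lock
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)
a.execute("COMMIT")
```

A database server such as PostgreSQL handles this concurrency at the row level instead of locking the whole file, which is why switching backends makes the error go away.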


Regards, Hans-Rudolf




[galaxy-dev] error in workflow

2014-03-11 Thread Milad Bastami
Dear Galaxy Developers,

I'm running Galaxy locally on Ubuntu and trying to run a workflow on multiple
datasets (separately). Occasionally when I run the workflow on an input dataset
I get an "Unable to finish job" error in some steps. Most of the time the
problem is solved by running the workflow again, but it is happening more and
more frequently. I have attached the error message here. I suspect it may be
related to the growing number of datasets in the current history (1757
datasets, ~350 GB in size).

I will appreciate your help.

Traceback (most recent call last):
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/lib/galaxy/jobs/runners/local.py", line 116, in queue_job
    job_wrapper.finish( stdout, stderr, exit_code )
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1068, in finish
    self.sa_session.flush()
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/scoping.py", line 114, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 1718, in flush
    self._flush(objects)
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 1804, in _flush
    transaction.commit()
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 365, in commit
    t[1].commit()
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 2045, in commit
    self._do_commit()
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 2075, in _do_commit
    self.connection._commit_impl()
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1280, in _commit_impl
    self._handle_dbapi_exception(e, None, None, None, None)
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1277, in _commit_impl
    self.engine.dialect.do_commit(self.connection)
  File "/media/milad/acfed08f-e8e7-43d5-b582-5b5acdba9072/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/default.py", line 307, in do_commit
    connection.commit()
OperationalError: (OperationalError) database is locked None None

[galaxy-dev] error running workflow

2012-07-02 Thread J.W.F.van_der_Heijden


Hi all,

I have added some tools to my local install of Galaxy. To get to my final
results I have to run a lot of steps. When I run each step separately it
works fine, but when I try to run all the steps at once with a workflow I
get the following error message:

OperationalError: (OperationalError) database is locked u'UPDATE dataset SET 
update_time=?, state=? WHERE dataset.id = ?' ['2012-07-02 11:34:48.135419', 
'queued', 529]

I have noticed that the processes eat a lot of my computer resources. Other 
than that I have no clue what the problem could be.

Any tips are appreciated.

Kind regards,
Jaap van der Heijden

Re: [galaxy-dev] error running workflow

2012-07-02 Thread Hans-Rudolf Hotz

Hi Jaap

are you using 'SQLite'? If so, I recommend switching to 'PostgreSQL',
see:
http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Production%20Server/#Switching_to_a_database_server
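For reference, the switch is a one-line change in the [app:main] section of universe_wsgi.ini; a sketch with placeholder credentials and database name (adjust host, user, password and database to your setup):

```ini
# universe_wsgi.ini, [app:main] section
# default SQLite backend -- prone to "database is locked" under load:
#database_connection = sqlite:///./database/universe.sqlite

# PostgreSQL (user, password and database name here are placeholders):
database_connection = postgresql://galaxy:secret@localhost:5432/galaxy
```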



Regards, Hans


