Hi Roberto,

It looks like you are probably still using the default SQLite database. You'll 
want to switch to e.g. PostgreSQL when exploring these more database-intensive 
features. See e.g.: 
https://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer#Switching_to_a_database_server
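
In case it's useful, the change itself is small. A minimal sketch, assuming a 
galaxy-dist-style config/galaxy.ini (or universe_wsgi.ini) and a local 
PostgreSQL database named "galaxy" owned by a "galaxy" role (adjust the names, 
password, host, and port to your setup):

    # In the [app:main] section of config/galaxy.ini (or universe_wsgi.ini),
    # replace the default SQLite URL with a PostgreSQL connection string:
    database_connection = postgresql://galaxy:yourpassword@localhost:5432/galaxy

After creating the empty database (e.g. with PostgreSQL's createdb galaxy) and 
restarting, Galaxy should create its tables on startup. PostgreSQL handles 
concurrent writers, so you won't hit SQLite's single-writer "database is 
locked" error when several split tasks touch the database at once.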

Thanks for using Galaxy,

Dan


On Feb 5, 2015, at 8:24 AM, Roberto Alonso <roal...@gmail.com> wrote:

> Hello,
> 
> I am trying to use parallelism in Galaxy. I added this entry to the tool XML config:
> 
> <tool id="fa_gc_content_1" name="Compute GC content">
>   <description>for each sequence in a file</description>
>   <parallelism method="basic" split_size="8" split_mode="number_of_parts"></parallelism>
> 
> 
> But when I run the job, the log shows the following:
> 
> Traceback (most recent call last):
>   File "/home/ralonso/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 158, in prepare_job
>     job_wrapper.prepare()
>   File "/home/ralonso/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1607, in prepare
>     tool_evaluator.set_compute_environment( compute_environment )
>   File "/home/ralonso/galaxy-dist/lib/galaxy/tools/evaluation.py", line 53, in set_compute_environment
>     incoming = self.tool.params_from_strings( incoming, self.app )
>   File "/home/ralonso/galaxy-dist/lib/galaxy/tools/__init__.py", line 2810, in params_from_strings
>     return params_from_strings( self.inputs, params, app, ignore_errors )
>   File "/home/ralonso/galaxy-dist/lib/galaxy/tools/parameters/__init__.py", line 103, in params_from_strings
>     value = params[key].value_from_basic( value, app, ignore_errors )
>   File "/home/ralonso/galaxy-dist/lib/galaxy/tools/parameters/basic.py", line 162, in value_from_basic
>     return self.to_python( value, app )
>   File "/home/ralonso/galaxy-dist/lib/galaxy/tools/parameters/basic.py", line 1999, in to_python
>     return app.model.context.query( app.model.HistoryDatasetAssociation ).get( int( value ) )
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 775, in get
>     return self._load_on_ident(key)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2512, in _load_on_ident
>     return q.one()
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2184, in one
>     ret = list(self)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2227, in __iter__
>     return self._execute_and_instances(context)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/query.py", line 2242, in _execute_and_instances
>     result = conn.execute(querycontext.statement, self._params)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1449, in execute
>     params)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
>     compiled_sql, distilled_params
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context
>     context)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context
>     context)
>   File "/home/ralonso/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/engine/default.py", line 331, in do_execute
>     cursor.execute(statement, parameters)
> OperationalError: (OperationalError) database is locked u'SELECT 
> history_dataset_association.id AS history_dataset_association_id, 
> history_dataset_association.history_id AS 
> history_dataset_association_history_id, 
> history_dataset_association.dataset_id AS 
> history_dataset_association_dataset_id, 
> history_dataset_association.create_time AS 
> history_dataset_association_create_time, 
> history_dataset_association.update_time AS 
> history_dataset_association_update_time, history_dataset_association.state AS 
> history_dataset_association_state, 
> history_dataset_association.copied_from_history_dataset_association_id AS 
> history_dataset_association_copied_from_history_dataset_association_id, 
> history_dataset_association.copied_from_library_dataset_dataset_association_id
>  AS 
> history_dataset_association_copied_from_library_dataset_dataset_association_id,
>  history_dataset_association.hid AS history_dataset_association_hid, 
> history_dataset_association.name AS history_dataset_association_name, 
> history_dataset_association.info AS history_dataset_association_info, 
> history_dataset_association.blurb AS history_dataset_association_blurb, 
> history_dataset_association.peek AS history_dataset_association_peek, 
> history_dataset_association.tool_version AS 
> history_dataset_association_tool_version, 
> history_dataset_association.extension AS 
> history_dataset_association_extension, history_dataset_association.metadata 
> AS history_dataset_association_metadata, 
> history_dataset_association.parent_id AS 
> history_dataset_association_parent_id, 
> history_dataset_association.designation AS 
> history_dataset_association_designation, history_dataset_association.deleted 
> AS history_dataset_association_deleted, history_dataset_association.purged AS 
> history_dataset_association_purged, history_dataset_association.visible AS 
> history_dataset_association_visible, 
> history_dataset_association.hidden_beneath_collection_instance_id AS 
> history_dataset_association_hidden_beneath_collection_instance_id, 
> history_dataset_association.extended_metadata_id AS 
> history_dataset_association_extended_metadata_id, dataset_1.id AS 
> dataset_1_id, dataset_1.create_time AS dataset_1_create_time, 
> dataset_1.update_time AS dataset_1_update_time, dataset_1.state AS 
> dataset_1_state, dataset_1.deleted AS dataset_1_deleted, dataset_1.purged AS 
> dataset_1_purged, dataset_1.purgable AS dataset_1_purgable, 
> dataset_1.object_store_id AS dataset_1_object_store_id, 
> dataset_1.external_filename AS dataset_1_external_filename, 
> dataset_1._extra_files_path AS dataset_1__extra_files_path, 
> dataset_1.file_size AS dataset_1_file_size, dataset_1.total_size AS 
> dataset_1_total_size, dataset_1.uuid AS dataset_1_uuid \nFROM 
> history_dataset_association LEFT OUTER JOIN dataset AS dataset_1 ON 
> dataset_1.id = history_dataset_association.dataset_id \nWHERE 
> history_dataset_association.id = ?' (1,)
> galaxy.jobs.runners ERROR 2015-02-05 12:58:11,431 (89_486) Failure preparing job
> 
> So when one task tries to run, it fails; it seems that the database is 
> locked by another task. When I run with 4 splits it never happens, but 
> with 5 it begins to happen. In fact, with 5 splits it sometimes doesn't 
> happen, but from 6 onward it always occurs.
> 
> Could you please help me?
> 
> 
> 
> Regards
> 

___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
