[
https://issues.apache.org/jira/browse/HDDS-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102740#comment-17102740
]
Aravindan Vijayan edited comment on HDDS-3561 at 5/8/20, 5:03 PM:
------------------------------------------------------------------
[~maobaolong] SQLite does not permit concurrent write connections, which is
probably why this is happening. We have switched the default SQL DB in Recon to
Derby as of HDDS-3411. Please try that out and let us know. You can also point
Recon at any other SQL DB by changing the following configs.
_ozone.recon.sql.db.jdbc.url=<>_
_ozone.recon.sql.db.driver=<>_
_ozone.recon.sql.db.jooq.dialect=<>_
The dialect value can be obtained from org.jooq.SQLDialect. For example, for
MySQL it is "MYSQL", "MYSQL_5_7" or "MYSQL_8_0".
> Recon hangs for several seconds because the database is locked.
> ---------------------------------------------------------------
>
> Key: HDDS-3561
> URL: https://issues.apache.org/jira/browse/HDDS-3561
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Recon
> Affects Versions: 0.6.0
> Reporter: maobaolong
> Priority: Major
>
> 2020-05-08 19:52:33,753 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Obtaining full snapshot from Ozone Manager
> 2020-05-08 19:52:33,972 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : /data11/baoloongmao/ozone-0.6.0-SNAPSHOT/recon-om-db-dir/om.snapshot.db_1588938753753
> 2020-05-08 19:52:33,972 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.recovery.ReconOmMetadataManagerImpl: Cleaning up old OM snapshot db at /data11/baoloongmao/ozone-0.6.0-SNAPSHOT/recon-om-db-dir/om.snapshot.db_1588938446752.
> 2020-05-08 19:52:34,092 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.recovery.ReconOmMetadataManagerImpl: Created OM DB handle from snapshot at /data11/baoloongmao/ozone-0.6.0-SNAPSHOT/recon-om-db-dir/om.snapshot.db_1588938753753.
> 2020-05-08 19:52:34,199 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Calling reprocess on Recon tasks.
> 2020-05-08 19:52:34,199 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask: Starting a 'reprocess' run of ContainerKeyMapperTask.
> 2020-05-08 19:52:34,569 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl: Creating new Recon Container DB at /data11/baoloongmao/ozone-0.6.0-SNAPSHOT/recon-db-dir/recon-container-key.db_1588938754199
> 2020-05-08 19:52:34,569 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl: Cleaning up old Recon Container DB at /data11/baoloongmao/ozone-0.6.0-SNAPSHOT/recon-db-dir/recon-container-key.db_1588936734517.
> 2020-05-08 19:52:49,736 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask: Completed 'reprocess' of ContainerKeyMapperTask.
> 2020-05-08 19:52:49,736 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask: It took me 15.537 seconds to process 9485 keys.
> 2020-05-08 19:52:51,969 [pool-9-thread-1] INFO org.apache.hadoop.ozone.recon.tasks.FileSizeCountTask: Completed a 'reprocess' run of FileSizeCountTask.
> 2020-05-08 19:53:52,210 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Syncing data from Ozone Manager.
> 2020-05-08 19:53:52,210 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Obtaining delta updates from Ozone Manager
> 2020-05-08 19:53:52,212 [pool-8-thread-1] INFO org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Number of updates received from OM : 0
> 2020-05-08 19:53:55,213 [pool-8-thread-1] WARN org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Unable to get and apply delta updates from OM.
> 2020-05-08 19:53:55,213 [pool-8-thread-1] WARN org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl: Unable to get and apply delta updates from OM.
> org.jooq.exception.DataAccessException: SQL [update recon_task_status set last_updated_timestamp = ?, last_updated_seq_number = ? where recon_task_status.task_name = ?]; [SQLITE_BUSY] The database file is locked (database is locked)
> at org.jooq_3.11.9.SQLITE.debug(Unknown Source)
> at org.jooq.impl.Tools.translate(Tools.java:2429)
> at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:832)
> at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:364)
> at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:253)
> at org.jooq.impl.UpdatableRecordImpl.access$100(UpdatableRecordImpl.java:84)
> at org.jooq.impl.UpdatableRecordImpl$2.operate(UpdatableRecordImpl.java:209)
> at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:125)
> at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:205)
> at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:155)
> at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:150)
> at org.jooq.impl.DAOImpl.update(DAOImpl.java:205)
> at org.jooq.impl.DAOImpl.update(DAOImpl.java:180)
> at org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:398)
> at org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.lambda$start$0(OzoneManagerServiceProviderImpl.java:232)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.sqlite.SQLiteException: [SQLITE_BUSY] The database file is locked (database is locked)
> at org.sqlite.core.DB.newSQLException(DB.java:941)
> at org.sqlite.core.DB.newSQLException(DB.java:953)
> at org.sqlite.core.DB.throwex(DB.java:918)
> at org.sqlite.core.NativeDB.prepare_utf8(Native Method)
> at org.sqlite.core.NativeDB.prepare(NativeDB.java:134)
> at org.sqlite.core.DB.prepare(DB.java:257)
> at org.sqlite.core.CorePreparedStatement.<init>(CorePreparedStatement.java:47)
> at org.sqlite.jdbc3.JDBC3PreparedStatement.<init>(JDBC3PreparedStatement.java:30)
> at org.sqlite.jdbc4.JDBC4PreparedStatement.<init>(JDBC4PreparedStatement.java:19)
> at org.sqlite.jdbc4.JDBC4Connection.prepareStatement(JDBC4Connection.java:35)
> at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:241)
> at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:205)
> at org.jooq.impl.ProviderEnabledConnection.prepareStatement(ProviderEnabledConnection.java:109)
> at org.jooq.impl.SettingsEnabledConnection.prepareStatement(SettingsEnabledConnection.java:73)
> at org.jooq.impl.AbstractQuery.prepare(AbstractQuery.java:416)
> at org.jooq.impl.AbstractDMLQuery.prepare(AbstractDMLQuery.java:512)
> at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:322)
> ... 18 more