[
https://issues.apache.org/jira/browse/HDDS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17923505#comment-17923505
]
Wei-Chiu Chuang edited comment on HDDS-12182 at 2/3/25 10:55 PM:
-----------------------------------------------------------------
Thanks!
Looks like RocksDB is unable to load the database. Most likely the old db
metadata being compared against is in a different format than your
environment expects. Since this is on WSL, I suspect the line-ending
characters (i.e. CRLF vs. LF).
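One way to check that hypothesis (a sketch, not a confirmed diagnosis): RocksDB's {{CURRENT}} file is expected to contain the manifest name followed by a bare LF, so a CRLF anywhere in it (for example, introduced by git's {{core.autocrlf}} rewriting test resources on checkout) makes RocksDB report "CURRENT file corrupted". The snippet below simulates and detects that kind of damage on a throwaway file:

```shell
# Simulate a CRLF-damaged RocksDB CURRENT file and detect the CRLF.
f="$(mktemp)"
printf 'MANIFEST-000001\r\n' > "$f"    # CRLF where a bare LF is expected
if od -c "$f" | grep -q '\\r'; then    # od -c renders the CR byte as \r
  echo "CRLF detected"
fi
rm -f "$f"
```

If a check like this finds CRLF in the actual test resources, re-cloning with {{git config core.autocrlf input}} (or {{false}}) would be worth trying.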
{noformat}
2025-02-01 22:22:05,711 [main] ERROR utils.ContainerCache (ContainerCache.java:getDB(166)) - Error opening DB. Container:123 ContainerPath:/tmp/junit-17260296180396646534/metadata/123/123-dn-container.db
java.io.IOException: Failed init RocksDB, db path : /tmp/junit-17260296180396646534/metadata/123/123-dn-container.db, exception :org.rocksdb.RocksDBException CURRENT file corrupted
	at org.apache.hadoop.hdds.utils.db.RDBStore.<init>(RDBStore.java:184)
	at org.apache.hadoop.hdds.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:237)
	at org.apache.hadoop.ozone.container.metadata.AbstractDatanodeStore.initDBStore(AbstractDatanodeStore.java:97)
	at org.apache.hadoop.ozone.container.metadata.AbstractRDBStore.start(AbstractRDBStore.java:77)
	at org.apache.hadoop.ozone.container.metadata.AbstractRDBStore.<init>(AbstractRDBStore.java:59)
	at org.apache.hadoop.ozone.container.metadata.AbstractDatanodeStore.<init>(AbstractDatanodeStore.java:73)
	at org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaOneImpl.<init>(DatanodeStoreSchemaOneImpl.java:42)
	at org.apache.hadoop.ozone.container.keyvalue.helpers.BlockUtils.getUncachedDatanodeStore(BlockUtils.java:82)
	at org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:161)
	at org.apache.hadoop.ozone.container.keyvalue.helpers.BlockUtils.getDB(BlockUtils.java:137)
	at org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:256)
	at org.apache.hadoop.ozone.container.common.TestSchemaOneBackwardsCompatibility.newKvData(TestSchemaOneBackwardsCompatibility.java:619)
	at org.apache.hadoop.ozone.container.common.TestSchemaOneBackwardsCompatibility.testReadDeletedBlocks(TestSchemaOneBackwardsCompatibility.java:529)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}
Basically it's a test issue. The test verifies an upgrade from an old Ozone
version to a new one. Since upgrading from a Linux box to a Windows box is
unlikely, this test is not valid in your environment.
You can skip this test and let the build continue; use {{mvn test -fae}} to
run past failures and have them reported at the end.
> Error for building project on Windows Subsystem for Linux(WSL)
> --------------------------------------------------------------
>
> Key: HDDS-12182
> URL: https://issues.apache.org/jira/browse/HDDS-12182
> Project: Apache Ozone
> Issue Type: Bug
> Components: build
> Affects Versions: 2.0.0
> Environment: * *Operating System*: Ubuntu 24.04.1 LTS running on
> WSL2.
> * *Java Version*: OpenJDK 11.0.25.
> * *Maven Version*: Apache Maven 3.8.7.
> * *CPU Specs*: AMD Ryzen 5 5600H (6 cores, 12 threads).
> Reporter: Sylvia Chao
> Priority: Trivial
> Attachments:
> TEST-org.apache.hadoop.ozone.container.common.TestSchemaOneBackwardsCompatibility.zip,
> build.log
>
>
> While testing the possibility of building the system on Windows Subsystem for
> Linux (WSL), the build fails with the following errors:
> [ERROR] Errors:
> [ERROR] TestSchemaOneBackwardsCompatibility.testBlockIteration:180->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testBlockIteration:180->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testDelete:279->makeContainerSet:575->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testDelete:279->makeContainerSet:575->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testDirectTableIterationDisabled:155->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testDirectTableIterationDisabled:155->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadBlockData:409->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadBlockData:409->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletedBlockChunkInfo:352->makeContainerSet:575->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletedBlockChunkInfo:352->makeContainerSet:575->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletedBlocks:529->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletedBlocks:529->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletingBlockData:455->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadDeletingBlockData:455->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadMetadata:510->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadMetadata:510->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadWithMetadata:229->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadWithMetadata:229->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadWithoutMetadata:247->newKvData:619 » StorageContainer
> [ERROR] TestSchemaOneBackwardsCompatibility.testReadWithoutMetadata:247->newKvData:619 » StorageContainer
> [ERROR] TestContainerReader.testMultipleContainerReader:387 » NullPointer
> [ERROR] TestOzoneContainer.testBuildContainerMap:172 » DiskOutOfSpace No
> storage locat...
>
> The error {{DiskOutOfSpace: No storage locations configured}} suggests an
> issue related to storage configuration, but it is unclear whether this
> problem is specific to the WSL environment or a configuration issue.
> Attachment build.log is the full log file from "mvn clean install".
> Would appreciate any insights into whether this is caused by WSL or if it
> could be a configuration issue that also affects Linux systems.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]