errose28 commented on code in PR #7943:
URL: https://github.com/apache/ozone/pull/7943#discussion_r1970107114
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java:
##########
@@ -98,6 +98,82 @@ public long putBlock(Container container, BlockData data,
data, endOfBlock);
}
+ @Override
+ public long putBlockForClosedContainer(Container container, BlockData data, boolean overwriteBcsId)
Review Comment:
Why do we need a flag to overwrite the BCSID? Shouldn't it always be an
automatic greater-than check?
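For illustration, a sketch of what an unconditional greater-than check could
look like with the flag removed (all names are taken from the quoted diff; the
surrounding batch and variables are assumed to be in scope):

```java
// Sketch only: always advance the container BCSID when the incoming
// block carries a strictly greater one, with no caller-supplied flag.
long bcsId = data.getBlockCommitSequenceId();
long containerBCSId = containerData.getBlockCommitSequenceId();
if (bcsId > containerBCSId) {
  // Persist the higher BCSID in the same batch as the block data.
  db.getStore().getMetadataTable()
      .putWithBatch(batch, containerData.getBcsIdKey(), bcsId);
}
```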
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java:
##########
@@ -367,6 +443,19 @@ public List<BlockData> listBlock(Container container, long startLocalID, int
}
}
+ @Override
+ public boolean blockExists(Container container, BlockID blockID) throws IOException {
Review Comment:
This is unused
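If the intent is to keep it, one option (purely a sketch, assuming this wiring
is wanted) would be to reuse it from `putBlockForClosedContainer` in place of
the inline existence check there:

```java
// Hypothetical call site: replace the inline
// getBlockDataTable().get(...) == null check with the new helper when
// deciding whether the block count needs to be incremented.
boolean incrBlockCount = !blockExists(container, data.getBlockID());
```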
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java:
##########
@@ -98,6 +98,82 @@ public long putBlock(Container container, BlockData data,
data, endOfBlock);
}
+ @Override
+ public long putBlockForClosedContainer(Container container, BlockData data, boolean overwriteBcsId)
+ throws IOException {
+ Preconditions.checkNotNull(data, "BlockData cannot be null for put " +
+ "operation.");
+ Preconditions.checkState(data.getContainerID() >= 0, "Container Id " +
+ "cannot be negative");
+
+ KeyValueContainerData containerData = (KeyValueContainerData) container.getContainerData();
+
+ // We are not locking the key manager since RocksDB serializes all actions
+ // against a single DB. We rely on DB level locking to avoid conflicts.
+ try (DBHandle db = BlockUtils.getDB(containerData, config)) {
+ // This is a post condition that acts as a hint to the user.
+ // Should never fail.
+ Preconditions.checkNotNull(db, DB_NULL_ERR_MSG);
+
+ long bcsId = data.getBlockCommitSequenceId();
+ long containerBCSId = containerData.getBlockCommitSequenceId();
+
+ // Check if the block is already present in the DB of the container to determine whether
+ // the blockCount is already incremented for this block in the DB or not.
+ long localID = data.getLocalID();
+ boolean incrBlockCount = false;
+
+ // update the blockData as well as BlockCommitSequenceId here
+ try (BatchOperation batch = db.getStore().getBatchHandler()
+ .initBatchOperation()) {
+ // If block exists in cache, blockCount should not be incremented.
+ if (db.getStore().getBlockDataTable().get(containerData.getBlockKey(localID)) == null) {
+ // Block does not exist in DB => blockCount needs to be
+ // incremented when the block is added into DB.
+ incrBlockCount = true;
+ }
+
+ db.getStore().getBlockDataTable().putWithBatch(batch, containerData.getBlockKey(localID), data);
+ if (overwriteBcsId && bcsId > containerBCSId) {
+ db.getStore().getMetadataTable().putWithBatch(batch, containerData.getBcsIdKey(), bcsId);
+ }
+
+ // Set bytes used. This value is updated for every write but only
+ // committed on every put block, so that when the datanode is up, the
+ // disk space used by a container is computed from committed block
+ // lengths only, and on restart only the blocks committed to the DB
+ // are used to compute bytes used. This keeps the current behavior and
+ // avoids a DB write during the write chunk operation.
+ // Write UTs for this
Review Comment:
This is a TODO, right?
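If so, an explicit marker would keep it discoverable, e.g. (wording is only a
suggestion):

```java
// TODO: add unit tests covering the bytes-used accounting in
// putBlockForClosedContainer.
```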
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java:
##########
@@ -98,6 +98,82 @@ public long putBlock(Container container, BlockData data,
data, endOfBlock);
}
+ @Override
+ public long putBlockForClosedContainer(Container container, BlockData data, boolean overwriteBcsId)
+ throws IOException {
+ Preconditions.checkNotNull(data, "BlockData cannot be null for put " +
+ "operation.");
+ Preconditions.checkState(data.getContainerID() >= 0, "Container Id " +
+ "cannot be negative");
+
+ KeyValueContainerData containerData = (KeyValueContainerData) container.getContainerData();
+
+ // We are not locking the key manager since RocksDB serializes all actions
+ // against a single DB. We rely on DB level locking to avoid conflicts.
+ try (DBHandle db = BlockUtils.getDB(containerData, config)) {
+ // This is a post condition that acts as a hint to the user.
+ // Should never fail.
+ Preconditions.checkNotNull(db, DB_NULL_ERR_MSG);
+
+ long bcsId = data.getBlockCommitSequenceId();
+ long containerBCSId = containerData.getBlockCommitSequenceId();
+
+ // Check if the block is already present in the DB of the container to determine whether
+ // the blockCount is already incremented for this block in the DB or not.
+ long localID = data.getLocalID();
+ boolean incrBlockCount = false;
+
+ // update the blockData as well as BlockCommitSequenceId here
+ try (BatchOperation batch = db.getStore().getBatchHandler()
+ .initBatchOperation()) {
+ // If block exists in cache, blockCount should not be incremented.
Review Comment:
```suggestion
// If block already exists in the DB, blockCount should not be incremented.
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]