the-other-tim-brown commented on code in PR #12982:
URL: https://github.com/apache/hudi/pull/12982#discussion_r2025781303
##########
hudi-common/src/main/java/org/apache/hudi/common/table/view/RocksDbBasedFileSystemView.java:
##########
@@ -269,41 +269,51 @@ protected Option<Pair<String, CompactionOperation>> getPendingLogCompactionOpera
   @Override
   protected boolean isPartitionAvailableInStore(String partitionPath) {
-    String lookupKey = schemaHelper.getKeyForPartitionLookup(partitionPath);
-    Serializable obj = rocksDB.get(schemaHelper.getColFamilyForStoredPartitions(), lookupKey);
-    return obj != null;
+    try {
+      readLock.lock();
+      String lookupKey = schemaHelper.getKeyForPartitionLookup(partitionPath);
+      Serializable obj = rocksDB.get(schemaHelper.getColFamilyForStoredPartitions(), lookupKey);
+      return obj != null;
+    } finally {
+      readLock.unlock();
+    }
   }

   @Override
   protected void storePartitionView(String partitionPath, List<HoodieFileGroup> fileGroups) {
-    LOG.info("Resetting and adding new partition ({}) to ROCKSDB based file-system view at {}, Total file-groups={}",
-        partitionPath, config.getRocksdbBasePath(), fileGroups.size());
-
-    String lookupKey = schemaHelper.getKeyForPartitionLookup(partitionPath);
-    rocksDB.delete(schemaHelper.getColFamilyForStoredPartitions(), lookupKey);
-
-    // First delete partition views
-    rocksDB.prefixDelete(schemaHelper.getColFamilyForView(),
-        schemaHelper.getPrefixForSliceViewByPartition(partitionPath));
-    rocksDB.prefixDelete(schemaHelper.getColFamilyForView(),
-        schemaHelper.getPrefixForDataFileViewByPartition(partitionPath));
-
-    // Now add them
-    fileGroups.forEach(fg ->
-        rocksDB.writeBatch(batch ->
-            fg.getAllFileSlicesIncludingInflight().forEach(fs -> {
-              rocksDB.putInBatch(batch, schemaHelper.getColFamilyForView(), schemaHelper.getKeyForSliceView(fg, fs), fs);
-              fs.getBaseFile().ifPresent(df ->
-                  rocksDB.putInBatch(batch, schemaHelper.getColFamilyForView(), schemaHelper.getKeyForDataFileView(fg, fs), df)
-              );
-            })
-        )
-    );
+    try {
+      writeLock.lock();
Review Comment:
The implementation I had was not correct. A correct implementation will require read/write locks similar to this, but that forces every instance of the disk map to take these locks, even during compaction, where there is no multi-threaded access.
The read lock is required when reading, but we can move it even closer to the point where the store is accessed, to minimize repeated code and reduce the risk of new methods forgetting these checks.
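As a rough illustration of that suggestion, here is a minimal sketch (not the PR's actual code) of pushing the locking into two choke-point helpers so every store access goes through them. The class name, the `withReadLock`/`withWriteLock` helpers, and the `lockingEnabled` opt-out flag for single-threaded callers such as compaction are all hypothetical:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

public abstract class LockGuardedDiskMapBase {

  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
  // Hypothetical flag: single-threaded callers (e.g. compaction) could opt out
  // of locking so they do not pay for synchronization they don't need.
  private final boolean lockingEnabled;

  protected LockGuardedDiskMapBase(boolean lockingEnabled) {
    this.lockingEnabled = lockingEnabled;
  }

  // Single choke point for reads: every store read funnels through here,
  // so a newly added method cannot forget to take the read lock.
  protected <T> T withReadLock(Supplier<T> storeRead) {
    if (!lockingEnabled) {
      return storeRead.get();
    }
    rwLock.readLock().lock();
    try {
      return storeRead.get();
    } finally {
      rwLock.readLock().unlock();
    }
  }

  // Matching choke point for mutations of the store.
  protected void withWriteLock(Runnable storeWrite) {
    if (!lockingEnabled) {
      storeWrite.run();
      return;
    }
    rwLock.writeLock().lock();
    try {
      storeWrite.run();
    } finally {
      rwLock.writeLock().unlock();
    }
  }
}
```

With helpers like these, a method such as `isPartitionAvailableInStore` would shrink to a one-line read wrapped in `withReadLock(...)`, and the try/finally boilerplate in the diff above would live in exactly one place.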
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]