kerneltime commented on code in PR #7034:
URL: https://github.com/apache/ozone/pull/7034#discussion_r1705850026
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java:
##########
@@ -156,24 +184,19 @@ private static void writeData(ChunkBuffer data, String
filename,
private static long writeDataToFile(File file, ChunkBuffer data,
long offset, boolean sync) {
final Path path = file.toPath();
- try {
- return processFileExclusively(path, () -> {
Review Comment:
You can delete `processFileExclusively` code as well.
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java:
##########
@@ -91,10 +95,34 @@ public final class ChunkUtils {
StandardOpenOption.READ
));
public static final FileAttribute<?>[] NO_ATTRIBUTES = {};
+ public static final int DEFAULT_FILE_LOCK_STRIPED_SIZE = 512;
+ private static Striped<ReadWriteLock> fileStripedLock =
+ Striped.readWriteLock(DEFAULT_FILE_LOCK_STRIPED_SIZE);
/** Never constructed. **/
private ChunkUtils() {
+ }
+
+ @VisibleForTesting
+ public static void setStripedLock(Striped<ReadWriteLock> stripedLock) {
+ fileStripedLock = stripedLock;
+ }
+
+ @VisibleForTesting
+ public static void clearStripedLock() {
+ fileStripedLock = Striped.readWriteLock(DEFAULT_FILE_LOCK_STRIPED_SIZE);
+ }
+
+ private static ReentrantReadWriteLock getFileLock(Path filePath) {
+ return (ReentrantReadWriteLock) fileStripedLock.get(filePath);
Review Comment:
```suggestion
private static ReadWriteLock getFileLock(Path filePath) {
return fileStripedLock.get(filePath);
```
##########
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/helpers/TestChunkUtils.java:
##########
@@ -124,6 +126,8 @@ void concurrentReadOfSameFile() throws Exception {
executor.shutdownNow();
}
assertFalse(failed.get());
+
+ ChunkUtils.clearStripedLock();
Review Comment:
Also, to test isolation, it would make sense to have multiple writers writing
different data at incrementing offsets while a single reader thread checks
whether it can observe partial or corrupted writes.
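The suggested test could be sketched roughly as follows. This is a minimal, self-contained sketch, not the PR's actual test: it uses a single plain `ReentrantReadWriteLock` instead of the PR's Guava `Striped` lock, runs the reader after the writers finish for determinism, and all class and constant names here are hypothetical.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriterIsolationSketch {
  static final int CHUNK = 64;
  static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

  public static void main(String[] args) throws Exception {
    Path file = Files.createTempFile("chunk", ".dat");
    int writers = 4;
    ExecutorService pool = Executors.newFixedThreadPool(writers);
    CountDownLatch done = new CountDownLatch(writers);
    // Each writer fills its own chunk-sized region with a distinct byte value,
    // holding the write lock so no reader can see a half-written region.
    for (int i = 0; i < writers; i++) {
      final int id = i;
      pool.submit(() -> {
        byte[] pattern = new byte[CHUNK];
        Arrays.fill(pattern, (byte) (id + 1));
        LOCK.writeLock().lock();
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
          ch.write(ByteBuffer.wrap(pattern), (long) id * CHUNK);
        } catch (Exception e) {
          throw new RuntimeException(e);
        } finally {
          LOCK.writeLock().unlock();
          done.countDown();
        }
        return null;
      });
    }
    done.await();
    pool.shutdown();
    // Reader: every chunk must be uniform; a mixed chunk means a torn write.
    LOCK.readLock().lock();
    try {
      byte[] all = Files.readAllBytes(file);
      for (int c = 0; c < writers; c++) {
        for (int b = 0; b < CHUNK; b++) {
          if (all[c * CHUNK + b] != all[c * CHUNK]) {
            throw new AssertionError("torn write in chunk " + c);
          }
        }
      }
    } finally {
      LOCK.readLock().unlock();
    }
    System.out.println("all chunks consistent");
    Files.deleteIfExists(file);
  }
}
```

A stronger version of this test would run the reader concurrently with the writers, repeatedly acquiring the read lock and asserting that every non-zero region it sees is uniform.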
##########
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/helpers/TestChunkUtils.java:
##########
@@ -124,6 +126,8 @@ void concurrentReadOfSameFile() throws Exception {
executor.shutdownNow();
}
assertFalse(failed.get());
+
+ ChunkUtils.clearStripedLock();
Review Comment:
Instead of clearing the striped lock, add a check that verifies the read and
write locks have been released.
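Assuming the striped lock's entries are `ReentrantReadWriteLock` instances (as the cast in `getFileLock` implies), such a post-test check could look roughly like this standalone sketch; the diagnostic methods used are standard `ReentrantReadWriteLock` API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockReleaseCheckSketch {
  public static void main(String[] args) {
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Simulate a test body that takes and releases the read lock.
    lock.readLock().lock();
    lock.readLock().unlock();

    // After the test body finishes, no readers or writer should remain.
    boolean released = lock.getReadLockCount() == 0 && !lock.isWriteLocked();
    System.out.println("released=" + released);
  }
}
```

In the actual test this assertion would run against the lock returned by `getFileLock` for the file under test, replacing the `ChunkUtils.clearStripedLock()` call.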
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java:
##########
@@ -91,10 +95,34 @@ public final class ChunkUtils {
StandardOpenOption.READ
));
public static final FileAttribute<?>[] NO_ATTRIBUTES = {};
+ public static final int DEFAULT_FILE_LOCK_STRIPED_SIZE = 512;
Review Comment:
I think it is OK to make the default a bit larger as well, e.g. `2048`.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]