rkhachatryan commented on code in PR #19441:
URL: https://github.com/apache/flink/pull/19441#discussion_r854959208


##########
flink-dstl/flink-dstl-dfs/src/test/java/org/apache/flink/changelog/fs/ChangelogStorageMetricsTest.java:
##########
@@ -295,6 +346,55 @@ public void close() {
         }
     }
 
+    private static class WaitingMaxAttemptUploader implements StateChangeUploader {
+        private final ConcurrentHashMap<UploadTask, CountDownLatch> remainingAttemptsPerTask;
+        private final int maxAttempts;
+
+        public WaitingMaxAttemptUploader(int maxAttempts) {
+            if (maxAttempts < 1) {
+                throw new IllegalArgumentException("maxAttempts < 0");
+            }
+            this.maxAttempts = maxAttempts;
+            this.remainingAttemptsPerTask = new ConcurrentHashMap<>();
+        }
+
+        @Override
+        public UploadTasksResult upload(Collection<UploadTask> tasks) throws IOException {
+
+            for (UploadTask uploadTask : tasks) {
+                CountDownLatch remainingAttempts = remainingAttemptsPerTask.get(uploadTask);
+                if (remainingAttempts == null) {
+                    remainingAttempts = new CountDownLatch(maxAttempts - 1);
+                    remainingAttemptsPerTask.put(uploadTask, remainingAttempts);
+                } else {
+                    remainingAttempts.countDown();
+                }

Review Comment:
   NIT: can be replaced with
   ```
   remainingAttemptsPerTask
           .computeIfAbsent(uploadTask, ign -> new CountDownLatch(maxAttempts - 1))
           .countDown();
   ```
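   Note, for strict equivalence the latch would likely need to be seeded with `maxAttempts` instead of `maxAttempts - 1`, because `countDown()` would now also run on the very first call for a task (a sketch, assuming the original behaviour of the first attempt not counting down should be kept):
   ```
   // Sketch only: the extra count compensates for the unconditional countDown()
   // on the first call, so the latch still releases after maxAttempts calls.
   remainingAttemptsPerTask
           .computeIfAbsent(uploadTask, ign -> new CountDownLatch(maxAttempts))
           .countDown();
   ```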



##########
flink-dstl/flink-dstl-dfs/src/test/java/org/apache/flink/changelog/fs/ChangelogStorageMetricsTest.java:
##########
@@ -295,6 +346,55 @@ public void close() {
         }
     }
 
+    private static class WaitingMaxAttemptUploader implements StateChangeUploader {
+        private final ConcurrentHashMap<UploadTask, CountDownLatch> remainingAttemptsPerTask;
+        private final int maxAttempts;
+
+        public WaitingMaxAttemptUploader(int maxAttempts) {
+            if (maxAttempts < 1) {
+                throw new IllegalArgumentException("maxAttempts < 0");
+            }
+            this.maxAttempts = maxAttempts;
+            this.remainingAttemptsPerTask = new ConcurrentHashMap<>();
+        }
+
+        @Override
+        public UploadTasksResult upload(Collection<UploadTask> tasks) throws IOException {
+
+            for (UploadTask uploadTask : tasks) {
+                CountDownLatch remainingAttempts = remainingAttemptsPerTask.get(uploadTask);
+                if (remainingAttempts == null) {
+                    remainingAttempts = new CountDownLatch(maxAttempts - 1);
+                    remainingAttemptsPerTask.put(uploadTask, remainingAttempts);
+                } else {
+                    remainingAttempts.countDown();
+                }
+            }
+            for (UploadTask uploadTask : tasks) {
+                CountDownLatch remainingAttempts = remainingAttemptsPerTask.get(uploadTask);
+                try {
+                    remainingAttempts.await();

Review Comment:
   I was thinking that only the 1st attempt should wait, and all the others should fail by throwing an exception. That would be more likely to catch bugs.
   WDYT?
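   A rough sketch of that variant for the body of `upload` (the `firstAttemptSeen` set and `firstAttemptBlocker` latch are illustrative names, not taken from the PR):
   ```
   // Sketch only: the first attempt per task blocks, every retry fails fast.
   for (UploadTask uploadTask : tasks) {
       // firstAttemptSeen would be e.g. ConcurrentHashMap.newKeySet()
       if (!firstAttemptSeen.add(uploadTask)) {
           throw new IOException("unexpected retry attempt for " + uploadTask);
       }
   }
   try {
       // released by the test once it has made its assertions
       firstAttemptBlocker.await();
   } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
       throw new IOException(e);
   }
   ```
   Failing fast on retries would make an unexpected extra attempt surface as a test failure instead of a hang.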



##########
flink-dstl/flink-dstl-dfs/src/test/java/org/apache/flink/changelog/fs/ChangelogStorageMetricsTest.java:
##########
@@ -295,6 +346,55 @@ public void close() {
         }
     }
 
+    private static class WaitingMaxAttemptUploader implements StateChangeUploader {
+        private final ConcurrentHashMap<UploadTask, CountDownLatch> remainingAttemptsPerTask;
+        private final int maxAttempts;
+
+        public WaitingMaxAttemptUploader(int maxAttempts) {
+            if (maxAttempts < 1) {
+                throw new IllegalArgumentException("maxAttempts < 0");
+            }
+            this.maxAttempts = maxAttempts;
+            this.remainingAttemptsPerTask = new ConcurrentHashMap<>();
+        }
+
+        @Override
+        public UploadTasksResult upload(Collection<UploadTask> tasks) throws IOException {
+
+            for (UploadTask uploadTask : tasks) {
+                CountDownLatch remainingAttempts = remainingAttemptsPerTask.get(uploadTask);
+                if (remainingAttempts == null) {
+                    remainingAttempts = new CountDownLatch(maxAttempts - 1);
+                    remainingAttemptsPerTask.put(uploadTask, remainingAttempts);
+                } else {
+                    remainingAttempts.countDown();
+                }
+            }
+            for (UploadTask uploadTask : tasks) {
+                CountDownLatch remainingAttempts = remainingAttemptsPerTask.get(uploadTask);
+                try {
+                    remainingAttempts.await();
+                } catch (InterruptedException e) {
+                    Thread.currentThread().interrupt();

Review Comment:
   Should we throw an exception here (in addition to thread.interrupt)?
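   For example, the catch block could restore the interrupt flag and still surface the failure (a sketch of the suggestion, not the PR code):
   ```
   } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
       // rethrow so an interrupted wait is not silently treated as a completed upload
       throw new IOException("interrupted while waiting for remaining attempts", e);
   }
   ```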


