arunpandianp commented on code in PR #34546:
URL: https://github.com/apache/beam/pull/34546#discussion_r2029494437


##########
runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/windmill/client/commits/StreamingEngineWorkCommitter.java:
##########
@@ -174,19 +175,18 @@ private void streamingCommitLoop() {
           continue;
         }
 
-        try (CloseableStream<CommitWorkStream> closeableCommitStream =
-                commitWorkStreamFactory.get();
-            CommitWorkStream.RequestBatcher batcher = closeableCommitStream.stream().batcher()) {
+        try (CommitWorkStream.RequestBatcher batcher = closeableCommitStream.stream().batcher()) {
           if (!tryAddToCommitBatch(initialCommit, batcher)) {
             throw new AssertionError("Initial commit on flushed stream should always be accepted.");
           }
           // Batch additional commits to the stream and possibly make an un-batched commit the
           // next initial commit.
           initialCommit = expandBatch(batcher);
-        } catch (Exception e) {
-          LOG.error("Error occurred sending commits.", e);

Review Comment:
   Stream timeouts were getting caught here, but now they will not be caught. I'm wondering whether we should handle this differently: is it safe to suppress these exceptions?
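
   If the intent is just to keep the old log-and-continue behavior, one option would be to move the catch onto the new try-with-resources. This is only a sketch reusing the names already in this diff, and whether swallowing the timeout here is actually safe is exactly the open question above:

   ```java
   try (CommitWorkStream.RequestBatcher batcher = closeableCommitStream.stream().batcher()) {
     if (!tryAddToCommitBatch(initialCommit, batcher)) {
       throw new AssertionError("Initial commit on flushed stream should always be accepted.");
     }
     // Batch additional commits to the stream and possibly make an un-batched commit the
     // next initial commit.
     initialCommit = expandBatch(batcher);
   } catch (Exception e) {
     // A stream timeout would still land here and be logged as before, instead of
     // propagating out of streamingCommitLoop().
     LOG.error("Error occurred sending commits.", e);
   }
   ```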


