zhuzhurk commented on code in PR #24747:
URL: https://github.com/apache/flink/pull/24747#discussion_r1590908511
##########
flink-filesystems/flink-oss-fs-hadoop/src/main/java/org/apache/flink/fs/osshadoop/writer/OSSCommitter.java:
##########
@@ -76,7 +76,7 @@ public void commitAfterRecovery() throws IOException {
        } catch (Exception e) {
            LOG.info(
                    "Failed to commit after recovery {} with multi-part upload ID {}. "
-                           + "exception {}",
+                           + "exception",

Review Comment:
   Maybe just removing "exception..."

##########
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/checkpointing/BarrierAlignmentUtilTest.java:
##########
@@ -63,7 +63,7 @@ void testDelayableTimerNotHiddenException() throws Exception {
        timerService.advance(delay.toMillis());

        assertThatThrownBy(mailboxProcessor::runMailboxStep)
-               .as("BarrierAlignmentUtil.DelayableTimer should not hidden exception")
+               .as("BarrierAlignmentUtil.DelayableTimer should not hide exception")

Review Comment:
   exception -> exceptions

##########
flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobClient.java:
##########
@@ -130,7 +130,13 @@ static void downloadFromBlobServer(
            throws IOException {

        final byte[] buf = new byte[BUFFER_SIZE];

-       LOG.info("Downloading {}/{} from {}", jobId, blobKey, serverAddress);
+       LOG.info(
+               "Downloading {}/{} from {} and store it under {} (retry {})",
+               jobId,
+               blobKey,
+               serverAddress,
+               localJarFile.getAbsolutePath(),

Review Comment:
   My gut feeling is that this path is a bit too detailed to be in `INFO` logs. Could you elaborate on the scenarios in which you find it essential to log it?

##########
flink-clients/src/main/java/org/apache/flink/client/deployment/application/executors/EmbeddedExecutor.java:
##########
@@ -163,7 +163,7 @@ private static CompletableFuture<JobID> submitJob(
            final Time rpcTimeout) {
        checkNotNull(jobGraph);

-       LOG.info("Submitting Job with JobId={}.", jobGraph.getJobID());
+       LOG.info("Submitting job '{}' ({}).", jobGraph.getName(), jobGraph.getJobID());

Review Comment:
   The job submission log of application mode is already redundant.
   Therefore I prefer not to complicate it further. e.g.
   ```
   2024-05-06 20:19:20,853 [flink-akka.actor.default-dispatcher-16] INFO org.apache.flink.client.deployment.application.executors.EmbeddedExecutor [] - Job 2214f37f6f124fadb641e22182325897 is submitted.
   2024-05-06 20:19:20,854 [flink-akka.actor.default-dispatcher-16] INFO org.apache.flink.client.deployment.application.executors.EmbeddedExecutor [] - Submitting Job with JobId=2214f37f6f124fadb641e22182325897.
   2024-05-06 20:19:21,141 [flink-akka.actor.default-dispatcher-5] INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Received JobGraph submission 'Flink Streaming Job' (2214f37f6f124fadb641e22182325897).
   2024-05-06 20:19:21,177 [flink-akka.actor.default-dispatcher-5] INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Submitting job 'Flink Streaming Job' (2214f37f6f124fadb641e22182325897).
   ```

##########
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##########
@@ -1309,10 +1309,10 @@ private ApplicationReport startAppMaster(
                                        + appId);
                        // break ..
                    case RUNNING:
-                       LOG.info("YARN application has been deployed successfully.");
+                       LOG.info("YARN application {} has been deployed successfully.", appId);

Review Comment:
   In my experience using YARN + Flink, this enrichment is not needed because the application id is always close by in the logs. So these changes to `YarnClusterDescriptor` do not seem necessary and can be omitted.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
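Context for the first review comment on `OSSCommitter`: SLF4J treats a trailing `Throwable` argument specially. When the format string has fewer `{}` placeholders than arguments and the last argument is a `Throwable`, the throwable is logged with its full stack trace; with an extra `{}` it merely fills the placeholder via `toString()` and the stack trace is lost. The sketch below is a hypothetical, simplified model of that rule for illustration only (class name `ThrowableArgDemo` and its methods are made up, not part of SLF4J or Flink):

```java
public class ThrowableArgDemo {
    // Count "{}" placeholders in an SLF4J-style format string.
    static int placeholders(String fmt) {
        int count = 0, idx = 0;
        while ((idx = fmt.indexOf("{}", idx)) >= 0) {
            count++;
            idx += 2;
        }
        return count;
    }

    // Simplified model of SLF4J's rule: a trailing Throwable that has no
    // matching "{}" slot is rendered as a stack trace, not substituted
    // into the message.
    static boolean throwableGetsStackTrace(String fmt, Object... args) {
        return args.length > 0
                && args[args.length - 1] instanceof Throwable
                && placeholders(fmt) < args.length;
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("boom");

        // Fixed message from the PR: two placeholders, three arguments,
        // so the exception keeps its stack trace.
        System.out.println(throwableGetsStackTrace(
                "Failed to commit after recovery {} with multi-part upload ID {}. exception",
                "committable", "uploadId", e)); // true

        // Original message: the third "{}" consumes the Throwable, so
        // only e.toString() is printed and the stack trace is lost.
        System.out.println(throwableGetsStackTrace(
                "Failed to commit after recovery {} with multi-part upload ID {}. exception {}",
                "committable", "uploadId", e)); // false
    }
}
```

This is why removing the trailing `{}` (and, per the review comment, arguably the word "exception" too) is the right fix: the throwable passed as the last argument is then logged with its stack trace.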