KarmaGYZ commented on a change in pull request #18360:
URL: https://github.com/apache/flink/pull/18360#discussion_r790500376



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStore.java
##########
@@ -242,23 +249,41 @@ public void close() throws IOException {
     // --------------------------------------------------------------
 
     private int calculateSize(JobID jobId, ExecutionGraphInfo serializableExecutionGraphInfo) {
-        final File executionGraphInfoFile = getExecutionGraphFile(jobId);
+        if (flushToDisk) {
+            final File executionGraphInfoFile = getExecutionGraphFile(jobId);
 
-        if (executionGraphInfoFile.exists()) {
-            return Math.toIntExact(executionGraphInfoFile.length());
+            if (executionGraphInfoFile.exists()) {
+                return Math.toIntExact(executionGraphInfoFile.length());
+            }
         } else {
-            LOG.debug(
-                    "Could not find execution graph information file for {}. Estimating the size instead.",
-                    jobId);
-            final ArchivedExecutionGraph serializableExecutionGraph =
-                    serializableExecutionGraphInfo.getArchivedExecutionGraph();
-            return serializableExecutionGraph.getAllVertices().size() * 1000
-                    + serializableExecutionGraph.getAccumulatorsSerialized().size() * 1000;
+            ByteArrayOutputStream bos = new ByteArrayOutputStream();
+            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
+                oos.writeObject(serializableExecutionGraphInfo);
+                oos.flush();
+                return bos.size();
+            } catch (Exception e) {
+                LOG.warn("Calculate graph info size failed", e);
+            }
         }
+
+        LOG.debug(
+                "Could not find execution graph information file for {}. Estimating the size instead.",

Review comment:
       ```suggestion
                   "Could not find execution graph information file for {} or failed to calculate the exact size. Estimating the size instead.",
   ```
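The approach under discussion, serializing the info object and measuring the byte count when entries are not flushed to disk, can be sketched in isolation as follows. This is a minimal illustration, not Flink code: `SampleInfo` is a hypothetical stand-in for `ExecutionGraphInfo`, and `calculateSerializedSize` is an assumed helper name.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

/**
 * Minimal sketch of the size-calculation idea in the diff above: when the
 * store keeps entries in memory, the footprint of an entry can be
 * approximated by serializing it and counting the resulting bytes.
 */
public class SerializedSizeSketch {

    /** Hypothetical stand-in for ExecutionGraphInfo. */
    static class SampleInfo implements Serializable {
        private static final long serialVersionUID = 1L;
        final String jobName;

        SampleInfo(String jobName) {
            this.jobName = jobName;
        }
    }

    /** Returns the serialized size in bytes, or -1 if serialization fails. */
    static int calculateSerializedSize(Serializable info) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(info);
            oos.flush();
            return bos.size();
        } catch (IOException e) {
            // Mirrors the PR's intent: fall back to an estimate rather than fail.
            return -1;
        }
    }

    public static void main(String[] args) {
        int size = calculateSerializedSize(new SampleInfo("example-job"));
        System.out.println("serialized size: " + size + " bytes");
    }
}
```

Note the trade-off the reviewer is touching on: measuring a file on disk is exact, while serializing in memory costs CPU and a temporary buffer, so a fallback estimate on failure keeps the store usable.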

##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStore.java
##########
@@ -74,6 +76,8 @@
 
     private final Thread shutdownHook;
 
+    private final boolean flushToDisk;

Review comment:
       Maybe we should update the javadoc of `FileExecutionGraphInfoStore` to describe that the flushing operation can be controlled by a configuration option.
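One possible wording for such a javadoc update, purely as a sketch (the concrete option name is not visible in this hunk, so none is named here):

```java
/**
 * Store for {@link ExecutionGraphInfo} instances. Depending on configuration,
 * entries are either flushed to files in the configured storage directory or
 * kept purely in memory; in the in-memory case the entry size is estimated by
 * serializing the {@link ExecutionGraphInfo}.
 */
public class FileExecutionGraphInfoStore { ... }
```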

##########
File path: flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
##########
@@ -284,14 +301,18 @@ public void testCacheLoading() throws IOException {
 
             final File storageDirectory = executionGraphInfoStore.getStorageDir();
 
-            assertThat(
-                    storageDirectory.listFiles().length,
-                    Matchers.equalTo(executionGraphInfos.size()));
-
-            for (ExecutionGraphInfo executionGraphInfo : executionGraphInfos) {
+            if (flushToDisk) {
                 assertThat(
-                        executionGraphInfoStore.get(executionGraphInfo.getJobId()),
-                        matchesPartiallyWith(executionGraphInfo));
+                        storageDirectory.listFiles().length,
+                        Matchers.equalTo(executionGraphInfos.size()));
+                for (ExecutionGraphInfo executionGraphInfo : executionGraphInfos) {
+                    assertThat(
+                            executionGraphInfoStore.get(executionGraphInfo.getJobId()),
+                            matchesPartiallyWith(executionGraphInfo));
+                            matchesPartiallyWith(executionGraphInfo));
+                }
+            } else {

Review comment:
       Maybe also update the description of this test so it covers the non-flushing case.
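For the `else` branch introduced here, the expectation is presumably that nothing was written to the storage directory when flushing is disabled. A standalone sketch of that check in plain Java (illustrative only, not the actual `FileExecutionGraphInfoStoreTest`):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

/**
 * Standalone sketch of the non-flushing expectation: when the store keeps
 * entries in memory only, its storage directory should remain empty.
 */
public class EmptyStorageDirSketch {

    /** Returns true if the directory exists and contains no files. */
    static boolean storageDirIsEmpty(File storageDirectory) {
        File[] files = storageDirectory.listFiles();
        return files != null && files.length == 0;
    }

    public static void main(String[] args) throws IOException {
        File storageDir = Files.createTempDirectory("graph-info-store").toFile();
        // No entries were flushed, so the directory is expected to be empty.
        System.out.println("storage dir empty: " + storageDirIsEmpty(storageDir));
    }
}
```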




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]