Abacn commented on code in PR #28621:
URL: https://github.com/apache/beam/pull/28621#discussion_r1344439659


##########
runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java:
##########
@@ -1331,6 +1331,14 @@ public DataflowPipelineJob run(Pipeline pipeline) {
       hooks.modifyEnvironmentBeforeSubmission(newJob.getEnvironment());
     }
 
+    // enable upload_graph when the graph is too large
+    if ((Utf8.encodedLength(newJob.toString()) >= 
CREATE_JOB_REQUEST_LIMIT_BYTES)

Review Comment:
   The calculation of `Utf8.encodedLength(newJob.toString())` could be expensive, especially when the graph is large. Similar calculations include:
   
   - L1349: `DataflowPipelineTranslator.jobToString(newJob).getBytes(UTF_8)` when uploading the graph
   - L1410: `if (Utf8.encodedLength(newJob.toString()) >= CREATE_JOB_REQUEST_LIMIT_BYTES)` when deciding on the error message
   
   We should be able to run the `Job -> utf8 encoded bytes` conversion just once.
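   A minimal sketch of the suggested refactor, outside the real `DataflowRunner` code: serialize the job string to UTF-8 bytes a single time and reuse that array for the size check, the upload path, and the error-message decision. The `jobJson` string and the local `CREATE_JOB_REQUEST_LIMIT_BYTES` constant here are stand-ins for `jobToString(newJob)` and the actual constant in `DataflowRunner`.
   
   ```java
   import java.nio.charset.StandardCharsets;
   
   public class JobSizeCheck {
       // Stand-in for the real CREATE_JOB_REQUEST_LIMIT_BYTES in DataflowRunner.
       static final int CREATE_JOB_REQUEST_LIMIT_BYTES = 10 * 1024 * 1024;
   
       public static void main(String[] args) {
           // Stand-in for DataflowPipelineTranslator.jobToString(newJob).
           String jobJson = "{\"name\":\"example-job\"}";
   
           // Convert Job -> UTF-8 bytes exactly once; every later decision
           // (upload_graph toggle, graph upload, error message) reuses jobBytes.
           byte[] jobBytes = jobJson.getBytes(StandardCharsets.UTF_8);
   
           boolean uploadGraph = jobBytes.length >= CREATE_JOB_REQUEST_LIMIT_BYTES;
           System.out.println("uploadGraph=" + uploadGraph);
           // When uploadGraph is true, the same jobBytes array can be handed to
           // the graph-upload path, avoiding a second Job -> String -> bytes pass.
       }
   }
   ```
   
   Note that `Utf8.encodedLength` (Guava) computes the length without allocating; once the bytes are needed anyway for upload, `getBytes(UTF_8)` followed by `.length` avoids encoding the string twice.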



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
