mchades commented on code in PR #7695:
URL: https://github.com/apache/gravitino/pull/7695#discussion_r2215751784


##########
api/src/main/java/org/apache/gravitino/job/SparkJobTemplate.java:
##########
@@ -30,6 +30,21 @@
  * Represents a job template for executing Spark applications. This class extends the JobTemplate
  * class and provides functionality specific to Spark job templates, including the class name, jars,
  * files, archives, and configurations required for the Spark job.
+ *
+ * <p>Take a Spark word count job as an example:
+ *
+ * <p>className: "org.apache.spark.examples.JavaWordCount"; executable:
+ * "https://example.com/spark-examples.jar"; arguments: ["{{input_path}}", "{{output_path}}"]

Review Comment:
   So the `executable` is the actual job resource, not the engine (such as Spark, bash, etc.) that executes the job?
   
   If the answer is yes, how does the user specify engine information, such as the Spark version or Spark home?
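For illustration, the word-count template from the Javadoc could be sketched as below. This is a minimal sketch, not the actual SparkJobTemplate API: the constants and the `render` helper are assumptions made for this example; only the className/executable/arguments values come from the quoted Javadoc. It also shows how the `{{...}}` placeholders in `arguments` would be filled in at job-run time.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the word-count example from the Javadoc.
// NOT the real SparkJobTemplate class; names are illustrative only.
public class SparkTemplateSketch {
  // Per the review question: executable points at the job resource (the jar),
  // not at the engine (Spark itself) that runs it.
  static final String CLASS_NAME = "org.apache.spark.examples.JavaWordCount";
  static final String EXECUTABLE = "https://example.com/spark-examples.jar";
  static final List<String> ARGUMENTS = List.of("{{input_path}}", "{{output_path}}");

  // Replace {{placeholder}} tokens with values supplied when the job is run.
  static List<String> render(List<String> args, Map<String, String> params) {
    return args.stream()
        .map(a -> {
          String out = a;
          for (Map.Entry<String, String> e : params.entrySet()) {
            out = out.replace("{{" + e.getKey() + "}}", e.getValue());
          }
          return out;
        })
        .toList();
  }

  public static void main(String[] args) {
    List<String> rendered =
        render(ARGUMENTS, Map.of("input_path", "/data/in.txt", "output_path", "/data/out"));
    System.out.println(rendered); // [/data/in.txt, /data/out]
  }
}
```

Note the sketch has no field for engine information (Spark version, SPARK_HOME), which is exactly the gap the question above is probing.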



##########
core/src/main/java/org/apache/gravitino/Configs.java:
##########
@@ -399,4 +399,11 @@ private Configs() {}
           .stringConf()
          .checkValue(StringUtils::isNotBlank, ConfigConstants.NOT_BLANK_ERROR_MSG)
           .createWithDefault("caffeine");
+
+  public static final ConfigEntry<String> JOB_STAGING_DIR =

Review Comment:
   So this configuration is set at the server level? Should we consider supporting it at the metalake level as well (i.e., each metalake having its own staging dir)?
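The fallback being suggested here could look roughly like the sketch below: resolve a per-metalake staging dir first, then fall back to the server-level default. Everything in it is an assumption for illustration; neither the `job.staging-dir` metalake property key, the resolver, nor the default path exists in the PR.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of metalake-level override with server-level fallback.
// Property key, class, and default value are all assumptions, not PR code.
public class StagingDirResolver {
  // Stand-in for the server-wide JOB_STAGING_DIR config value.
  static final String SERVER_DEFAULT = "/tmp/gravitino/jobs";

  // metalakeProps models a single metalake's own properties map.
  static String resolve(Map<String, String> metalakeProps) {
    return Optional.ofNullable(metalakeProps.get("job.staging-dir")) // assumed key
        .orElse(SERVER_DEFAULT);
  }

  public static void main(String[] args) {
    // Metalake that sets its own staging dir wins over the server default.
    System.out.println(resolve(Map.of("job.staging-dir", "/warehouse/ml1/staging")));
    // Metalake without the property falls back to the server-level value.
    System.out.println(resolve(Map.of())); // /tmp/gravitino/jobs
  }
}
```

The design question then reduces to where the per-metalake value lives (metalake properties vs. a scoped server config) and which level wins when both are set.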



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@gravitino.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
