pan3793 commented on code in PR #6876:
URL: https://github.com/apache/kyuubi/pull/6876#discussion_r1899883656


##########
docs/deployment/engine_on_kubernetes.md:
##########
@@ -48,6 +48,12 @@ The minimum required configurations are:
 * spark.kubernetes.file.upload.path (path on S3 or HDFS)
 * spark.kubernetes.authenticate.driver.serviceAccountName ([viz ServiceAccount](#serviceaccount))
 
+Vanilla Spark supports neither a rolling nor an expiration mechanism for `spark.kubernetes.file.upload.path`. If you use
+a file system that does not support TTL, e.g. HDFS, an additional cleanup mechanism is needed to prevent the files in
+this directory from growing indefinitely. Since Kyuubi v1.11.0, you can configure `spark.kubernetes.file.upload.path`
+with the placeholders `{{YEAR}}`, `{{MONTH}}` and `{{DAY}}`, and enable `kyuubi.kubernetes.spark.autoCreateFileUploadPath.enabled`
+to let the Kyuubi server automatically create the directory with 777 permission before submitting the Spark application.
+

Review Comment:
   It adds rolling support for `spark.kubernetes.file.upload.path`, for example,
   
   ```
   spark.kubernetes.file.upload.path=hdfs://hadoop-testing/spark-upload-{{YEAR}}{{MONTH}}
   ```
   
   which, over time, produces directories such as
   
   ```
   hdfs://hadoop-testing/spark-upload-202412
   hdfs://hadoop-testing/spark-upload-202501
   ```
   An admin can safely delete `hdfs://hadoop-testing/spark-upload-202412` after 2025-01-01.
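   As a rough illustration only (a sketch of the substitution described above, not Kyuubi's actual implementation; the function name is hypothetical), the placeholder expansion could look like:

   ```python
   from datetime import datetime, timezone

   def expand_upload_path(template: str, now: datetime) -> str:
       # Hypothetical sketch of the {{YEAR}}/{{MONTH}}/{{DAY}} substitution
       # described in the docs; month and day are zero-padded to two digits.
       return (
           template
           .replace("{{YEAR}}", f"{now.year:04d}")
           .replace("{{MONTH}}", f"{now.month:02d}")
           .replace("{{DAY}}", f"{now.day:02d}")
       )

   print(expand_upload_path(
       "hdfs://hadoop-testing/spark-upload-{{YEAR}}{{MONTH}}",
       datetime(2024, 12, 15, tzinfo=timezone.utc),
   ))
   # -> hdfs://hadoop-testing/spark-upload-202412
   ```

   With a monthly template like this, each calendar month gets its own directory, so cleanup reduces to deleting whole directories once the month has passed.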



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@kyuubi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

