skonto edited a comment on issue #23546: [SPARK-23153][K8s] Support client 
dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#issuecomment-492182627
 
 
   @srowen there was also a debate about the deletion of the subdir. In my 
view, the user provides it and may want to reuse its contents, because on a 
subsequent submission they may not want to re-upload the jar and can simply 
point to it in the S3 bucket, for example. 
   Only the user knows what to do with it, just as only the user knows when 
to delete a checkpointLocation in streaming, e.g. to start from scratch for 
whatever reason (the exception being temp locations via 
`spark.sql.streaming.forceDeleteTempCheckpointLocation`). 
   Now, generating the names in an automated fashion is possible, but I am not 
sure if that is what @vanzin was suggesting anyway. If that is the problem, 
then why does Spark not handle checkpoint dirs in an automated way? I don't 
see a difference. But again, OK, I will automate it. 
   So is `spark.kubernetes.file.upload.path` + `auto-generated subdir` OK? 
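   For illustration, the auto-generated subdir could be derived like this (a 
minimal sketch; the `spark-upload-<uuid>` naming and the helper function are 
my assumptions for this comment, not a confirmed Spark implementation):

```python
import uuid

def generated_upload_dir(upload_path: str) -> str:
    """Return a unique, collision-free subdirectory under the user-provided
    spark.kubernetes.file.upload.path. The 'spark-upload-' prefix is an
    illustrative assumption."""
    return f"{upload_path.rstrip('/')}/spark-upload-{uuid.uuid4()}"

base = "s3a://my-bucket/spark-uploads"
first = generated_upload_dir(base)
second = generated_upload_dir(base)
# Each submission gets its own subdir under the user-provided path,
# so earlier uploads are never clobbered and can still be pointed to.
print(first.startswith(base))   # True
print(first != second)          # True
```

   This way the user still controls the parent location (and its lifecycle), 
while Spark only owns the per-submission subdirectory it generated.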
    

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
