dongjoon-hyun commented on issue #25072: [SPARK-28294][CORE] Support 
`spark.history.fs.cleaner.maxNum` configuration
URL: https://github.com/apache/spark/pull/25072#issuecomment-509365358
 
 
   For the default value, I agree with your concerns.
   
   This default value, 1M, is big enough not to surprise most users running with the HDFS defaults. But there might be two exceptions.
   
   1. If they have already increased the `dfs.namenode.fs-limits.max-directory-items` configuration; the maximum allowed value is 6 * 1024 * 1024.
   2. If they are using non-HDFS storage, such as S3.
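   For context on the first exception, raising the NameNode directory-item limit is done in `hdfs-site.xml`. A rough sketch (the value here is illustrative, not a recommendation):
   
   ```xml
   <!-- hdfs-site.xml: raise the per-directory entry limit on the NameNode.
        The default is 1048576 (1M); 6291456 is used here only as an example
        of a raised value. Requires a NameNode restart to take effect. -->
   <property>
     <name>dfs.namenode.fs-limits.max-directory-items</name>
     <value>6291456</value>
   </property>
   ```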
   
   So, to avoid surprising any users, including those on S3, do you want me to use `Int.MaxValue` instead? I can change it like that. Technically, that would disable this feature, but `spark.history.fs.cleaner.enabled` itself is disabled by default, too.
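   To make the trade-off concrete, here is a sketch of the relevant History Server settings in `spark-defaults.conf`; the `maxNum` value shown is illustrative of the `Int.MaxValue` option discussed above, not a settled default:
   
   ```
   # spark-defaults.conf (History Server)
   # The cleaner is off by default; maxNum only matters once it is enabled.
   spark.history.fs.cleaner.enabled   true
   # Using Int.MaxValue (2147483647) effectively disables the count-based
   # cleanup, mirroring the behavior before this PR.
   spark.history.fs.cleaner.maxNum    2147483647
   ```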

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 