[
https://issues.apache.org/jira/browse/SPARK-18716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-18716.
-------------------------------
Resolution: Won't Fix
FWIW I tend to agree that deleting big files first is problematic.
> Restrict the disk usage of spark event log.
> --------------------------------------------
>
> Key: SPARK-18716
> URL: https://issues.apache.org/jira/browse/SPARK-18716
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.0.2
> Reporter: Genmao Yu
>
> We've had reports of excessive disk usage by Spark event log files. The
> current implementation has the following drawbacks:
> 1. If the Spark HistoryServer was never started, or has failed, there is no
> opportunity to clean up old event logs at all.
> 2. The Spark HistoryServer cleans event log files based on file age only. If
> applications are submitted constantly, the disk usage accumulated within each
> {{spark.history.fs.cleaner.maxAge}} window can still be very large.
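For context, the age-based cleanup described above is driven by the history server's cleaner settings; a minimal spark-defaults.conf sketch (values are illustrative, not recommendations):

```properties
# Enable periodic cleanup of event logs by the Spark HistoryServer.
spark.history.fs.cleaner.enabled   true
# How often the cleaner checks for expired logs (illustrative value).
spark.history.fs.cleaner.interval  1d
# Event logs older than this are deleted. Age is the only criterion,
# so many applications logging within this window can still fill the disk.
spark.history.fs.cleaner.maxAge    7d
```

Note that this cleaner only runs inside the HistoryServer process, which is exactly drawback 1 above: if the HistoryServer is down, nothing is deleted.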
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)