[
https://issues.apache.org/jira/browse/SPARK-49788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17890172#comment-17890172
]
Holden Karau commented on SPARK-49788:
--------------------------------------
So the problem with simply saying "Spark is done with your query" is that we
don't know whether the user will reuse the RDD or Dataset later if it's
defined in the global scope (as is often the case with notebooks).
Currently, objects defined in a local scope are eligible for GC and can
get cleaned up at the end of the query, but in my experience people
frequently don't write code that way.
> Add spark.cleaner.ttl functionality for long lived jobs
> -------------------------------------------------------
>
> Key: SPARK-49788
> URL: https://issues.apache.org/jira/browse/SPARK-49788
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 4.0.0
> Reporter: Holden Karau
> Assignee: Holden Karau
> Priority: Major
>
> We should add a TTL (and maybe a threshold?) to clean up shuffle files that
> have been stored for longer than some fixed period of time. This would be
> useful for long-lived jobs with Spark Connect and notebooks, where items may
> never go out of scope and can remain resident longer than needed.
>
> See SPARK-7689 which removed the original TTL cleaner.
>
> To reduce the chance of confusion/difficulty of understanding (which was part
> of the original reason for removing the TTL-based cleaner), we should track
> when each item was last accessed and reset the counter on access, rather than
> using a "raw" TTL, and log on removal.
>
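The last-access reset described in the issue can be sketched roughly as below. This is a minimal, hypothetical illustration of the mechanism (an idle-based TTL with eviction logging), not Spark's actual ContextCleaner or shuffle-service API; all names are made up for the example.

```python
import time

class LastAccessTtlCleaner:
    """Illustrative sketch: evict entries whose *last access* is older
    than the TTL, so any use of an entry resets its clock, and report
    each removal (the issue suggests logging removals)."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._last_access = {}      # entry id -> last-access timestamp

    def touch(self, entry_id):
        # Any read or write of the entry resets its TTL counter,
        # addressing the "raw TTL" confusion from SPARK-7689.
        self._last_access[entry_id] = self.clock()

    def sweep(self):
        # Remove (and report) entries that have been idle longer than the TTL.
        now = self.clock()
        expired = [e for e, t in self._last_access.items()
                   if now - t > self.ttl]
        for e in expired:
            del self._last_access[e]
            print(f"Removed shuffle entry {e}: idle longer than {self.ttl}s")
        return expired
```

For example, with a 10-second TTL, an entry touched at t=0 and again at t=8 survives a sweep at t=12, while an entry last touched at t=0 is evicted, which is exactly the reset-on-access behavior the issue asks for.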
--
This message was sent by Atlassian Jira
(v8.20.10#820010)