[
https://issues.apache.org/jira/browse/SPARK-21501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100006#comment-16100006
]
Thomas Graves commented on SPARK-21501:
---------------------------------------
We want to change it from a # of entries to a size in memory. So the cache is
still enabled but has a max size of, for instance, 200MB. This way you can size
the cache based on the memory available to the YARN NM where the spark shuffle
service runs. If I have a 1GB heap on my NM, this ensures the cache only uses
200MB of it. With a # of entries you can't guarantee it won't use all 1GB of
your heap, because the size of each entry depends on the # of reducers in that
job.
> Spark shuffle index cache size should be memory based
> -----------------------------------------------------
>
> Key: SPARK-21501
> URL: https://issues.apache.org/jira/browse/SPARK-21501
> Project: Spark
> Issue Type: Bug
> Components: Shuffle
> Affects Versions: 2.1.0
> Reporter: Thomas Graves
>
> Right now the spark shuffle service has a cache for index files. It is
> bounded by a # of files cached (spark.shuffle.service.index.cache.entries).
> This can cause problems for jobs with a lot of reducers, because the size of
> each entry scales with the # of reducers.
> We saw an issue with a job that had 170000 reducers; it caused the NM
> running the spark shuffle service to use 700-800MB of memory by itself.
> We should change this cache to be memory based and cap the amount of memory
> it is allowed to use. When I say memory based I mean the cache should have a
> limit of, say, 100MB.
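For scale, a back-of-the-envelope check of the numbers above, assuming each
cached entry holds one 8-byte offset per reducer plus one; the per-entry
layout and the 1024-entry limit are assumptions for illustration:

{code:java}
public class IndexCacheSizing {
  public static void main(String[] args) {
    // Assumption: one cached index entry stores (numReducers + 1) 8-byte offsets.
    long reducers = 170_000;
    long bytesPerEntry = (reducers + 1) * 8;          // ~1.36MB per index file
    long entryLimit = 1024;                           // example entry-count limit
    long worstCase = bytesPerEntry * entryLimit;      // ~1.4GB if the cache fills
    System.out.printf("per entry: %.2f MB, worst case: %.2f GB%n",
        bytesPerEntry / 1e6, worstCase / 1e9);
  }
}
{code}

A few hundred such entries resident at once is consistent with the 700-800MB
observed, and a full cache could exceed a 1GB NM heap entirely.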