Xun REN commented on SPARK-21501:

Hi guys,

Could you tell me how to figure out how much memory the NM with the Spark 
shuffle service has used? And how to find out how many reducers a Spark job has used?

I have run into the same problem recently, and I want to get a list of Spark 
jobs sorted by number of reducers.




Xun REN.

> Spark shuffle index cache size should be memory based
> -----------------------------------------------------
>                 Key: SPARK-21501
>                 URL: https://issues.apache.org/jira/browse/SPARK-21501
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle
>    Affects Versions: 2.1.0
>            Reporter: Thomas Graves
>            Assignee: Sanket Reddy
>            Priority: Major
>             Fix For: 2.3.0
> Right now the spark shuffle service has a cache for index files. It is bounded 
> by a # of files cached (spark.shuffle.service.index.cache.entries). This can 
> cause issues if people have a lot of reducers, because the size of each entry 
> fluctuates based on the # of reducers. 
> We saw an issue with a job that had 170000 reducers: it caused the NM with the 
> spark shuffle service to use 700-800MB of memory by itself.
> We should change this cache to be memory based and only allow a certain 
> amount of memory to be used. When I say memory based, I mean the cache should 
> have a limit of, say, 100MB.
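
A memory-bounded version of this cache can be expressed directly with Guava, 
which the shuffle service already uses for the index cache: cap the cache with 
maximumWeight and supply a weigher that reports each entry's size in bytes. 
The sketch below is illustrative only, not the actual patch; IndexCacheSketch, 
IndexInfo, and sizeInBytes() are hypothetical stand-ins for the real 
parsed-index type, and the 100MB cap is the limit suggested above.

import java.io.File;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.Weigher;

public class IndexCacheSketch {
  // Stand-in for the parsed index file; its retained size grows with the
  // number of reducers (one offset per reducer, plus one).
  static class IndexInfo {
    final long[] offsets;
    IndexInfo(int numReducers) { this.offsets = new long[numReducers + 1]; }
    int sizeInBytes() { return offsets.length * Long.BYTES; }
  }

  public static void main(String[] args) throws ExecutionException {
    // Bound the cache by total weight in bytes instead of by entry count,
    // so jobs with many reducers cannot blow past the memory budget.
    LoadingCache<File, IndexInfo> indexCache = CacheBuilder.newBuilder()
        .maximumWeight(100L * 1024 * 1024)      // ~100MB cap on cached bytes
        .weigher(new Weigher<File, IndexInfo>() {
          @Override
          public int weigh(File file, IndexInfo info) {
            return info.sizeInBytes();          // cost charged per entry
          }
        })
        .build(new CacheLoader<File, IndexInfo>() {
          @Override
          public IndexInfo load(File indexFile) {
            // Real code would parse the index file; with 170000 reducers a
            // single entry is already roughly 1.3MB of offsets.
            return new IndexInfo(170000);
          }
        });

    IndexInfo info = indexCache.get(new File("shuffle_0_0_0.index"));
    System.out.println("cached entry bytes: " + info.sizeInBytes());
  }
}

With a weigher in place, a 170000-reducer entry counts as ~1.3MB against the 
budget rather than as just one of N entries, so eviction tracks actual memory 
use instead of file count.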
