Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/22628
A default of 0 gets you the Netty default, which is always 2 * the # of
cores, and it ignores io.serverThreads even if you have set it to control the
# of shuffle threads. With a default of 100, the io.serverThreads setting is
actually applied, and if it isn't set, it falls back to 2 * the # of cores. So
having a default of 100 is better here. For example:
io.serverThreads = 10 and spark.shuffle.server.chunkFetchHandlerThreadsPercent=100: the # of threads for chunked fetches is 10.
io.serverThreads = 10 and spark.shuffle.server.chunkFetchHandlerThreadsPercent=0: the # of threads for chunked fetches is 2 * # of cores.
It's better to have a default of 100 to keep the current behavior.
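As an illustration only (this is not the actual Spark/Netty code; the method name, the fallback handling, and the rounding are assumptions), the behavior described above maps to a thread count roughly like this:

```java
// Hypothetical sketch of how chunkFetchHandlerThreadsPercent could map to a
// thread count. Not the real TransportConf implementation; names and rounding
// are assumptions made for this example.
final class ChunkFetchThreadsSketch {

    static int chunkFetchHandlerThreads(int ioServerThreads, int percent) {
        int nettyDefault = 2 * Runtime.getRuntime().availableProcessors();
        if (percent == 0) {
            // A default of 0 falls through to the Netty default and ignores io.serverThreads.
            return nettyDefault;
        }
        // With a non-zero percent, io.serverThreads is honored; if it is unset,
        // fall back to 2 * the # of cores.
        int serverThreads = ioServerThreads > 0 ? ioServerThreads : nettyDefault;
        return Math.max(1, (int) Math.ceil(serverThreads * percent / 100.0));
    }

    public static void main(String[] args) {
        // io.serverThreads = 10, percent = 100 -> 10 threads for chunked fetches.
        System.out.println(chunkFetchHandlerThreads(10, 100));
        // io.serverThreads = 10, percent = 0 -> 2 * # of cores.
        System.out.println(chunkFetchHandlerThreads(10, 0));
    }
}
```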