GitHub user vincent-grosbois commented on the issue:
https://github.com/apache/spark/pull/22024
Hi,
sorry, I haven't run any benchmarks on this.
But this fix solved an Out Of Memory issue for us:
- before this fix, with Spark 2.3 we had OOM issues in specific jobs when
the partition size was large (1.9 GB) and made up of a few big objects, unless
we used the spark.maxRemoteBlockSizeFetchToMem workaround (see the sketch
after this list)
- with this fix our OOMs disappeared, without having to set
spark.maxRemoteBlockSizeFetchToMem (i.e. everything is done in memory, with no
disk spill)
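
For context, spark.maxRemoteBlockSizeFetchToMem is set like any other Spark
configuration; a minimal sketch of what that workaround would look like (the
app name and the 200 MB threshold are illustrative values, not the ones from
our jobs):

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the workaround we no longer need: capping
// spark.maxRemoteBlockSizeFetchToMem so that remote blocks larger than the
// threshold are fetched to disk instead of being buffered in memory.
// The 200 MB value below is purely illustrative.
val spark = SparkSession.builder()
  .appName("large-partition-job") // hypothetical app name
  .config("spark.maxRemoteBlockSizeFetchToMem", 200L * 1024 * 1024)
  .getOrCreate()
```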