GitHub user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r118184231
--- Diff: core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -287,4 +287,10 @@ package object config {
     .bytesConf(ByteUnit.BYTE)
     .createWithDefault(100 * 1024 * 1024)

+  private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
+    ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
+      .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
+        "above this threshold. This is to avoid a giant request takes too much memory.")
+      .longConf
+      .createWithDefault(200 * 1024 * 1024)
--- End diff --
shall we use `bytesConf(ByteUnit.MiB)` here?
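
For illustration, a minimal sketch of that suggestion, assuming Spark's byte-size config DSL: `bytesConf(ByteUnit.MiB)` accepts human-readable size strings such as `"200m"`, and a plain-number default is then interpreted in MiB rather than raw bytes.

```scala
// Sketch only: the proposed config rewritten with bytesConf, keeping the
// same effective default (200 MiB == 200 * 1024 * 1024 bytes).
private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
  ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
    .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
      "above this threshold. This is to avoid a giant request takes too much memory.")
    .bytesConf(ByteUnit.MiB)   // parses values like "200m" or "1g"
    .createWithDefault(200)    // interpreted as 200 MiB
```

Besides keeping the unit handling consistent with the other size configs above it, this would let users set the threshold with a size string instead of computing a byte count by hand.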