Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4450#discussion_r29112635
--- Diff: core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -113,11 +114,21 @@ private[spark] class ExternalSorter[K, V, C](
     if (shouldPartition) partitioner.get.getPartition(key) else 0
   }
+  private val metaInitialRecords = conf.getInt("spark.shuffle.sort.metaInitialRecords", 256)
+  private val kvChunkSize = conf.getInt("spark.shuffle.sort.kvChunkSize", 8192)
+  private val useSerializedPairBuffer =
+    !ordering.isDefined &&
+      conf.getBoolean("spark.shuffle.sort.serializeMapOutputs", true) &&
+      conf.get("spark.serializer", null) == classOf[KryoSerializer].getName
--- End diff ---
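
For context, a minimal sketch of the SparkConf settings under which `useSerializedPairBuffer` would end up true. The configuration keys come from the diff above; the surrounding setup is purely illustrative:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoSerializer

// Illustrative settings: Kryo as the serializer and serialized map outputs
// left at their default of true. With no ordering defined on the sorter,
// the condition computed in the diff above would then hold.
val sparkConf = new SparkConf()
  .set("spark.serializer", classOf[KryoSerializer].getName)
  .set("spark.shuffle.sort.serializeMapOutputs", "true")
```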
We should probably check here that there is not a user Kryo registrator that has set
autoReset to false. Otherwise, the serialized map output path will not work. It's tricky
because Kryo does not expose a getter for the autoReset field. One idea would be to force
autoReset to true when we instantiate Kryo... but that could regress behavior for users
who were disabling it as an optimization. Maybe we should just make a "best effort"
attempt to use reflection to access the `autoReset` field of the `Kryo` object and throw
an exception if it is set to `false` and `spark.shuffle.sort.serializeMapOutputs` is set
to true.
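
A minimal sketch of that reflection-based check (the helper names and call site are illustrative, not what the PR does; it assumes Kryo's private field is still named `autoReset`):

```scala
import com.esotericsoftware.kryo.Kryo
import scala.util.control.NonFatal

// Best-effort read of Kryo's private `autoReset` flag via reflection.
// Returns None if the field cannot be accessed (e.g. renamed in a future Kryo release).
def tryGetAutoReset(kryo: Kryo): Option[Boolean] = {
  try {
    val field = classOf[Kryo].getDeclaredField("autoReset")
    field.setAccessible(true)
    Some(field.getBoolean(kryo))
  } catch {
    case NonFatal(_) => None
  }
}

// Hypothetical call site: fail fast when serialized map outputs are requested
// but a user registrator has turned autoReset off.
def checkAutoReset(kryo: Kryo, serializeMapOutputs: Boolean): Unit = {
  if (serializeMapOutputs && tryGetAutoReset(kryo).contains(false)) {
    throw new IllegalStateException(
      "spark.shuffle.sort.serializeMapOutputs requires Kryo autoReset to be enabled, " +
        "but a registered KryoRegistrator appears to have disabled it")
  }
}
```

The try/catch keeps this best-effort: if the field cannot be read, the check is silently skipped rather than breaking the shuffle.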