Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/21867#discussion_r205124896
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -731,7 +731,14 @@ private[spark] class BlockManager(
}
if (data != null) {
- return Some(ChunkedByteBuffer.fromManagedBuffer(data, chunkSize))
+ // SPARK-24307 undocumented "escape-hatch" in case there are any issues in converting
+ // to ChunkedByteBuffer, to go back to old code-path. Can be removed post Spark 2.4 if
+ // new path is stable.
+ if (conf.getBoolean("spark.fetchToNioBuffer", false)) {
--- End diff --
sure -- the fetch-to-disk conf is "spark.maxRemoteBlockSizeFetchToMem",
which is why I stuck with just the "spark." prefix. Also, on second thought, I
will make the rest of the name more specific too, since there is lots of
"fetching" this doesn't affect.
how about "spark.network.remoteReadNioBufferConversion"?
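To illustrate the escape-hatch pattern being discussed, here is a minimal, self-contained Scala sketch. It mimics `SparkConf.getBoolean(key, default)` with a plain `Map` so it runs without Spark; the conf key is the proposed rename from this comment, and the two branch labels are hypothetical stand-ins for the old NIO-buffer path and the new `ChunkedByteBuffer` path.

```scala
// Hypothetical sketch of an "escape-hatch" boolean conf, as in the diff above.
// In Spark this would be conf.getBoolean(...) on a SparkConf; here a plain Map
// stands in so the example is self-contained.
object EscapeHatchSketch {
  def getBoolean(conf: Map[String, String], key: String, default: Boolean): Boolean =
    conf.get(key).map(_.toBoolean).getOrElse(default)

  def choosePath(conf: Map[String, String]): String = {
    // Defaults to false, so the new code path is taken unless a user
    // explicitly opts back into the old behavior.
    if (getBoolean(conf, "spark.network.remoteReadNioBufferConversion", false))
      "old NIO-buffer path"
    else
      "new ChunkedByteBuffer path"
  }

  def main(args: Array[String]): Unit = {
    println(choosePath(Map.empty))
    println(choosePath(Map("spark.network.remoteReadNioBufferConversion" -> "true")))
  }
}
```

The key design point is the `false` default: the new path is exercised by everyone, while the old path remains one conf away if a regression surfaces, and the flag (being undocumented) can be deleted later without a deprecation cycle.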
---