Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/6176#discussion_r30454371
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -154,8 +154,53 @@ class SnappyCompressionCodec(conf: SparkConf) extends CompressionCodec {

   override def compressedOutputStream(s: OutputStream): OutputStream = {
     val blockSize = conf.getSizeAsBytes("spark.io.compression.snappy.blockSize", "32k").toInt
-    new SnappyOutputStream(s, blockSize)
+    new SnappyOutputStreamWrapper(new SnappyOutputStream(s, blockSize))
   }

   override def compressedInputStream(s: InputStream): InputStream = new SnappyInputStream(s)
 }
+
+/**
+ * Wrapper over [[SnappyOutputStream]] which guards against write-after-close and double-close
+ * issues. See SPARK-7660 for more details. This wrapping can be removed if we upgrade to a version
+ * of snappy-java that contains the fix for https://github.com/xerial/snappy-java/issues/107.
+ */
+private final class SnappyOutputStreamWrapper(os: SnappyOutputStream) extends OutputStream {
+
+  private[this] var closed: Boolean = false
--- End diff ---
Ah, good point. I guess this needs to be volatile in case we're performing
cleanup in another thread. @rxin, if this is volatile, won't that make the
`write()` checks way more expensive?
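
For reference, here is a minimal sketch of what the wrapper could look like with a `@volatile` flag. This is not the committed patch: the method bodies, the guard in each overload, and the exception message are my assumptions for illustration.

```scala
import java.io.{IOException, OutputStream}
import org.xerial.snappy.SnappyOutputStream

// Sketch only: guards against write-after-close and double-close,
// assuming close() may be called from a separate cleanup thread.
private final class SnappyOutputStreamWrapper(os: SnappyOutputStream) extends OutputStream {

  // @volatile makes a close() on another thread visible to the writer,
  // at the cost of a volatile read on every write.
  @volatile private[this] var closed: Boolean = false

  override def write(b: Int): Unit = {
    if (closed) {
      throw new IOException("Stream is closed")  // write-after-close guard
    }
    os.write(b)
  }

  override def write(b: Array[Byte], off: Int, len: Int): Unit = {
    if (closed) {
      throw new IOException("Stream is closed")
    }
    os.write(b, off, len)
  }

  override def flush(): Unit = {
    if (closed) {
      throw new IOException("Stream is closed")
    }
    os.flush()
  }

  override def close(): Unit = {
    if (!closed) {  // double-close guard
      closed = true
      os.close()
    }
  }
}
```

On the cost question: a volatile *read* (which is all `write()` pays here) is generally much cheaper than a volatile write, and should be small next to the Snappy compression work per call. Note also that `close()` as sketched still has a benign race if two threads close concurrently; full exclusion would need synchronization or an `AtomicBoolean`.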