GitHub user a-roberts commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14958#discussion_r77662239
  
    --- Diff: core/src/test/scala/org/apache/spark/io/CompressionCodecSuite.scala ---
    @@ -130,4 +130,58 @@ class CompressionCodecSuite extends SparkFunSuite {
         ByteStreams.readFully(concatenatedBytes, decompressed)
         assert(decompressed.toSeq === (0 to 127))
       }
    +
    +  // Based on https://github.com/xerial/snappy-java/blob/60cc0c2e1d1a76ae2981d0572a5164fcfdfba5f1/src/test/java/org/xerial/snappy/SnappyInputStreamTest.java
    +  test("SPARK 17378: snappy-java should handle magic header when reading 
stream") {
    +    val b = new ByteArrayOutputStream()
    +    // Write uncompressed length beginning with -126 (the same with magicheader[0])
    +    b.write(-126) // Can't access magic header[0] as it isn't public, so access this way
    --- End diff --
    
    Yeah, I agree. How about we revert the test case commit here and merge the 1.1.2.6 change itself, since folks want it, and then add an extra robustness test in a later PR if we want one.
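    
    For reference, here is a minimal sketch of what such a follow-up robustness test could look like. The test name, payload size, and byte values are illustrative assumptions drawn from the linked SnappyInputStreamTest, not the reverted commit:
    
        import java.io.ByteArrayInputStream
    
        import com.google.common.io.ByteStreams
        import org.xerial.snappy.{Snappy, SnappyInputStream}
    
        // Illustrative sketch only, not the reverted test.
        test("snappy-java reads raw-compressed data starting with the magic header byte") {
          // 130 uncompressed bytes: Snappy's varint-encoded length is 0x82 0x01, so the
          // first byte of the raw-compressed payload is -126, the same value as
          // magic header[0] in snappy-java's stream format.
          val raw = Array.fill[Byte](130)(42.toByte)
          val compressed = Snappy.compress(raw)
          assert(compressed(0) === (-126).toByte)
    
          // snappy-java 1.1.2.6 should fall back to raw-compressed mode here instead of
          // misreading that first byte as the start of a stream header.
          val in = new SnappyInputStream(new ByteArrayInputStream(compressed))
          val decompressed = new Array[Byte](raw.length)
          ByteStreams.readFully(in, decompressed)
          assert(decompressed.toSeq === raw.toSeq)
        }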

