Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14958#discussion_r77661887
  
    --- Diff: core/src/test/scala/org/apache/spark/io/CompressionCodecSuite.scala ---
    @@ -130,4 +130,58 @@ class CompressionCodecSuite extends SparkFunSuite {
         ByteStreams.readFully(concatenatedBytes, decompressed)
         assert(decompressed.toSeq === (0 to 127))
       }
    +
    +  // Based on https://github.com/xerial/snappy-java/blob/60cc0c2e1d1a76ae2981d0572a5164fcfdfba5f1/src/test/java/org/xerial/snappy/SnappyInputStreamTest.java
    +  test("SPARK-17378: snappy-java should handle magic header when reading stream") {
    +    val b = new ByteArrayOutputStream()
    +    // Write an uncompressed length whose first byte is -126 (the same as magicHeader[0])
    +    b.write(-126) // Can't access magicHeader[0] directly since it isn't public, so write the raw byte value instead
    --- End diff ---
    
    Hm, this ties the test to an internal detail of Snappy, though. Spark itself doesn't really need to assert that detail in its own test suite.
    
    I feel like this test is just testing snappy-java, which snappy-java's own test suite can cover. I could see instead testing a case at the level of Spark that triggers this bug and verifies it's fixed.
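    
    For illustration, here is a minimal sketch of what such a Spark-level test could look like. It assumes only the existing `CompressionCodec.createCodec` API that `CompressionCodecSuite` already exercises (the suite lives in `org.apache.spark.io`, which matters because `CompressionCodec` is `private[spark]`). The payload is arbitrary and is not guaranteed to produce the -126 length-byte collision, so treat it as the shape of the test rather than a reproduction of the bug:
    
    ```scala
    // Sketch only: round-trip bytes through Spark's snappy codec at the Spark API level.
    import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
    
    import com.google.common.io.ByteStreams
    
    import org.apache.spark.SparkConf
    
    test("SPARK-17378: snappy round trip through Spark's codec API") {
      val codec = CompressionCodec.createCodec(new SparkConf(), "snappy")
      val payload = Array.tabulate[Byte](64 * 1024)(_.toByte) // arbitrary data, not tuned to the -126 case
    
      // Compress through the same code path Spark uses for shuffle/broadcast data.
      val out = new ByteArrayOutputStream()
      val compressed = codec.compressedOutputStream(out)
      compressed.write(payload)
      compressed.close()
    
      // Decompress and verify the bytes survive the round trip.
      val in = codec.compressedInputStream(new ByteArrayInputStream(out.toByteArray))
      assert(ByteStreams.toByteArray(in).toSeq === payload.toSeq)
    }
    ```
    
    Whether this actually exercises the magic-header collision depends on the chunk lengths snappy-java emits, so the payload would need tuning to reproduce the bug deterministically.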

