Github user shahidki31 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23241#discussion_r239516496
  
    --- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
    @@ -197,4 +201,8 @@ class ZStdCompressionCodec(conf: SparkConf) extends CompressionCodec {
         // avoid overhead excessive of JNI call while trying to uncompress small amount of data.
         new BufferedInputStream(new ZstdInputStream(s), bufferSize)
       }
    +
    +  override def zstdEventLogCompressedInputStream(s: InputStream): InputStream = {
    +    new BufferedInputStream(new ZstdInputStream(s).setContinuous(true), bufferSize)
    --- End diff ---
    
    Thanks @srowen.
    
    > Is it actually desirable to not fail on a partial frame? I'm not sure. We shouldn't encounter it elsewhere.
    
    Yes, ideally it shouldn't fail. Even for EventLoggingListener, once the application finishes, the frame is closed (that is why this applies only to a running application). After analyzing the zstd code again, the impact seems small: the choice is between throwing an exception and reading the partial frame, and the latter seems better.
    I can update the code.
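
    A minimal sketch (not part of the diff) of how a still-growing, zstd-compressed event log might be opened with continuous-frame reading; the helper name openInProgressEventLog and the buffer size are illustrative assumptions, not Spark API:

        import java.io.{BufferedInputStream, FileInputStream, InputStream}
        import com.github.luben.zstd.ZstdInputStream

        // Illustrative helper (assumed name): open an event log that may still be
        // appended to, so its last zstd frame can be incomplete. Per the discussion
        // above, setContinuous(true) lets reads proceed over the partially written
        // frame instead of throwing.
        def openInProgressEventLog(path: String, bufferSize: Int = 32 * 1024): InputStream = {
          val raw = new FileInputStream(path)
          val zstd = new ZstdInputStream(raw).setContinuous(true)
          new BufferedInputStream(zstd, bufferSize)
        }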


---
