felipecrv commented on PR #35:
URL: https://github.com/apache/arrow-experiments/pull/35#issuecomment-2341035467

   > Because it only supports modern compressors with extremely high 
decompression speed (lz4, zstd).
   
   Browsers already support `zstd` (via `Accept-Encoding`), so you only really 
fall back to the slower `gzip` when nothing else is available. And for a 
JavaScript app running in a browser, the only compression API exposed to 
scripts is gzip, so by applying Zstd at the HTTP layer you still get a modern 
compression algorithm, with the browser decompressing it natively.
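   To make the constraint concrete, here is a minimal sketch of the streaming compression API the web platform gives to scripts (also available in Node 18+). The round-trip helper below is hypothetical, not from the PR; the point is that the API only offers the gzip/deflate family, with no `'zstd'` format, so Zstd has to come from the HTTP `Content-Encoding` layer:

```javascript
// Sketch: scripts only get the gzip/deflate family from
// CompressionStream / DecompressionStream. There is no 'zstd'
// format, so Zstd support must live at the HTTP layer, where the
// browser's network stack decompresses transparently.
async function gzipRoundTrip(bytes) {
  const compressed = new Response(
    new Blob([bytes]).stream().pipeThrough(new CompressionStream('gzip'))
  );
  const packed = new Uint8Array(await compressed.arrayBuffer());
  const restored = new Response(
    new Blob([packed]).stream().pipeThrough(new DecompressionStream('gzip'))
  );
  return new Uint8Array(await restored.arrayBuffer());
}
```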
   
   > There may also be a latency advantage as metadata is not compressed, only 
data.
   
   I'm experimenting with dynamic buffer sizing and frame flushes that align 
with HTTP chunk boundaries to improve latency, so I can ensure metadata 
reaches the client as soon as possible. When the compressed stream comes over 
the network, the cost of decompressing a single block of the Zstd stream to 
read the metadata is negligible relative to the network latency.
   
   Buffer-level compression *can also increase latency*: the writer has to 
buffer each entire compressed buffer to learn its length before it can put 
either on the stream.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
