[ https://issues.apache.org/jira/browse/PARQUET-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610260#comment-17610260 ]

ASF GitHub Bot commented on PARQUET-2184:
-----------------------------------------

shangxinli commented on code in PR #993:
URL: https://github.com/apache/parquet-mr/pull/993#discussion_r981762161


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/codec/SnappyCompressor.java:
##########
@@ -32,6 +32,10 @@
  * entire input in setInput and compresses it as one compressed block.
  */
 public class SnappyCompressor implements Compressor {
+  // Double up to an 8 MB write buffer, then switch to 1 MB linear allocation
+  private static final int DOUBLING_ALLOC_THRESH = 8 << 20;

Review Comment:
   using 1 << 23 would be more meaningful





> Improve SnappyCompressor buffer expansion performance
> -----------------------------------------------------
>
>                 Key: PARQUET-2184
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2184
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>    Affects Versions: 1.13.0
>            Reporter: Andrew Baranec
>            Priority: Minor
>
> The existing implementation of SnappyCompressor only allocates enough 
> bytes for the buffer passed into setInput(). This leads to suboptimal 
> performance when write patterns cause repeated buffer expansions. In the 
> worst case it must copy the entire buffer on every single invocation of 
> setInput().
> Instead of allocating a buffer of size current + write length, there should 
> be an expansion strategy that reduces the amount of copying required.
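
A minimal sketch of the growth policy discussed above (double the capacity up to the 8 MB threshold, then grow in 1 MB linear steps). The class and method names here are illustrative only, not the actual parquet-mr implementation:

```java
// Illustrative sketch of a doubling-then-linear buffer growth policy.
// Constants mirror the ones discussed in PR #993; nextCapacity() is a
// hypothetical helper, not part of SnappyCompressor's real API.
public class BufferGrowth {
  private static final int DOUBLING_ALLOC_THRESH = 1 << 23; // 8 MB
  private static final int LINEAR_ALLOC_STEP = 1 << 20;     // 1 MB

  /** Returns a capacity >= required, grown from current per the policy. */
  static int nextCapacity(int current, int required) {
    int cap = Math.max(current, 1);
    while (cap < required) {
      if (cap < DOUBLING_ALLOC_THRESH) {
        // Below the threshold: double, but never overshoot the threshold.
        cap = Math.min(cap * 2, DOUBLING_ALLOC_THRESH);
      } else {
        // At or above the threshold: grow linearly in 1 MB steps.
        cap += LINEAR_ALLOC_STEP;
      }
    }
    return cap;
  }

  public static void main(String[] args) {
    // 1 MB buffer, need one more byte: doubles to 2 MB.
    System.out.println(nextCapacity(1 << 20, (1 << 20) + 1)); // 2097152
    // 8 MB buffer, need one more byte: grows linearly to 9 MB.
    System.out.println(nextCapacity(1 << 23, (1 << 23) + 1)); // 9437184
  }
}
```

Amortized over many setInput() calls, this bounds the number of reallocations to O(log n) below the threshold and one per megabyte above it, rather than one copy per call.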



--
This message was sent by Atlassian Jira
(v8.20.10#820010)