[ 
https://issues.apache.org/jira/browse/PARQUET-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117811#comment-15117811
 ] 

Ryan Blue commented on PARQUET-460:
-----------------------------------

This would be a useful tool, [~flykobe]. I recently built one as part of Kite, 
but it hasn't been released. The branch is 
[merge-parquet|https://github.com/rdblue/kite/commits/merge-parquet]. I 
committed the Parquet changes in PARQUET-382, which adds the ability to append 
encoded blocks to a Parquet file. You should be able to see how the changes are 
used in both the MR job and the CLI tool. Hopefully that helps with this work. 
Thanks!
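For reference, the append-based merge described above can be sketched against the block-append API that PARQUET-382 added to parquet-mr. This is a hedged sketch, not the actual Kite tool: it assumes all inputs share the same schema, and the class/method names (ParquetFileWriter.appendFile, ParquetFileReader.readFooter) are taken from parquet-mr 1.8.x and may differ in other versions.

```java
// Hedged sketch: binary-concatenate several Parquet files into one output
// file by appending their encoded row groups (no decode/re-encode).
// Assumes parquet-mr 1.8.x and Hadoop on the classpath; all inputs must
// share the same schema.
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.schema.MessageType;

public class ConcatSketch {
  public static void concat(Configuration conf, List<Path> inputs, Path output)
      throws Exception {
    // Take the schema from the first input's footer; a real tool would
    // verify that every input's schema matches before appending.
    ParquetMetadata footer = ParquetFileReader.readFooter(conf, inputs.get(0));
    MessageType schema = footer.getFileMetaData().getSchema();

    ParquetFileWriter writer = new ParquetFileWriter(conf, schema, output);
    writer.start();
    for (Path input : inputs) {
      // Copies the encoded blocks byte-for-byte and rewrites their
      // offsets in the merged footer.
      writer.appendFile(conf, input);
    }
    writer.end(Collections.<String, String>emptyMap());
  }
}
```

Because the row groups are copied verbatim, the merged file keeps each input's dictionary pages, which matches the trade-off noted in the issue description.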

> Parquet files concat tool
> -------------------------
>
>                 Key: PARQUET-460
>                 URL: https://issues.apache.org/jira/browse/PARQUET-460
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>    Affects Versions: 1.7.0, 1.8.0
>            Reporter: flykobe cheng
>
> Currently, Parquet file generation is time consuming; most of the time is 
> spent on serialization and compression. In our scenario it takes about 10 
> minutes to generate a ~100MB Parquet file. We want to improve write 
> performance without generating too many small files, which would hurt read 
> performance.
> We propose to:
> 1. generate several small Parquet files concurrently
> 2. merge the small files into one file: concatenate the Parquet blocks in 
> binary (without SerDe), merge the footers, and adjust the path and offset 
> metadata.
> We created a ParquetFilesConcat class for step 2. It can be invoked via 
> parquet.tools.command.ConcatCommand. If this feature is approved by the 
> Parquet community, we will integrate it into Spark.
> This will impact compression and introduce more dictionary pages, but that 
> can be mitigated by adjusting the concurrency of step 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)