Hello,

3 MB files are too small for Parquet; try to increase the file size.
Also keep an eye on the statistics. In our case, we have not seen statistics
being generated for string data types, so queries on those columns fall back
to a full scan.
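
As a rough illustration, here is a minimal sketch using the Spark 2.1 DataFrame
API (the HDFS paths and app name are made-up examples) that rewrites the small
files into fewer, larger Parquet files so each output file can hold a full
128 MB row group:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("CompactParquet").getOrCreate()

    // Row group size used by parquet-mr when writing, in bytes (128 MB is the default).
    spark.sparkContext.hadoopConfiguration
      .setInt("parquet.block.size", 128 * 1024 * 1024)

    // Read the existing small files and rewrite them as a single larger file.
    val df = spark.read.parquet("hdfs:///data/events_small")
    df.coalesce(1)
      .write
      .mode("overwrite")
      .parquet("hdfs:///data/events_compacted")

After rewriting, you can check whether min/max statistics were written for each
column with parquet-tools, e.g. "parquet-tools meta <file>".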


Regards
Shiv


> On Feb 12, 2018, at 9:24 PM, ilegend <511618...@qq.com> wrote:
> 
> Hi guys,
> We're testing Parquet performance for our big data environment. Parquet is 
> better than ORC, but we believe that Parquet has more potential. Any 
> comments and suggestions are welcome. The test environment is as follows:
> 1. Server: 48 cores + 256 GB memory.
> 2. Spark 2.1.0 + HDFS 2.6.0 + parquet-mr 1.8.1 
> + parquet-format 2.3.0-incubating.
> 3. The size of each HDFS file is 3 MB.
> 4. parquet-mr uses the default values: row group size 128 MB, data page size 1 MB.
> 
> 
> Sent from my iPhone
> 
> 
