liuxian created SPARK-25356:
-------------------------------

             Summary: Add Parquet block size (row group size) option to
Spark SQL configuration
                 Key: SPARK-25356
                 URL: https://issues.apache.org/jira/browse/SPARK-25356
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.4.0
            Reporter: liuxian


I think we should be able to configure the Parquet block size (row group size) when writing in Parquet format.

Because `dfs.block.size` is configurable for HDFS, we sometimes want the Parquet block
size to be consistent with it: Parquet reads a row group as a unit, so keeping each row
group within a single HDFS block avoids reads that straddle block boundaries.
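For reference, the row group size can already be influenced today through Parquet's own
`parquet.block.size` Hadoop property, either globally via the `spark.hadoop.*` passthrough
or per write, since file source options are merged into the Hadoop configuration used for
that write. A minimal sketch (the 128 MB value and the output path are only examples):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ParquetBlockSizeExample")
  .master("local[*]")  // example master; drop when submitting to a cluster
  // spark.hadoop.* entries are copied into the Hadoop configuration;
  // parquet.block.size is Parquet's row group size in bytes (128 MB here,
  // chosen to match a typical dfs.block.size).
  .config("spark.hadoop.parquet.block.size", (128 * 1024 * 1024).toString)
  .getOrCreate()

// The same key can also be passed per write.
spark.range(1000000).toDF("id")
  .write
  .option("parquet.block.size", (128 * 1024 * 1024).toString)
  .parquet("/tmp/parquet-block-size-demo")
```

A first-class Spark SQL option would make this less obscure than relying on the
Hadoop-property passthrough.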

And shouldn't `spark.sql.files.maxPartitionBytes` also be kept consistent with the
Parquet block size when reading in Parquet format? A sketch of that alignment follows.
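```scala
// Keep the read-side partition size aligned with the row group / HDFS block
// size (128 MB here, matching the write-side example above).
spark.conf.set("spark.sql.files.maxPartitionBytes", (128 * 1024 * 1024).toString)
```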



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
