GitHub user 10110346 opened a pull request:

    https://github.com/apache/spark/pull/22350

    [SPARK-25356][SQL] Add Parquet block size option to SparkSQL configuration

    ## What changes were proposed in this pull request?
    
    
    I think we should make the Parquet block size configurable when writing in 
Parquet format.
    For HDFS, `dfs.block.size` is configurable, and we sometimes want the Parquet 
block size to be consistent with it.
    Similarly, `spark.sql.files.maxPartitionBytes` works best when it matches the 
Parquet block size for Parquet sources.
    A configurable block size also lets us shrink it in some tests.
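    As a minimal sketch of the use case (the SQL config key 
`spark.sql.parquet.blockSize` below is an assumption for illustration, not 
necessarily the key added by this patch), the Parquet block size can already be 
passed as a per-write option today; the proposed config would make it a 
session-level default:

    ```scala
    import org.apache.spark.sql.SparkSession

    object ParquetBlockSizeDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("parquet-block-size-demo")
          .master("local[*]")
          .getOrCreate()

        // Existing approach: pass the Parquet writer property per write.
        // 128 MB here, matching a typical dfs.block.size.
        spark.range(0L, 1000000L).write
          .option("parquet.block.size", (128 * 1024 * 1024).toString)
          .parquet("/tmp/parquet-block-size-demo")

        // Hypothetical session-level config as proposed (key name is an assumption):
        // spark.conf.set("spark.sql.parquet.blockSize", 128 * 1024 * 1024)

        spark.stop()
      }
    }
    ```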
    
    ## How was this patch tested?
    N/A


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/10110346/spark addblocksize

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22350.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22350
    
----
commit 3485b523d54e83ed3388febd06b3ac4914d181ed
Author: liuxian <liu.xian3@...>
Date:   2018-09-06T10:35:43Z

    fix

----


---
