Github user watermen commented on the pull request:

    https://github.com/apache/spark/pull/8125#issuecomment-131665109
  
    @liancheng Sorry for the late reply. A few points need to be explained:
    1. Small files are not the common case, so the default of 
`spark.sql.small.file.combine` can be false (see the usage sketch below).
    2. Small files are arguably a design issue in the upstream application, 
but it would still be nice for users if Spark provided a way to mitigate 
them. Like the SQL optimizer, Spark already does a lot of optimizing of 
users' bad SQL.
    3. This applies not only to Parquet/ORC but also to text formats 
(CSV/JSON), so it is useful in some cases.
    Apart from this PR, is Spark considering any internal solutions to 
improve the small-files case?
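
    For reference, here is a minimal sketch (mine, not code from this PR) 
of how the proposed flag could be toggled per session, assuming it lands 
as an ordinary string-valued SQLConf boolean named 
`spark.sql.small.file.combine` with a default of `false`:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SmallFileCombineSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SmallFileCombineSketch"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical flag from this PR; the default would stay false,
    // since small files are not the common case.
    sqlContext.setConf("spark.sql.small.file.combine", "true")

    // With the flag on, scanning a directory containing many small files
    // (Parquet/ORC or text formats such as CSV/JSON) would combine them
    // into fewer input splits before reading.
    val df = sqlContext.read.parquet("/path/to/small/files")
    println(df.count())

    sc.stop()
  }
}
```

    Keeping it opt-in means users who know their input layout suffers from 
many small files can enable it per job, while everyone else keeps the 
current behavior.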

