Github user watermen commented on the pull request:

    https://github.com/apache/spark/pull/8125#issuecomment-130281539
  
    @liancheng Yes, reading a large number of small files is not recommended, 
but it happens in many production environments. For example, data produced in 
real time may be inserted into table T every 5 minutes. Each 5-minute batch 
becomes its own file, and if each batch is small, table T ends up with many 
small files.
    I tested TPC-DS at the 100 GB scale with the ORC format and found that 
table `store_sales` has 260000+ files across 2000+ directories; performance 
improved by more than 15% with `spark.sql.small.file.combine=true`. How can 
users call `coalesce(n)` if they only want to use the spark-sql shell 
(interactive query)?
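    For comparison, here is a minimal sketch of the programmatic workaround, 
assuming Spark 1.x in spark-shell (where `sc` is the predefined SparkContext) 
and a hypothetical target table name; this is exactly the path a pure 
spark-sql shell user cannot reach:

    ```scala
    // Sketch of the coalesce-before-write workaround from spark-shell.
    // A pure spark-sql shell (SQL-only) user has no equivalent API,
    // which is the point of the question above.
    import org.apache.spark.sql.hive.HiveContext

    val sqlContext = new HiveContext(sc)
    val df = sqlContext.sql("SELECT * FROM store_sales")

    // Merge the many small input partitions into 16 output files
    // (16 is an arbitrary illustrative value).
    df.coalesce(16)
      .write
      .format("orc")
      .saveAsTable("store_sales_compacted") // hypothetical compacted copy
    ```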

