[ https://issues.apache.org/jira/browse/SPARK-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352735#comment-14352735 ]

Sean Owen commented on SPARK-6221:
----------------------------------

Interesting idea, though you could make this argument for any output from 
Spark. The problem is knowing how far to merge; Spark can't in general know 
that. The caller does, but the caller can already repartition to address this 
situation. What would this buy over just repartitioning before persisting?
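
A minimal sketch of that repartition-before-persist workaround, assuming a 
Spark 1.3 {{DataFrame}} API and the shell's predefined {{sqlContext}}; the table 
name, output path, and partition count below are only illustrative:

{code:scala}
// Run the aggregation as usual; with a group by it shuffles into ~200 partitions.
val df = sqlContext.sql(
  "SELECT key, count(*) AS cnt FROM minute_events GROUP BY key")

// Repartition down before persisting so the write produces a handful of files
// instead of one file per shuffle partition.
df.repartition(8).saveAsParquetFile("/warehouse/daily_table/dt=2015-03-09")
{code}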

> SparkSQL should support auto merging output files
> -------------------------------------------------
>
>                 Key: SPARK-6221
>                 URL: https://issues.apache.org/jira/browse/SPARK-6221
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>            Reporter: Yi Tian
>
> Hive has a feature that automatically merges small files in an HQL job's 
> output path.
> This feature is quite useful when people use {{insert into}} to load 
> per-minute data from an input path into a daily table.
> In that case, if the SQL includes a {{group by}} or {{join}} operation, we 
> always set the {{reduce number}} to at least 200 to avoid possible OOMs on 
> the reduce side.
> Each execution therefore writes at least 200 output files, so after a day of 
> frequent inserts the daily table ends up containing more than 50000 files.
> If SparkSQL provided the same feature, it would greatly reduce the HDFS 
> operations and Spark tasks needed when we run other SQL queries on this table.


