[ https://issues.apache.org/jira/browse/SPARK-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai updated SPARK-10297:
-----------------------------
    Issue Type: Sub-task  (was: Bug)
        Parent: SPARK-9932

> When saving data to a data source table, we should bound the size of a 
> saved file
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-10297
>                 URL: https://issues.apache.org/jira/browse/SPARK-10297
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Yin Huai
>            Priority: Critical
>
> When we save data to a data source table, a single writer may be 
> responsible for writing out a large number of rows, which can make the 
> generated file very large and cause the job to fail if the underlying 
> storage system enforces a maximum file size (e.g. S3's limit is 5GB). We 
> should bound the size of a file generated by a writer and create new 
> writers for the same partition if necessary. 
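
Not part of the ticket itself: the following is a minimal sketch of the
rollover behavior described above, assuming a hypothetical
SizeBoundedWriter (none of these names are Spark APIs). It tracks bytes
written and opens a fresh output file once the configured bound would be
exceeded, so no single file grows past the storage system's limit.

{code:scala}
import java.io.{BufferedOutputStream, File, FileOutputStream, OutputStream}

// Hypothetical size-bounded writer: rolls over to a new file once the
// current one would exceed maxBytes.
class SizeBoundedWriter(dir: File, prefix: String, maxBytes: Long) {
  private var fileIndex = 0
  private var bytesWritten = 0L
  private var out: OutputStream = newFile()

  private def newFile(): OutputStream = {
    val f = new File(dir, s"$prefix-part-$fileIndex")
    fileIndex += 1
    bytesWritten = 0L
    new BufferedOutputStream(new FileOutputStream(f))
  }

  def write(record: Array[Byte]): Unit = {
    // Roll to a fresh file before this record would push us past the bound
    // (a record larger than maxBytes still goes into its own file).
    if (bytesWritten > 0 && bytesWritten + record.length > maxBytes) {
      out.close()
      out = newFile()
    }
    out.write(record)
    bytesWritten += record.length
  }

  def close(): Unit = out.close()
}

object SizeBoundedWriterExample extends App {
  val dir = new File(System.getProperty("java.io.tmpdir"))
  // Tiny bound so the rollover is easy to observe; a real job would use a
  // value below the storage system's limit (e.g. under S3's 5GB).
  val writer = new SizeBoundedWriter(dir, "example", maxBytes = 64L)
  (1 to 20).foreach(i => writer.write(s"row $i\n".getBytes("UTF-8")))
  writer.close()
}
{code}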


