[ https://issues.apache.org/jira/browse/SPARK-26913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-26913:
------------------------------------

    Assignee:     (was: Apache Spark)

> New data source V2 API: SupportsDirectWrite
> -------------------------------------------
>
>                 Key: SPARK-26913
>                 URL: https://issues.apache.org/jira/browse/SPARK-26913
>             Project: Spark
>          Issue Type: Task
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Gengliang Wang
>            Priority: Major
>
> Spark supports writing to file data sources without fetching or
> validating the existing table schema.
> For example,
> ```
> // Write a table with a single bigint column.
> spark.range(10).write.orc(path)
> // Build a DataFrame with a completely different schema and overwrite.
> val newDF = spark.range(20).map(id => (id.toDouble, id.toString))
>   .toDF("double", "string")
> newDF.write.mode("overwrite").orc(path)
> ```
> 1. There is no need to fetch or infer the schema from the table/path.
> 2. The schema of `newDF` can differ from the original table schema (see
> the read-back check below).
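> A quick way to confirm the overwrite kept `newDF`'s schema rather than
> the original one (illustrative output; file sources report all columns
> as nullable on read):
> ```
> spark.read.orc(path).printSchema()
> // root
> //  |-- double: double (nullable = true)
> //  |-- string: string (nullable = true)
> ```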
> However, as https://github.com/apache/spark/pull/23606/files#r255319992
> shows, this capability is missing from data source V2: the V2 write path
> always validates the output query against the table schema. Even after
> catalog support for DS V2 is implemented, I think it will be hard to
> support both behaviors with the current API/framework.
> This ticket proposes a new mix-in interface, `SupportsDirectWrite`. With
> the interface, Spark writes to the table location directly on
> `DataFrameWriter.save`, skipping schema inference and validation.
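> A minimal sketch of what the mix-in and the write-path branch could look
> like (hypothetical shapes and names, not the actual Spark API):
> ```
> import org.apache.spark.sql.DataFrame
> import org.apache.spark.sql.types.StructType
>
> // Hypothetical stand-in for the V2 Table abstraction.
> trait Table { def schema: StructType }
>
> // Proposed marker mix-in: tables implementing it opt out of schema
> // inference/validation on DataFrameWriter.save.
> trait SupportsDirectWrite extends Table
>
> // Sketch of the branch the save path could take:
> def save(table: Table, df: DataFrame): Unit = table match {
>   case _: SupportsDirectWrite =>
>     ??? // write df as-is to the table location, keeping df's schema
>   case _ =>
>     ??? // current V2 behavior: validate df.schema against table.schema
> }
> ```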


