RussellSpitzer commented on a change in pull request #25348: 
[RFC][SPARK-28554][SQL] Adds a v1 fallback writer implementation for v2 data 
source codepaths
URL: https://github.com/apache/spark/pull/25348#discussion_r311235037
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala
 ##########
 @@ -501,3 +528,19 @@ private[v2] case class DataWritingSparkTaskResult(
  * Sink progress information collected after commit.
  */
 private[sql] case class StreamWriterCommitProgress(numOutputRows: Long)
+
+/**
+ * A trait that allows Tables that use V1 Writer interfaces to write data.
+ */
+sealed trait SupportsV1Write extends V2TableWriteExec {
+  def plan: LogicalPlan
+
+  protected def writeWithV1(
+      relation: CreatableRelationProvider,
+      mode: SaveMode,
+      options: CaseInsensitiveStringMap): RDD[InternalRow] = {
+    relation.createRelation(
+      sqlContext, mode, options.asScala.toMap, Dataset.ofRows(sqlContext.sparkSession, plan))
+    sparkContext.emptyRDD
 
 Review comment:
   Ok, so I see this is used in the atomic table writes, and that seems a bit wrong to me. Shouldn't we just not support atomic table writes with the V1 fallback? It seems like we are violating the contract by returning an empty RDD regardless of what actually happened in the V1 write.
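
   To illustrate the concern, here is a minimal sketch of one way the fallback could refuse atomic writes instead of silently returning an empty RDD. The `isAtomicWrite` flag and the error message are hypothetical and not part of this PR; the `createRelation` call and the trait shape are taken from the diff above (assumed to live alongside `V2TableWriteExec` in the same package).

   ```scala
   import scala.collection.JavaConverters._

   import org.apache.spark.rdd.RDD
   import org.apache.spark.sql.{Dataset, SaveMode}
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
   import org.apache.spark.sql.sources.CreatableRelationProvider
   import org.apache.spark.sql.util.CaseInsensitiveStringMap

   // Hypothetical variant of the trait in the diff: fail fast when invoked from an
   // atomic write path rather than returning an empty RDD that carries no commit state.
   sealed trait SupportsV1Write extends V2TableWriteExec {
     def plan: LogicalPlan

     // Would be overridden to `true` by atomic exec nodes (hypothetical flag).
     protected def isAtomicWrite: Boolean = false

     protected def writeWithV1(
         relation: CreatableRelationProvider,
         mode: SaveMode,
         options: CaseInsensitiveStringMap): RDD[InternalRow] = {
       if (isAtomicWrite) {
         // The V1 createRelation call cannot participate in staged commit/abort,
         // so refusing here avoids silently bypassing the atomic contract.
         throw new UnsupportedOperationException(
           "V1 fallback writes cannot be used for atomic table operations")
       }
       relation.createRelation(
         sqlContext, mode, options.asScala.toMap,
         Dataset.ofRows(sqlContext.sparkSession, plan))
       // Empty result only means the V1 API does not report written rows.
       sparkContext.emptyRDD
     }
   }
   ```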
