RussellSpitzer commented on a change in pull request #25348: 
[RFC][SPARK-28554][SQL] Adds a v1 fallback writer implementation for v2 data source codepaths
URL: https://github.com/apache/spark/pull/25348#discussion_r311351159
 
 

 ##########
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala
 ##########
 @@ -501,3 +528,19 @@ private[v2] case class DataWritingSparkTaskResult(
  * Sink progress information collected after commit.
  */
 private[sql] case class StreamWriterCommitProgress(numOutputRows: Long)
+
+/**
+ * A trait that allows Tables that use V1 Writer interfaces to write data.
+ */
+sealed trait SupportsV1Write extends V2TableWriteExec {
+  def plan: LogicalPlan
+
+  protected def writeWithV1(
+      relation: CreatableRelationProvider,
+      mode: SaveMode,
+      options: CaseInsensitiveStringMap): RDD[InternalRow] = {
+    relation.createRelation(
+      sqlContext, mode, options.asScala.toMap, Dataset.ofRows(sqlContext.sparkSession, plan))
+    sparkContext.emptyRDD
 
 Review comment:
   ```
    val writtenRows = writer match {
      case v1: V1WriteBuilder =>
        writeWithV1(v1.buildForV1Write(), writeOptions)
      case v2 =>
        doWrite(v2.buildForBatch())
    }
   ```
   If this is always empty, why do we save it as writtenRows here? Is this just to hold a reference to the empty result set?
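
   To illustrate the shape of the question: a minimal stand-in sketch of the dispatch, not the actual Spark classes. All type names below (`V1Fallback`, `V2Batch`, `WriteBuilderSketch`) are hypothetical simplifications; the point is that the V1 branch writes as a side effect and can only hand back an empty placeholder, while the V2 branch produces the written rows.

   ```scala
   // Hypothetical stand-ins for the real WriteBuilder hierarchy.
   sealed trait WriteBuilderSketch
   final case class V1Fallback(mode: String) extends WriteBuilderSketch
   final case class V2Batch(rows: Seq[String]) extends WriteBuilderSketch

   object WriteDispatchSketch {
     // Models writeWithV1: the V1 createRelation API writes as a side
     // effect and returns nothing useful, so the caller returns an
     // empty collection purely as a placeholder.
     def writeWithV1(b: V1Fallback): Seq[String] = Seq.empty

     // Models doWrite: the native V2 batch path yields the written rows.
     def doWrite(b: V2Batch): Seq[String] = b.rows

     // The pattern match under discussion: both branches must produce
     // the same result type, which is why the V1 branch's empty result
     // still gets bound to a name like writtenRows.
     def write(writer: WriteBuilderSketch): Seq[String] = writer match {
       case v1: V1Fallback => writeWithV1(v1)
       case v2: V2Batch    => doWrite(v2)
     }
   }
   ```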

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
