Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20020#discussion_r158025766
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/DataWritingCommand.scala ---
    @@ -20,30 +20,18 @@ package org.apache.spark.sql.execution.command
     import org.apache.hadoop.conf.Configuration
     
     import org.apache.spark.SparkContext
    -import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    +import org.apache.spark.sql.{Row, SparkSession}
    +import org.apache.spark.sql.catalyst.plans.logical.{Command, LogicalPlan}
    +import org.apache.spark.sql.execution.SparkPlan
     import org.apache.spark.sql.execution.datasources.BasicWriteJobStatsTracker
     import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}
     import org.apache.spark.util.SerializableConfiguration
     
    -
     /**
      * A special `RunnableCommand` which writes data out and updates metrics.
      */
    -trait DataWritingCommand extends RunnableCommand {
    -
    -  /**
    -   * The input query plan that produces the data to be written.
    -   */
    -  def query: LogicalPlan
    -
    -  // We make the input `query` an inner child instead of a child in order to hide it from the
    -  // optimizer. This is because optimizer may not preserve the output schema names' case, and we
    -  // have to keep the original analyzed plan here so that we can pass the corrected schema to the
    -  // writer. The schema of analyzed plan is what user expects(or specifies), so we should respect
    -  // it when writing.
    -  override protected def innerChildren: Seq[LogicalPlan] = query :: Nil
    --- End diff --
    
    Now shall we define `query` as a child here?
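    A minimal, self-contained sketch of what that could look like (these are illustrative stand-in types, not the actual Spark classes): `query` becomes a regular child visible to the optimizer, while the analyzed column names are captured separately so case-sensitive names still reach the writer even if the optimizer rewrites the plan. The `outputColumnNames` field is a hypothetical name used here for illustration.

    ```scala
    // Stand-in for Spark's LogicalPlan, just enough to show the shape.
    trait LogicalPlan {
      def children: Seq[LogicalPlan]
      def output: Seq[String]
    }

    // A leaf plan producing some named columns.
    case class Relation(output: Seq[String]) extends LogicalPlan {
      override def children: Seq[LogicalPlan] = Nil
    }

    // `query` is now a real child rather than an inner child, so the
    // optimizer can see and rewrite it; the original analyzed column
    // names are snapshotted in `outputColumnNames` (hypothetical field)
    // so the writer keeps the user-specified casing.
    case class WriteCommand(query: LogicalPlan, outputColumnNames: Seq[String])
        extends LogicalPlan {
      override def children: Seq[LogicalPlan] = query :: Nil
      override def output: Seq[String] = Nil // a write command returns no rows
    }
    ```

    The trade-off sketched above: exposing `query` as a child lets the optimizer work on it, at the cost of having to preserve the analyzed schema names somewhere else, here in a dedicated field.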

