Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21305#discussion_r200421789
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala ---
    @@ -203,33 +203,33 @@ class DataSourceV2Suite extends QueryTest with SharedSQLContext {
             val path = file.getCanonicalPath
             assert(spark.read.format(cls.getName).option("path", path).load().collect().isEmpty)
     
    -        spark.range(10).select('id, -'id).write.format(cls.getName)
    +        spark.range(10).select('id as 'i, -'id as 'j).write.format(cls.getName)
    --- End diff ---
    
    Yes. The new resolution rule validates the dataframe that will be written to the table.
    
    Because this uses the `DataFrameWriter` API, columns are matched by name: there isn't a strong expectation of ordering in the dataframe API (e.g. `withColumn` doesn't specify where the new column is added).
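    
    For illustration, here is a minimal sketch of what by-name resolution allows, reusing `spark`, `cls`, and `path` from the test above. The reversed column order and the trailing `option`/`save` calls here are illustrative, not the test's exact code:
    
    ```scala
    // Sketch only: assumes the test's implicits are in scope for the 'id
    // syntax, and that the target's schema is (i, j) as written above.
    // With by-name resolution, the reversed select order below still
    // resolves correctly, because the names i and j match the schema.
    spark.range(10)
      .select(-'id as 'j, 'id as 'i)
      .write.format(cls.getName).option("path", path).save()
    ```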

