German Schiavon Matteo created SPARK-30828:
----------------------------------------------

             Summary: Improve insertInto behaviour
                 Key: SPARK-30828
                 URL: https://issues.apache.org/jira/browse/SPARK-30828
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core, SQL
    Affects Versions: 3.0.0
            Reporter: German Schiavon Matteo


Currently, when you call _*insertInto*_ to add a DataFrame to an existing 
table, the only safety check is that the number of columns matches. Column 
order is not validated: columns are matched by position, not by name, so a 
reordered DataFrame is silently written into the wrong columns. In addition, 
the message shown when the number of columns doesn't match is not very 
helpful, especially when the table has a lot of columns:

```
org.apache.spark.sql.AnalysisException: `default`.`table` requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).;
```
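
A minimal sketch reproducing both behaviours, assuming a local SparkSession 
and a hypothetical table `default.t` with columns (id INT, p1 INT):

```
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

spark.sql("CREATE TABLE default.t (id INT, p1 INT) USING parquet")

// Columns deliberately reversed: insertInto matches by position, not by
// name, so the p1 values land in id and vice versa -- no error is raised.
Seq((100, 1), (200, 2)).toDF("p1", "id")
  .write.insertInto("default.t")

// Only the column count is checked; this fails with the message above.
Seq(1, 2).toDF("id").write.insertInto("default.t")
```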

I think a standard column-name check would be very helpful, just as in most 
other places in Spark:

```
cannot resolve 'p2' given input columns: [id, p1];
```
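
A hypothetical sketch of such a check (the helper name and message format are 
illustrative, not existing Spark API): before the positional insert, compare 
the DataFrame's column names against the target table's schema and fail with 
a resolve-style message:

```
import org.apache.spark.sql.{DataFrame, SparkSession}

// Hypothetical helper, not part of Spark's API.
def checkInsertColumns(spark: SparkSession, df: DataFrame, table: String): Unit = {
  val expected = spark.table(table).columns.toSeq
  val unknown  = df.columns.toSeq.diff(expected)
  if (unknown.nonEmpty) {
    throw new IllegalArgumentException(
      s"cannot resolve ${unknown.map(c => s"'$c'").mkString(", ")} " +
      s"given input columns: [${expected.mkString(", ")}];")
  }
}
```

Once the names are validated, the writer could also reorder the DataFrame to 
the table's schema with a select before the positional insert, so a reordered 
DataFrame would no longer silently corrupt the table.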

