[ https://issues.apache.org/jira/browse/SPARK-23348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356176#comment-16356176 ]

Sameer Agarwal commented on SPARK-23348:
----------------------------------------

yes, +1

> append data using saveAsTable should adjust the data types
> ----------------------------------------------------------
>
>                 Key: SPARK-23348
>                 URL: https://issues.apache.org/jira/browse/SPARK-23348
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.1.2, 2.2.1, 2.3.0
>            Reporter: Wenchen Fan
>            Priority: Major
>
>  
> {code:java}
> // Create table t with schema (i int, j string).
> Seq(1 -> "a").toDF("i", "j").write.saveAsTable("t")
> // Append a row whose column types are swapped: (i string, j int).
> Seq("c" -> 3).toDF("i", "j").write.mode("append").saveAsTable("t")
> sql("select * from t").show
> {code}
>  
> The final query fails with a confusing low-level error:
> {code:java}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in 
> stage 10.0 failed 1 times, most recent failure: Lost task 1.0 in stage 10.0 
> (TID 15, localhost, executor driver): 
> java.lang.UnsupportedOperationException: Unimplemented type: IntegerType
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBinaryBatch(VectorizedColumnReader.java:473)
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:214)
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:261)
> ...
> {code}
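>  
> The failure is a schema mismatch: the metastore still describes t as (i int, j string), but the appended Parquet files carry the swapped schema (i string, j int), so the vectorized reader ends up trying to decode a string-encoded column as IntegerType. The two schemas can be compared directly (a hypothetical spark-shell session; the output is reconstructed from the types above):
> {code:java}
> scala> spark.table("t").schema.foreach(f => println(f.name + ": " + f.dataType))
> i: IntegerType
> j: StringType
> scala> Seq("c" -> 3).toDF("i", "j").schema.foreach(f => println(f.name + ": " + f.dataType))
> i: StringType
> j: IntegerType
> {code}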
>  
> All Spark 2.x releases fail the same way. Spark 1.6.3 does not fail; the mismatched value simply becomes null:
> {code}
> scala> sql("select * from t").show
> +----+---+
> |   i|  j|
> +----+---+
> |null|  3|
> |   1|  a|
> +----+---+
> {code}
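> 
> Until the append path adjusts the data types itself, one possible workaround is to cast the incoming DataFrame to the table's declared schema before appending; the string "c" then becomes null, matching the 1.6.3 output above. A minimal sketch, assuming a spark-shell session where table t already exists (the names target, df, and aligned are illustrative, not part of any Spark API):
> {code:java}
> import org.apache.spark.sql.functions.col
> 
> // Schema the metastore holds for the target table: (i int, j string).
> val target = spark.table("t").schema
> 
> // Cast each incoming column to its declared type, then append.
> val df = Seq("c" -> 3).toDF("i", "j")
> val aligned = df.select(target.map(f => col(f.name).cast(f.dataType)): _*)
> aligned.write.mode("append").saveAsTable("t")
> {code}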


