Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/20846
@liutang123, Spark should not do this kind of risky thing. Hive 2.3.2 also disallows incompatible column type changes, as in the following example.
```sql
hive> CREATE TABLE test_par(a string) PARTITIONED BY (b bigint) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
OK
Time taken: 0.262 seconds
hive> ALTER TABLE test_par CHANGE a a bigint RESTRICT;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
a
hive> SELECT VERSION();
OK
2.3.2 r857a9fd8ad725a53bd95c1b2d6612f9b1155f44d
Time taken: 0.711 seconds, Fetched: 1 row(s)
```
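For reference, the check Hive applies here is governed by the metastore setting `hive.metastore.disallow.incompatible.col.type.changes` (`true` by default in Hive 2.3, as the session above shows). A minimal sketch of the contrast, assuming Hive 2.3.2 with an embedded metastore (the table name `test_widen` is illustrative): a widening numeric change is implicitly convertible and passes the check, while the rejected `string` -> `bigint` change only goes through after the safety check is explicitly disabled.

```sql
-- Sketch, assuming Hive 2.3.2 with an embedded metastore; test_widen is illustrative.
-- A widening numeric change (int -> bigint) is implicitly convertible, so it is allowed:
hive> CREATE TABLE test_widen(a int) STORED AS ORC;
hive> ALTER TABLE test_widen CHANGE a a bigint RESTRICT;

-- The string -> bigint change above is only accepted after disabling the check.
-- A per-session SET takes effect only with an embedded metastore; a remote
-- metastore would need this set on the server side:
hive> SET hive.metastore.disallow.incompatible.col.type.changes=false;
hive> ALTER TABLE test_par CHANGE a a bigint RESTRICT;
```

Which is exactly the point: the change is only possible by opting out of a check that Hive itself considers unsafe.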
cc @gatorsmile.