MihawkZoro commented on issue #6344:
URL: https://github.com/apache/hudi/issues/6344#issuecomment-1210125594
@KnightChess That does work, but now there is a new problem.
The SQL is:
```
alter table hudi_mor_test alter column ts after uuid;
```
and the exception is:
```
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to
alter table. The following columns have types incompatible with the existing
columns in their respective positions :
col
org.apache.spark.sql.AnalysisException:
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. The
following columns have types incompatible with the existing columns in their
respective positions :
col
```
The Spark session config is:
```
key value
hoodie.schema.on.read.enable true
schema.on.read.enable true
spark.app.id local-1660101121767
spark.app.name SparkSQL::192.168.217.179
spark.app.startTime 1660101120803
spark.driver.host 192.168.217.179
spark.driver.port 50694
spark.executor.id driver
spark.hive.metastore.schema.verification false
spark.jars
spark.master local[*]
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.sql.catalog.spark_catalog org.apache.spark.sql.hudi.catalog.HoodieCatalog
spark.sql.catalogImplementation hive
spark.sql.datetime.java8API.enabled true
spark.sql.extensions org.apache.spark.sql.hudi.HoodieSparkSessionExtension
spark.sql.hive.metastore.version 2.3.9
spark.sql.hive.version 2.3.9
spark.sql.legacy.createHiveTableByDefault false
spark.sql.parquet.enableVectorizedReader true
spark.sql.parquet.filterPushdown true
spark.sql.parquet.recordLevelFilter.enabled true
spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
spark.submit.deployMode client
spark.submit.pyFiles
```
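For context, a minimal reproduction sketch in spark-sql. The table definition below is hypothetical (only the `uuid` and `ts` columns are taken from the failing statement above; the other column, key settings, and table properties are illustrative assumptions), assuming the Hudi extensions and catalog are configured as in the config dump:

```sql
-- Enable Hudi schema-on-read evolution for the session,
-- matching hoodie.schema.on.read.enable=true above.
set hoodie.schema.on.read.enable=true;

-- Hypothetical MOR table; only uuid and ts appear in the report.
create table hudi_mor_test (
  uuid string,
  name string,
  ts bigint
) using hudi
tblproperties (type = 'mor', primaryKey = 'uuid', preCombineField = 'ts');

-- Reordering the column is what triggers the HiveException: the Hive
-- metastore's position-based type check rejects the new column order.
alter table hudi_mor_test alter column ts after uuid;
```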
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]