lichaohao commented on issue #4659:
URL: https://github.com/apache/iceberg/issues/4659#issuecomment-1112003247
Flink (using hive_catalog) writes data into an Iceberg table, but the Spark API (Scala) cannot read the Iceberg data.
flink: 1.13
iceberg:0.13.2
spark:3.2
hive:3.1.2
For example:
spark.read.format("iceberg").load("/warehouse/tablespace/external/hive/iceberg.db/my_iceberg_test").show()
The Spark API cannot read Iceberg data that was written through hive_catalog.
Exception message:
Caused by: File does not exist:
/warehouse/tablespace/external/hive/iceberg.db/my_iceberg_test/metadata/version-hint.text
I checked the HDFS path: version-hint.text does not exist when data is written through hive_catalog, but it does exist when using hadoop_catalog. I think the path-based Spark read expects to find the latest schema and data file list through version-hint.text.
Maybe the Spark API should support this situation: when data is written into an Iceberg table through hive_catalog, the Spark API could get the latest schema and data file list from the Hive TBLPROPERTIES.
Can this bug be fixed in the next version (iceberg-spark-runtime)?
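A workaround that avoids version-hint.text entirely is to read the table through the Hive catalog by name instead of by HDFS path, so Spark resolves the current metadata from the Hive metastore. Below is a minimal sketch, assuming Spark 3.2 with iceberg-spark-runtime on the classpath; the catalog name `hive_cat` and the metastore URI `thrift://metastore-host:9083` are placeholders for your own values.

```scala
import org.apache.spark.sql.SparkSession

// Register an Iceberg catalog backed by the Hive metastore.
// "hive_cat" and the thrift URI below are placeholders, not values from this issue.
val spark = SparkSession.builder()
  .appName("read-iceberg-via-hive-catalog")
  .config("spark.sql.catalog.hive_cat", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.hive_cat.type", "hive")
  .config("spark.sql.catalog.hive_cat.uri", "thrift://metastore-host:9083")
  .getOrCreate()

// Load by catalog.database.table instead of by HDFS path; the metastore,
// not version-hint.text, supplies the current metadata location.
spark.table("hive_cat.iceberg.my_iceberg_test").show()

// The DataFrameReader form also accepts a table identifier:
spark.read.format("iceberg").load("hive_cat.iceberg.my_iceberg_test").show()
```

Path-based `load("/warehouse/...")` goes through HadoopTables, which only works for hadoop_catalog tables that maintain version-hint.text.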
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]