Initial-neko commented on issue #4192:
URL: https://github.com/apache/iceberg/issues/4192#issuecomment-1048409274


   > The issue is probably that the table object you are using is not from the configured Spark catalog. Try loading the table instance with Spark3Util.loadIcebergTable.
   > 
   > The error occurs because Spark needs the catalog name to match the one in the Spark conf; otherwise it does not know where to find the table. When you instantiate a new table using the Java API, it will have the default catalog name, which is most likely not the same.
   > 
   > A metadata table is just a special view of a table that returns information such as which partitions or which files are in the table. All Iceberg tables have them available; see the docs for more info.
   
   Our tables are loaded through HiveCatalog. We do not hit this problem with the expireSnapshots and rewrite operations. I will confirm it; thanks for your reply~
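
   For anyone comparing the two loading paths the quoted reply contrasts, here is a minimal, untested sketch (not taken from the issue). The catalog name hive_catalog, the identifier db.tbl, the metastore URI, the retention window, and the class name are all placeholders; it assumes a Spark session whose conf defines spark.sql.catalog.hive_catalog as an Iceberg catalog backed by the same Hive Metastore.

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hive.HiveCatalog;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.iceberg.spark.actions.SparkActions;
import org.apache.spark.sql.SparkSession;

public class LoadPathsSketch {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder().getOrCreate();

    // Path 1: resolve the table through the Spark catalog configured in the
    // Spark conf. table.name() then carries that catalog name
    // ("hive_catalog.db.tbl"), so Spark can also find its metadata tables.
    Table viaSpark = Spark3Util.loadIcebergTable(spark, "hive_catalog.db.tbl");
    System.out.println(viaSpark.name());

    // Path 2: instantiate a HiveCatalog directly with the Java API. The table
    // itself works, but its name uses the catalog name passed to initialize(),
    // which Spark does not necessarily know about.
    HiveCatalog hive = new HiveCatalog();
    hive.setConf(new Configuration());
    hive.initialize("hive", Map.of("uri", "thrift://metastore:9083")); // placeholder URI
    Table viaJavaApi = hive.loadTable(TableIdentifier.of("db", "tbl"));
    System.out.println(viaJavaApi.name());

    // Actions such as expireSnapshots run against either table object; the
    // catalog-name mismatch only shows up when Spark has to resolve the table
    // (or one of its metadata tables) by name.
    SparkActions.get(spark)
        .expireSnapshots(viaSpark)
        .expireOlderThan(System.currentTimeMillis() - 7 * 24 * 60 * 60 * 1000L)
        .execute();

    // Metadata tables (files, partitions, snapshots, ...) are also reachable
    // as SQL relations through the configured Spark catalog:
    spark.sql("SELECT * FROM hive_catalog.db.tbl.files").show();
  }
}
```

   With the first path, metadata-table queries and Spark actions resolve names through the configured catalog; with the second, the table object alone is fine, but anything that needs Spark to look the table up by name can fail with the catalog-name mismatch described above.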

