Thanks Wing Yew for filling in the missing part.

> The built-in version is also used for other things that Spark may use from
> Hive (aside from interaction with HMS), such as Hive SerDes.
AFAIK, this is blocking Spark itself from upgrading its built-in version to
Hive 4.

Thanks Peter for the recap. The only thing to clarify is that the Hive 3
runtime tests have never been running, though that's irrelevant now. There
were test failures[1] after upgrading the metastore module to Hive 4, so I
guess it doesn't work yet.

Moving forward, I agree we should make sure the metastore tests run against
all Hive versions in use. However, I'm not sure how to set up the modules and
dependencies given the changes in Hive 4 (thanks, Denys). I need more
experiments to explore various ideas.

1. https://github.com/apache/iceberg/actions/runs/12339936020/job/34436774628?pr=11750

Thanks,
Manu

On Tue, Jan 7, 2025 at 8:01 PM Denys Kuzmenko <dkuzme...@apache.org> wrote:

> Hi Peter,
>
> Re
> "Hive would provide a HMS client jar which only contains java code which
> is needed to connect and communicate using Thrift with a HMS instance (no
> internal HMS server code etc). We could use it as a dependency for our
> iceberg-hive-metastore module. Either setting a minimal version, or using a
> shaded embedded version."
>
> In Hive-4.x `HiveMetaStoreClient` is shipped within the
> `hive-standalone-metastore-common` jar, which contains the client code and
> security:
>
> https://mvnrepository.com/artifact/org.apache.hive/hive-standalone-metastore-common/4.0.1
>
> Regards,
> Denys
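
[Editor's note: the following is a minimal sketch, not part of the thread, of the
"client-only jar" idea Denys and Peter discuss above. It assumes Hive 4.x standalone
metastore classes (HiveMetaStoreClient, MetastoreConf) are on the classpath; the
Thrift URI, database, and table names are placeholders.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.Table;
    import org.apache.hadoop.hive.metastore.conf.MetastoreConf;

    public class HmsClientSketch {
      public static void main(String[] args) throws Exception {
        // Build a metastore-only configuration; no HMS server code is involved.
        Configuration conf = MetastoreConf.newMetastoreConf();
        // Point the client at a remote HMS instance over Thrift (placeholder URI).
        MetastoreConf.setVar(conf, MetastoreConf.ConfVars.THRIFT_URIS,
            "thrift://hms-host:9083");

        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
          // Simple round-trips to verify a client-only dependency is sufficient.
          for (String db : client.getAllDatabases()) {
            System.out.println("database: " + db);
          }
          Table table = client.getTable("default", "some_table");
          System.out.println("table location: " + table.getSd().getLocation());
        } finally {
          client.close();
        }
      }
    }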