ggershinsky commented on code in PR #14685:
URL: https://github.com/apache/iceberg/pull/14685#discussion_r2567379462
##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HMSTablePropertyHelper.java:
##########
@@ -256,6 +278,42 @@ static void setSchema(
}
}
+ @VisibleForTesting
+ static void setMetadataHash(
+ TableMetadata metadata, Map<String, String> parameters, long maxHiveTablePropertySize) {
+ if (exposeInHmsProperties(maxHiveTablePropertySize)
+ && parameters.containsKey(TableProperties.ENCRYPTION_TABLE_KEY)) {
+ String currentMetadataAsJson = TableMetadataParser.toJson(metadata);
Review Comment:
Implementation-related questions: how large can the metadata object become in
production deployments? Are there other Iceberg calls to this method (outside
unit tests)?
If in-memory serialization of the full metadata object could be a problem,
there may be stream-based methods for the hash calculation.
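For illustration, one way to avoid materializing the full JSON string is to hash the bytes as they are written, via a `DigestOutputStream` over a discarding sink. This is only a sketch: the `NullOutputStream` class and the chunked input below stand in for a hypothetical writer-based serializer overload (e.g. a `TableMetadataParser.toJson(metadata, writer)` variant, which may not exist in the current API):

```java
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.security.DigestOutputStream;
import java.security.MessageDigest;

public class StreamingHashSketch {

  // Discards bytes; the DigestOutputStream wrapping it still sees every byte,
  // so the hash is computed without buffering the whole serialized JSON.
  static final class NullOutputStream extends OutputStream {
    @Override
    public void write(int b) {}

    @Override
    public void write(byte[] b, int off, int len) {}
  }

  // Hashes character chunks as they stream through the writer. In real code the
  // chunks would come from a writer-based serializer, not a pre-built list.
  static String streamingSha256(Iterable<String> jsonChunks) throws Exception {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    try (Writer writer =
        new OutputStreamWriter(
            new DigestOutputStream(new NullOutputStream(), digest), StandardCharsets.UTF_8)) {
      for (String chunk : jsonChunks) {
        writer.write(chunk);
      }
    }

    // Hex-encode the final digest.
    StringBuilder hex = new StringBuilder();
    for (byte b : digest.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
```

The resulting hex digest is identical to hashing the fully materialized string, so peak memory stays bounded by the writer's buffer rather than the metadata size.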
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]