flyrain commented on code in PR #1068:
URL: https://github.com/apache/polaris/pull/1068#discussion_r1972467647
##########
polaris-core/src/main/java/org/apache/polaris/core/PolarisConfiguration.java:
##########
@@ -264,4 +264,11 @@ public static <T> Builder<T> builder() {
+ STORAGE_CREDENTIAL_DURATION_SECONDS.key)
.defaultValue(30 * 60) // 30 minutes
.build();
+
+  public static final PolarisConfiguration<Integer> STORAGE_CONFIGURATION_MAX_LOCATIONS =
+      PolarisConfiguration.<Integer>builder()
+          .key("STORAGE_CONFIGURATION_MAX_LOCATIONS")
+          .description("How many locations can be associated with a storage configuration")
+          .defaultValue(20)
Review Comment:
Even with storage configs pushing down to the table level, the chance of an
unbounded number of table locations is quite small.
1. For the majority of Iceberg use cases within Polaris, writers can only use
the three locations specified by the properties `location`, `write.data.path`,
and `write.metadata.path`.
2. For the migration use case, it is admittedly possible that there are more
than the three locations mentioned above. However, users should be aware of the
number of locations during migration and add them to the table-level storage
configs. At that point, we can enforce the limit by saying "that's too many
locations, credential vending won't work." The limit seems better placed at the
table level, since locations from different tables may not overlap.
In short, a limit at the catalog level doesn't seem necessary now, and may not
be effective in the future. I'd consider removing it, but I'm open to being
convinced by other use cases.
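To make the table-level alternative concrete, here's a minimal sketch of what such a check could look like. The class, the `validateLocationCount` helper, and the exception message are hypothetical illustrations, not existing Polaris API:

```java
import java.util.List;

public class LocationLimitSketch {

    // Hypothetical table-level guard: reject credential vending when a table
    // declares more storage locations than the configured limit.
    static void validateLocationCount(List<String> tableLocations, int maxLocations) {
        if (tableLocations.size() > maxLocations) {
            throw new IllegalArgumentException(
                "Table has " + tableLocations.size()
                    + " locations; credential vending supports at most " + maxLocations);
        }
    }

    public static void main(String[] args) {
        // A typical Iceberg table only uses the three property-driven locations:
        // location, write.data.path, write.metadata.path.
        List<String> locations = List.of(
            "s3://bucket/db/tbl",
            "s3://bucket/db/tbl/data",
            "s3://bucket/db/tbl/metadata");
        validateLocationCount(locations, 20); // passes under a default limit of 20
        System.out.println("ok");
    }
}
```

Because the check runs per table at vending time, an over-limit migrated table fails loudly while normal tables are unaffected, which is the behavior argued for above.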
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]