chia7712 commented on code in PR #20334:
URL: https://github.com/apache/kafka/pull/20334#discussion_r2538011530
##########
docs/upgrade.html:
##########
@@ -113,6 +113,35 @@ <h5><a id="upgrade_420_notable" href="#upgrade_420_notable">Notable changes in 4
<li>
The <code>num.replica.fetchers</code> config has a new lower bound of 1.
</li>
+        <li>
+            Improvements have been made to the validation rules and default values of LIST-type configurations
+            (<a href="https://cwiki.apache.org/confluence/x/HArXF">KIP-1161</a>).
+            <ul>
+                <li>
+                    LIST-type configurations now enforce stricter validation:
+                    <ul>
+                        <li>Null values are no longer accepted for most LIST-type configurations, except those that
+                            explicitly allow a null default value or where a null value has a well-defined semantic meaning.</li>
+                        <li>Duplicate entries within the same list are no longer permitted.</li>
Review Comment:
> If the original value is no longer valid, it's probably better to fail so that the user is aware of it.
Regarding this point, KIP-1030 introduced similar breaking changes in 4.0 for
configurations such as `segment.bytes` and `log.segment.bytes`. The error occurs
while the node is processing the dynamic configurations from the metadata
publisher, and it is currently only logged as a warning rather than treated as a
fatal error. The good news is that these breaking changes do not shut the broker
down; the bad news is that users may be completely unaware of the issue.
A possible solution is to introduce a new configuration that lets the broker
shut down when it cannot accept a dynamic configuration. We already have a fault
handler like this, but it currently only works for the controller. For example,
setting the new configuration `config.validation.fatal=log.segment.bytes` would
cause the broker to shut down if the dynamic `log.segment.bytes` value cannot be
applied.
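
To make the idea concrete, the proposal could look roughly like the sketch
below. This is only an illustration of the suggestion in this comment: the
`config.validation.fatal` property and the `FatalConfigPolicy` class are
hypothetical names, not existing Kafka APIs.

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed mechanism. Neither this class nor the
// `config.validation.fatal` property exists in Kafka today.
public class FatalConfigPolicy {
    private final Set<String> fatalConfigs;

    // `value` is the raw value of the proposed `config.validation.fatal`
    // property, e.g. "log.segment.bytes,segment.bytes".
    public FatalConfigPolicy(String value) {
        this.fatalConfigs = Arrays.stream(value.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toUnmodifiableSet());
    }

    // Called when a dynamic config from the metadata publisher fails
    // validation: returns true if the broker should invoke its fault handler
    // and shut down, instead of only logging a warning as it does today.
    public boolean shouldShutDown(String configName) {
        return fatalConfigs.contains(configName);
    }
}
```

With such a policy, the existing warning-only path would stay the default, and
operators could opt in to fail-fast behavior for the configurations they care
about.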
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]