Hi Hector,

Thanks for taking a look! I think the key difference between the proposed
behavior and the rejected alternative is that with the former, the tasks left
running still form a complete set, whereas with the latter only a subset of
the tasks the connector asked for would be running. Also noteworthy, though
slightly less important: the problem will be more visible to users with the
former (the connector will still be marked FAILED) than with the latter.
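To make that concrete, here's a rough sketch of the difference. This is
illustrative only: the class and method names below are made up and are not
the actual Connect runtime code or the KIP's implementation. The only real
pieces referenced are the tasks.max property and the
Connector::taskConfigs(int maxTasks) method.

```java
import java.util.List;
import java.util.Map;

// Rough illustration only; failConnector, stopTasks, and startTasks are
// hypothetical placeholders, not real Connect runtime APIs.
public class TasksMaxEnforcementSketch {

    // Invoked after a reconfigured connector has returned a new set of task
    // configs from Connector::taskConfigs(int maxTasks); a buggy connector may
    // ignore the maxTasks argument and return more configs than tasks.max.
    void onNewTaskConfigs(String connectorName,
                          int tasksMax,
                          List<Map<String, String>> newTaskConfigs,
                          List<Map<String, String>> runningTaskConfigs) {
        if (newTaskConfigs.size() > tasksMax) {
            // Proposed behavior: reject the oversized generation outright and
            // fail the connector so the problem is visible. The tasks already
            // running (a complete set from the last valid generation) keep
            // running, unless that existing set also exceeds tasks.max.
            failConnector(connectorName, "Connector generated "
                    + newTaskConfigs.size()
                    + " task configs, which exceeds tasks.max=" + tasksMax);
            if (runningTaskConfigs.size() > tasksMax) {
                stopTasks(connectorName);
            }
            // The rejected alternative would instead do something like
            //   startTasks(connectorName, newTaskConfigs.subList(0, tasksMax));
            // i.e. silently run an arbitrary subset of the tasks the connector
            // asked for, with no FAILED status to alert the user.
            return;
        }
        startTasks(connectorName, newTaskConfigs);
    }

    // Placeholders so the sketch compiles; the real runtime does not expose these.
    void failConnector(String name, String reason) { }
    void stopTasks(String name) { }
    void startTasks(String name, List<Map<String, String>> taskConfigs) { }
}
```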

Cheers,

Chris

On Tue, Nov 21, 2023, 00:53 Hector Geraldino (BLOOMBERG/ 919 3RD A) <
hgerald...@bloomberg.net> wrote:

> Thanks for the KIP, Chris. Adding this check makes total sense.
>
> I do have one question. The second paragraph in the Public Interfaces
> section states:
>
> "If the connector generated excessive tasks after being reconfigured, then
> any existing tasks for the connector will be allowed to continue running,
> unless that existing set of tasks also exceeds the tasks.max property."
>
> Would not failing the connector land us in the second scenario of
> 'Rejected Alternatives'?
>
> From: dev@kafka.apache.org At: 11/11/23 00:27:44 UTC-5:00
> To: dev@kafka.apache.org
> Subject: [DISCUSS] KIP-1004: Enforce tasks.max property in Kafka Connect
>
> Hi all,
>
> I'd like to open up KIP-1004 for discussion:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1004%3A+Enforce+tasks.max+property+in+Kafka+Connect
>
> As a brief summary: this KIP proposes that the Kafka Connect runtime start
> failing connectors that generate a greater number of tasks than the
> tasks.max property, with an optional emergency override that can be used to
> continue running these (probably-buggy) connectors if absolutely necessary.
>
> I'll be taking time off for most of the next three weeks, so response latency
> may be a bit higher than usual, but I wanted to kick off the discussion in
> case we can land this in time for the upcoming 3.7.0 release.
>
> Cheers,
>
> Chris
