Hi Tom,
Yes, it makes sense to have a single KIP for enhancing these policies.

As Mickael pointed out, we need the create/alter topic policies to be 
able to access the current cluster metadata.
KIP-170 suggested a Provider interface with the minimal set of methods 
that we needed.
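
To make that concrete, the shape I have in mind is roughly the following 
(the names below are placeholders of mine, not the actual KIP-170 interface):

    // Sketch only: a narrow, read-only view of cluster state that the broker
    // could hand to a policy. Method names are illustrative placeholders.
    public interface ClusterStateProvider {

        // topics that currently exist in the cluster
        java.util.Set<String> topics();

        // number of partitions of the given topic, or -1 if it does not exist
        int partitionCount(String topic);

        // whether the topic is queued for deletion
        boolean isTopicMarkedForDeletion(String topic);
    }

A policy given such a provider could, for example, enforce a cluster-wide 
partition limit, or reject a create for a topic that is still pending deletion.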

However, on 22/06/2017 Ismael suggested changes to KIP-170 that I haven't 
managed to incorporate there yet.
Instead of a provider interface, Ismael suggested extending the 
RequestMetadata:

> Thanks for the KIP, Edoardo. A few comments:
> 
> 1. Have you considered extending RequestMetadata with the additional
> information you need? We could add Cluster to it, which has topic
> assignment information, for example. This way, there would be no need for
> a V2 interface.
> 
> 2. Something else that could be useful is passing an instance of `Session`
> so that one can provide custom behaviour depending on the logged in user.
> Would this be useful?
> 
> 3. For the delete case, we may consider passing a class instead of just a
> string to the validate method so that we have options if we need to extend
> it.
> 
> 4. Do we want to enhance the AlterConfigs policy as well?
>
> Ismael


His change proposal #1 would be fine with me, although I do not see how 
we could check isTopicMarkedForDeletion that way.
As for change #2, having the KafkaPrincipal instead of the Session works 
for me.
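
For clarity, this is how I read proposals #1 and #2 from a policy author's 
point of view - note that cluster() and principal() below are the proposed 
additions (they do not exist on today's CreateTopicPolicy.RequestMetadata), 
and the partition limit is only an example:

    import java.util.Map;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.errors.PolicyViolationException;
    import org.apache.kafka.common.security.auth.KafkaPrincipal;
    import org.apache.kafka.server.policy.CreateTopicPolicy;

    public class PartitionLimitPolicy implements CreateTopicPolicy {

        private static final int MAX_PARTITIONS = 10000; // example limit

        public void validate(RequestMetadata metadata) throws PolicyViolationException {
            Cluster cluster = metadata.cluster();        // proposed accessor
            KafkaPrincipal user = metadata.principal();  // proposed accessor

            int existing = 0;
            for (String topic : cluster.topics())
                existing += cluster.partitionCountForTopic(topic);
            int requested = metadata.numPartitions() == null ? 0 : metadata.numPartitions();

            if (existing + requested > MAX_PARTITIONS)
                throw new PolicyViolationException(
                        "Create by " + user + " rejected: cluster partition limit exceeded");
        }

        public void configure(Map<String, ?> configs) { }

        public void close() { }
    }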

As for the delete policy, our motivation is entirely different: we 
needed a policy essentially to ensure that the topic was not registered 
with dependent services that we did not want to break. For a delete 
policy to be usable in a friendly way, the Kafka protocol needs to be 
updated as in KIP-170 to include a message in the delete topic response 
(to explain why it failed).
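
To illustrate the kind of check we have in mind (the class name below is a 
placeholder, and no delete policy interface exists in Kafka yet):

    import org.apache.kafka.common.errors.PolicyViolationException;

    // Sketch only: a hypothetical topic deletion policy. Without the protocol
    // change above, the client would never see the explanation carried in the
    // exception message.
    public class ProtectDependentServicesPolicy {

        public void validate(String topic) throws PolicyViolationException {
            if (registeredWithDependentService(topic))
                throw new PolicyViolationException("Topic " + topic
                        + " is still registered with a dependent service;"
                        + " deregister it before deleting");
        }

        private boolean registeredWithDependentService(String topic) {
            return false; // placeholder for a lookup against our service registry
        }
    }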

I'm happy if you incorporate the enhancements to create/alter that allow a 
check against the cluster metadata, 
and leave the changes to delete out - for another KIP, or maybe I'll 
rewrite KIP-170 to cover them.

thanks
Edo

--------------------------------------------------

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Tom Bentley <t.j.bent...@gmail.com>
To:     dev@kafka.apache.org
Date:   25/09/2017 18:11
Subject:        Re: [DISCUSS] KIP-201: Rationalising Policy interfaces



Hi Mickael,

Thanks for the reply.

> Thanks for the KIP. Is this meant to supersede KIP-170?
> If so, one of our key requirements was to be able to access the
> topics/partitions list from the policy, so an administrator could
> enforce a partition limit for example.
>

It's not meant to replace KIP-170 because there are things in KIP-170 
which aren't purely about policy (the change in requests and responses, 
for example), which KIP-201 doesn't propose to implement. Obviously there 
is overlap when it comes to policies, and both KIPs start with different 
motivations for the policy changes they propose. I think it makes sense 
for a single KIP to address both sets of use cases if possible. I'm happy 
for that to be KIP-201 if that suits you.

I think the approach taken in KIP-170, of a Provider interface for
obtaining information that's not intrinsic to the request and a method to
inject that provider into the policy, is a good one. It retains a clean
distinction between the request metadata itself and the wider cluster
metadata, which I think is a good thing. If you're happy Mickael, I'll
update KIP-201 with something similar.


> Also instead of simply having the Java Principal object, could we have
> the KafkaPrincipal ? So policies could take advantage of custom
> KafkaPrincipal object (KIP-189).
>

Certainly.



