Hey everyone,

Thanks for the comments. I'll respond to each one-by-one. In the meantime, can 
we put this on the agenda for the KIP hangout for next week?

Thanks,
Aditya

________________________________________
From: Neha Narkhede [n...@confluent.io]
Sent: Sunday, May 03, 2015 9:48 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-21 Configuration Management

Thanks for starting this discussion, Aditya. A few questions/comments:

1. If you change the default values like it's mentioned in the KIP, do you
also overwrite the local config file as part of updating the default value?
If not, where does the admin look to find the default values, ZK or local
Kafka config file? What if a config value is different in both places?

2. I share Gwen's concern around making sure that popular config management
tools continue to work with this change. Would love to see how each of
those would work with the proposal in the KIP. I don't know enough about
each of the tools but seems like in some of the tools, you have to define
some sort of class with parameter names as config names. How will such
tools find out about the config values? In Puppet, if this means that each
Puppet agent has to read it from ZK, this means the ZK port has to be open
to pretty much every machine in the DC. This is a bummer and a very
confusing requirement. I'm not sure whether this is really a problem (each
of those tools might behave differently), but it is something worth paying
attention to.

3. The wrapper tools that let users read/change configs should not
depend on ZK, for the reason mentioned above. It's a pain to assume that the
ZK port is open from any machine that needs to run this tool. Ideally what
users want is a REST API to the brokers to change or read the config (ala
Elasticsearch), but in the absence of the REST API, we should think if we
can write the tool such that it just requires talking to the Kafka broker
port. This will require a config RPC.

4. Not sure if the KIP is the right place to discuss the design of propagating
the config changes to the brokers, but have you thought about just letting
the controller oversee the config changes and propagate them via RPC to the
brokers? That way, it is easier to express config changes that require
every broker to apply them before the change is considered complete. Maybe
this is not required, but it is hard to say without discussing the full set
of configs that need to be dynamic.
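One hedged sketch of what controller-driven propagation could look like (all names here are illustrative, not an actual Kafka API): the controller fans the new config out to every live broker over RPC and only reports the change complete once all brokers have acknowledged it.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: the controller creates one tracker per config
// change, removes each broker from the pending set as it acks the update
// RPC, and considers the change complete once the set is empty.
class ConfigChangeTracker {
    private final Set<Integer> pendingAcks = ConcurrentHashMap.newKeySet();

    ConfigChangeTracker(Set<Integer> liveBrokerIds) {
        pendingAcks.addAll(liveBrokerIds);
    }

    // Called when broker `brokerId` acknowledges the (hypothetical) update RPC.
    void ack(int brokerId) {
        pendingAcks.remove(brokerId);
    }

    // The change is "complete" only once every broker has applied it.
    boolean isComplete() {
        return pendingAcks.isEmpty();
    }
}
```

This is just one way to express the "all brokers must apply it" semantics mentioned above; a real design would also have to handle brokers that join or die mid-change.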

Thanks,
Neha

On Fri, May 1, 2015 at 12:53 PM, Jay Kreps <jay.kr...@gmail.com> wrote:

> Hey Aditya,
>
> This is great! A couple of comments:
>
> 1. Leaving the file config in place is definitely the least disturbance.
> But let's really think about getting rid of the files and just have one
> config mechanism. There is always a tendency to make everything pluggable
> which so often just leads to two mediocre solutions. Can we do the exercise
> of trying to consider fully getting rid of file config and seeing what goes
> wrong?
>
> 2. Do we need to model defaults? The current approach is that if you have a
> global config x it is overridden for a topic xyz by /topics/xyz/x, and I
> think this could be extended to /brokers/0/x. I think this is simpler. We
> need to specify the precedence for these overrides, e.g. if you override at
> the broker and topic level I think the topic level takes precedence.
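For concreteness, the override precedence described above (topic beats broker beats global default) could be resolved roughly like this; a hypothetical sketch, not actual Kafka code, with illustrative names:

```java
import java.util.Map;

// Illustrative lookup for the override hierarchy: a topic-level override
// beats a broker-level one, which beats the global default.
class ConfigLookup {
    private final Map<String, String> globalDefaults;   // global config x
    private final Map<String, String> brokerOverrides;  // e.g. stored under /brokers/0/x
    private final Map<String, String> topicOverrides;   // e.g. stored under /topics/xyz/x

    ConfigLookup(Map<String, String> globalDefaults,
                 Map<String, String> brokerOverrides,
                 Map<String, String> topicOverrides) {
        this.globalDefaults = globalDefaults;
        this.brokerOverrides = brokerOverrides;
        this.topicOverrides = topicOverrides;
    }

    // Resolve a key by walking the hierarchy from most to least specific.
    String get(String key) {
        if (topicOverrides.containsKey(key)) return topicOverrides.get(key);
        if (brokerOverrides.containsKey(key)) return brokerOverrides.get(key);
        return globalDefaults.get(key);
    }
}
```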
>
> 3. I recommend we have the producer and consumer config just be an override
> under client.id. The override is by client id and we can have separate
> properties for controlling quotas for producers and consumers.
>
> 4. Some configs can be changed just by updating the reference, others may
> require some action. An example of this is if you want to disable log
> compaction (assuming we wanted to make that dynamic) we need to call
> shutdown() on the cleaner. I think it may be required to register a
> listener callback that gets called when the config changes.
>
> 5. For handling the reference can you explain your plan a bit? Currently we
> have an immutable KafkaConfig object with a bunch of vals. That object, or
> individual values from it, gets injected all over the code base. I was
> thinking something like this:
> a. We retain the KafkaConfig object as an immutable object just as today.
> b. It is no longer legit to grab values out of that config if they are
> changeable.
> c. Instead of making KafkaConfig itself mutable we make KafkaConfiguration
> which has a single volatile reference to the current KafkaConfig.
> KafkaConfiguration is what gets passed into various components. So to
> access a config you do something like config.instance.myValue. When the
> config changes the config manager updates this reference.
> d. The KafkaConfiguration is the thing that allows doing the
> configuration.onChange("my.config", callback)
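The scheme in (a)-(d) could be sketched roughly as follows; illustrative Java, not the actual implementation, and all names are assumptions:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// (a) KafkaConfig stays an immutable snapshot of all config values.
final class KafkaConfig {
    private final Map<String, String> values;
    KafkaConfig(Map<String, String> values) { this.values = Map.copyOf(values); }
    String get(String key) { return values.get(key); }
}

// (c)/(d) KafkaConfiguration holds a single volatile reference to the
// current KafkaConfig and notifies listeners when the config manager
// swaps in a new one.
final class KafkaConfiguration {
    private volatile KafkaConfig instance;
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

    KafkaConfiguration(KafkaConfig initial) { this.instance = initial; }

    // Components read through the reference: config.instance().get("my.config")
    KafkaConfig instance() { return instance; }

    // (d) configuration.onChange("my.config", callback)
    void onChange(String key, Consumer<String> callback) {
        listeners.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(callback);
    }

    // Called by the config manager when a new config arrives (e.g. from ZK):
    // swap the reference, then fire callbacks for keys whose value changed.
    void update(KafkaConfig next) {
        KafkaConfig prev = instance;
        instance = next;
        listeners.forEach((key, cbs) -> {
            if (!Objects.equals(prev.get(key), next.get(key)))
                cbs.forEach(cb -> cb.accept(next.get(key)));
        });
    }
}
```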
>
> -Jay
>
> On Tue, Apr 28, 2015 at 3:57 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
> > Hey everyone,
> >
> > Wrote up a KIP to update topic, client and broker configs dynamically via
> > Zookeeper.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration
> >
> > Please read and provide feedback.
> >
> > Thanks,
> > Aditya
> >
> > PS: I've intentionally kept this discussion separate from KIP-5 since I'm
> > not sure if that is actively being worked on and I wanted to start with a
> > clean slate.
> >
>



--
Thanks,
Neha
