Hi Rajini,

The current ConfigCommand would still be usable, so those who wish to set
up SCRAM credentials or initial quotas would be able to continue doing it
through kafka-run-class.sh.
In an ideal world I'd keep this in the current ConfigCommand so we
wouldn't mix the ZooKeeper-based and AdminClient-based configuration
paths. Perhaps I could create a kafka-configs-zookeeper.sh shell script to
shorten the kafka-run-class invocation.
What do you and others think?

Thanks,
Viktor



On Mon, Apr 16, 2018 at 10:15 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:

> Hi Viktor,
>
> The KIP proposes to remove the ability to configure using ZooKeeper. This
> means we would no longer be able to start up a cluster with SCRAM
> credentials: if the brokers use SCRAM for inter-broker communication, the
> credentials must be created before the brokers can start, and the
> AdminClient itself needs SCRAM credentials before it can create new ones.
> Similarly, we would no longer be able to configure quotas until at least
> one broker has been started. Perhaps we ought to retain the ability to
> configure using ZooKeeper for these initialization scenarios and support
> only AdminClient for dynamic updates?
>
> What do others think?
>
> Regards,
>
> Rajini
>
> On Sun, Apr 15, 2018 at 10:28 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > +1
> > -------- Original message --------
> > From: zhenya Sun <toke...@126.com>
> > Date: 4/15/18 12:42 AM (GMT-08:00)
> > To: dev <dev@kafka.apache.org>
> > Cc: dev <dev@kafka.apache.org>
> > Subject: Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient
> > non-binding +1
> >
> > On 04/15/2018 15:41, Attila Sasvári wrote:
> > Thanks for updating the KIP.
> >
> > +1 (non-binding)
> >
> > Viktor Somogyi <viktorsomo...@gmail.com> wrote (on Monday, Apr 9, 2018,
> > 16:49):
> >
> > > Hi Magnus,
> > >
> > > Thanks for the heads up, added the endianness to the KIP. Here is the
> > > current text:
> > >
> > > "Double
> > > A new type needs to be added to transfer quota values. Since the
> protocol
> > > classes in Kafka already uses ByteBuffers it is logical to use their
> > > functionality for serializing doubles. The serialization is basically a
> > > representation of the specified floating-point value according to the
> > IEEE
> > > 754 floating-point "double format" bit layout. The ByteBuffer
> serializer
> > > writes eight bytes containing the given double value, in Big Endian
> byte
> > > order, into this buffer at the current position, and then increments
> the
> > > position by eight.
> > >
> > > The implementation will be defined in
> > > org.apache.kafka.common.protocol.types with the other protocol types
> > and it
> > > will have no default value much like the other types available in the
> > > protocol."
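> > >
> > > (For non-Java client implementers, here is a minimal sketch of the
> > > serialization described above. It assumes nothing beyond the standard
> > > java.nio API and is only an illustration, not the actual Kafka protocol
> > > type implementation; the class name below is made up.)
> > >
> > >     import java.nio.ByteBuffer;
> > >     import java.nio.ByteOrder;
> > >
> > >     public class DoubleSerializationSketch {
> > >         public static void main(String[] args) {
> > >             double quotaValue = 1024.5;
> > >
> > >             // ByteBuffer's default byte order is already BIG_ENDIAN;
> > >             // it is set explicitly here only for clarity.
> > >             ByteBuffer buffer = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN);
> > >
> > >             // putDouble() effectively writes Double.doubleToLongBits(quotaValue)
> > >             // as an 8-byte long -- the IEEE 754 "double format" bit layout --
> > >             // and advances the position by eight.
> > >             buffer.putDouble(quotaValue);
> > >
> > >             buffer.flip();
> > >             System.out.println(buffer.getDouble()); // prints 1024.5
> > >         }
> > >     }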
> > >
> > > Also, I haven't changed the protocol docs yet but will do so in my PR
> for
> > > this feature.
> > >
> > > Let me know if you'd still add something.
> > >
> > > Regards,
> > > Viktor
> > >
> > >
> > > On Mon, Apr 9, 2018 at 3:32 PM, Magnus Edenhill <mag...@edenhill.se>
> > > wrote:
> > >
> > > > Hi Viktor,
> > > >
> > > > since serialization of floats isn't as straightforward as that of
> > > > integers, please specify the exact serialization format of DOUBLE in
> > > > the protocol docs (e.g., IEEE 754), including endianness (big-endian
> > > > please).
> > > >
> > > > This will help the non-java client ecosystem.
> > > >
> > > > Thanks,
> > > > Magnus
> > > >
> > > >
> > > > 2018-04-09 15:16 GMT+02:00 Viktor Somogyi <viktorsomo...@gmail.com>:
> > > >
> > > > > Hi Attila,
> > > > >
> > > > > 1. It uses ByteBuffers, which in turn will use Double.doubleToLongBits
> > > > > to convert the double value to a long, and that long will be written
> > > > > into the buffer. I've updated the KIP with this.
> > > > > 2. Good idea, modified it.
> > > > > 3. During the discussion I remember we didn't really decide which one
> > > > > would be better, but I agree that a wrapper class ensuring the list
> > > > > used as a key is immutable is a good idea and would ease the life of
> > > > > people using the interface. More importantly, it would make sure that
> > > > > we always get the same hashCode. I had created wrapper classes for the
> > > > > map value as well, but that was reverted to keep things consistent;
> > > > > for the key, though, I don't think we would break consistency. I
> > > > > updated the KIP. (A rough sketch of such a wrapper follows below.)
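> > > > >
> > > > > A minimal sketch of what such an immutable key wrapper could look like
> > > > > (the class name ConfigResourceList and its exact shape are just an
> > > > > illustration here, not what is specified in the KIP):
> > > > >
> > > > >     import java.util.ArrayList;
> > > > >     import java.util.Collections;
> > > > >     import java.util.List;
> > > > >
> > > > >     import org.apache.kafka.common.config.ConfigResource;
> > > > >
> > > > >     // Hypothetical wrapper: an immutable list of resources that can be
> > > > >     // safely used as a map key because its contents -- and therefore
> > > > >     // its hashCode() -- can never change after construction.
> > > > >     public final class ConfigResourceList {
> > > > >         private final List<ConfigResource> resources;
> > > > >
> > > > >         public ConfigResourceList(List<ConfigResource> resources) {
> > > > >             // Defensive copy wrapped in an unmodifiable view.
> > > > >             this.resources = Collections.unmodifiableList(new ArrayList<>(resources));
> > > > >         }
> > > > >
> > > > >         public List<ConfigResource> resources() {
> > > > >             return resources;
> > > > >         }
> > > > >
> > > > >         @Override
> > > > >         public boolean equals(Object o) {
> > > > >             if (this == o) return true;
> > > > >             if (!(o instanceof ConfigResourceList)) return false;
> > > > >             return resources.equals(((ConfigResourceList) o).resources);
> > > > >         }
> > > > >
> > > > >         @Override
> > > > >         public int hashCode() {
> > > > >             return resources.hashCode();
> > > > >         }
> > > > >     }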
> > > > >
> > > > > Thanks,
> > > > > Viktor
> > > > >
> > > > >
> > > > > On Tue, Apr 3, 2018 at 1:27 PM, Attila Sasvári <
> asasv...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Thanks for working on it Viktor.
> > > > > >
> > > > > > It looks good to me, but I have some questions:
> > > > > > - I see a new type DOUBLE is used for quota_value, and it is not
> > > > > > listed among the primitive types in the Kafka protocol guide. Can
> > > > > > you add some more details?
> > > > > > - I am not sure that using an environment variable (i.e.
> > > > > > USE_OLD_COMMAND) is the best way to control the behaviour of
> > > > > > kafka-configs.sh. In other scripts (e.g. console-consumer) an
> > > > > > argument is passed (e.g. --new-consumer). If we still want to use
> > > > > > it, then I would suggest something like
> > > > > > USE_OLD_KAFKA_CONFIG_COMMAND. What do you think?
> > > > > > - I have seen maps like Map<List<ConfigResource>,
> > > > > > Collection<QuotaType>>. If List<ConfigResource> is the key type,
> > > > > > you should make sure that this List is immutable. Have you
> > > > > > considered introducing a new wrapper class?
> > > > > >
> > > > > > Regards,
> > > > > > - Attila
> > > > > >
> > > > > > On Thu, Mar 29, 2018 at 1:46 PM, zhenya Sun <toke...@126.com>
> > wrote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > >
> > > > > > > On 03/29/2018 19:40, Sandor Murakozi wrote:
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > Thanks for the KIP, Viktor
> > > > > > >
> > > > > > > On Wed, Mar 21, 2018 at 5:41 PM, Viktor Somogyi <
> > > > > viktorsomo...@gmail.com
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Everyone,
> > > > > > > >
> > > > > > > > I started a vote on KIP-248
> > > > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient#KIP-248-CreateNewConfigCommandThatUsesTheNewAdminClient-DescribeQuotas>
> > > > > > > > a few weeks ago, but at the time I got a couple more comments and it
> > > > > > > > was very close to the 1.1 feature freeze and people were occupied
> > > > > > > > with that, so I wanted to restart the vote on this.
> > > > > > > >
> > > > > > > >
> > > > > > > > *Summary of the KIP*
> > > > > > > > For those who don't have the context, I thought I'd summarize it in
> > > > > > > > a few sentences.
> > > > > > > > *Problem & Motivation: *The basic problem that the KIP tries to solve
> > > > > > > > is that kafka-configs.sh (which in turn uses the ConfigCommand class)
> > > > > > > > uses a direct ZooKeeper connection. This is not desirable, as going
> > > > > > > > around the broker opens up security issues and prevents the tool from
> > > > > > > > being used in deployments where only the brokers are exposed to
> > > > > > > > clients. A somewhat smaller motivation is to rewrite the tool in Java
> > > > > > > > as part of the tools component, so we can get rid of requiring the
> > > > > > > > core module on the classpath for the kafka-configs tool.
> > > > > > > > *Solution:*
> > > > > > > > - I've designed two new protocols: DescribeQuotas and AlterQuotas.
> > > > > > > > - I've also redesigned the output format of the command line tool so
> > > > > > > > it provides a nicer result.
> > > > > > > > - kafka-configs.[sh/bat] will use a new Java-based ConfigCommand that
> > > > > > > > is placed in tools.
> > > > > > > >
> > > > > > > >
> > > > > > > > I'd be happy to receive any votes or feedback on this.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Viktor
> > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
