Build failed in Jenkins: kafka-trunk-jdk8 #3934

2019-09-30 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8911: Using proper WindowSerdes constructors in their implicit


--
[...truncated 2.65 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #844

2019-09-30 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8911: Using proper WindowSerdes constructors in their implicit


--
[...truncated 2.66 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [VOTE] KIP-429: Kafka Consumer Incremental Rebalance Protocol

2019-09-30 Thread Guozhang Wang
Hello folks,

One last update on the KIP: we've added a section with a list of newly
added metrics corresponding to consumer rebalance events as part of this
proposal as well; the detailed list can be found here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-ConsumerMetrics


Guozhang
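
For anyone who wants to inspect these metrics from an application once they
land, here is a minimal sketch (an illustration of mine, not code from the KIP)
that dumps whatever rebalance-related metrics the consumer registers, via
KafkaConsumer#metrics():

    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceMetricsDump {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("group.id", "metrics-demo");            // placeholder group id
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Filter by metric name rather than hard-coding names, since the exact
                // names are defined on the KIP page linked above.
                for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                    if (e.getKey().name().contains("rebalance")) {
                        System.out.println(e.getKey().group() + " / " + e.getKey().name()
                                + " = " + e.getValue().metricValue());
                    }
                }
            }
        }
    }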

On Mon, Aug 5, 2019 at 2:36 PM Guozhang Wang  wrote:

> Hello folks,
>
> I've also updated the wiki page by moving the augmented
> `ConsumerPartitionAssignor` out as a public API in `o.a.k.clients.consumer`
> and deprecating the old `PartitionAssignor` in
> `o.a.k.clients.consumer.internals`.
>
>
> Guozhang
>
> On Fri, Jun 28, 2019 at 11:30 AM Sophie Blee-Goldman 
> wrote:
>
>> It is now! I also updated the KIP to reflect that we will be adding a new
>> CooperativeStickyAssignor rather than making the existing StickyAssignor
>> cooperative, to prevent users who already use the StickyAssignor from
>> blindly upgrading and hitting potential problems during a rolling bounce.
>>
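
For reference, a sketch of how a consumer would opt in once the new assignor is
released; the class name and package are assumed from this discussion, so treat
it as illustrative rather than final:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CooperativeAssignorOptIn {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "cooperative-demo");        // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Opt in to the cooperative assignor explicitly; existing StickyAssignor
            // users are not switched over automatically, as discussed above.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    CooperativeStickyAssignor.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() and poll() as usual; rebalances become incremental.
            }
        }
    }
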
>> On Thu, Jun 27, 2019 at 8:15 PM Boyang Chen 
>> wrote:
>>
>> > Thank you Sophie for the update. Is this also reflected on the KIP?
>> >
>> > On Thu, Jun 27, 2019 at 3:28 PM Sophie Blee-Goldman <
>> sop...@confluent.io>
>> > wrote:
>> >
>> > > We would like to tack on some rebalance-related metrics as part of
>> this
>> > KIP
>> > > as well. The details can be found in the sub-task JIRA:
>> > > https://issues.apache.org/jira/browse/KAFKA-8609
>> > >
>> > > On Thu, May 30, 2019 at 5:09 PM Guozhang Wang 
>> > wrote:
>> > >
>> > > > +1 (binding) from me as well.
>> > > >
>> > > > Thanks to everyone who voted! I'm closing this vote thread with a
>> > > > tally:
>> > > >
>> > > > binding +1: 3 (Guozhang, Harsha, Matthias)
>> > > >
>> > > > non-binding +1: 2 (Boyang, Liquan)
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > > On Wed, May 22, 2019 at 9:22 PM Matthias J. Sax <
>> matth...@confluent.io
>> > >
>> > > > wrote:
>> > > >
>> > > > > +1 (binding)
>> > > > >
>> > > > >
>> > > > > On 5/22/19 7:37 PM, Harsha wrote:
>> > > > > > +1 (binding). Thanks for the KIP; looking forward to this being
>> > > > > > available in consumers.
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Harsha
>> > > > > >
>> > > > > > On Wed, May 22, 2019, at 12:24 AM, Liquan Pei wrote:
>> > > > > >> +1 (non-binding)
>> > > > > >>
>> > > > > >> On Tue, May 21, 2019 at 11:34 PM Boyang Chen <
>> bche...@outlook.com
>> > >
>> > > > > wrote:
>> > > > > >>
>> > > > > >>> Thank you Guozhang for all the hard work.
>> > > > > >>>
>> > > > > >>> +1 (non-binding)
>> > > > > >>>
>> > > > > >>> 
>> > > > > >>> From: Guozhang Wang 
>> > > > > >>> Sent: Wednesday, May 22, 2019 1:32 AM
>> > > > > >>> To: dev
>> > > > > >>> Subject: [VOTE] KIP-429: Kafka Consumer Incremental Rebalance
>> > > > Protocol
>> > > > > >>>
>> > > > > >>> Hello folks,
>> > > > > >>>
>> > > > > >>> I'd like to start the voting for KIP-429 now, details can be
>> > found
>> > > > > here:
>> > > > > >>>
>> > > > > >>>
>> > > > > >>>
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-RebalanceCallbackErrorHandling
>> > > > > >>>
>> > > > > >>> And the on-going PRs available for review:
>> > > > > >>>
>> > > > > >>> Part I: https://github.com/apache/kafka/pull/6528
>> > > > > >>> Part II: https://github.com/apache/kafka/pull/6778
>> > > > > >>>
>> > > > > >>>
>> > > > > >>> Thanks
>> > > > > >>> -- Guozhang
>> > > > > >>>
>> > > > > >>
>> > > > > >>
>> > > > > >> --
>> > > > > >> Liquan Pei
>> > > > > >> Software Engineer, Confluent Inc
>> > > > > >>
>> > > > >
>> > > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > > >
>> > >
>> >
>>
>
>
> --
> -- Guozhang
>


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-8609) Add consumer metrics for rebalances (part 9)

2019-09-30 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8609.
--
Resolution: Fixed

> Add consumer metrics for rebalances (part 9)
> 
>
> Key: KAFKA-8609
> URL: https://issues.apache.org/jira/browse/KAFKA-8609
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
>
> We would like to track some additional metrics on the consumer side related 
> to rebalancing as part of this KIP, including
>  # listener callback latency
>  ## partitions-revoked-time-avg
>  ## partitions-revoked-time-max
>  ## partitions-assigned-time-avg
>  ## partitions-assigned-time-max
>  ## partitions-lost-time-avg
>  ## partitions-lost-time-max
>  # rebalance rate (# rebalances per day)
>  ## rebalance-rate-per-day



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8963) Benchmark and optimize incremental fetch session handler

2019-09-30 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-8963:
---

 Summary: Benchmark and optimize incremental fetch session handler
 Key: KAFKA-8963
 URL: https://issues.apache.org/jira/browse/KAFKA-8963
 Project: Kafka
  Issue Type: Task
Reporter: Lucas Bradstreet


The FetchSessionHandler is a cause of high CPU usage in the replica fetcher for 
brokers with high partition counts. We should add a jmh benchmark and optimize 
the incremental fetch session building.
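
As a starting point, here is a sketch of the jmh scaffolding such a benchmark
could use. The calls into FetchSessionHandler itself are replaced with a plain
map-copy stand-in, since the handler's internal API is not reproduced in this
ticket, so only the harness should be taken at face value:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;

    import org.apache.kafka.common.TopicPartition;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Param;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public class FetchSessionBuildBenchmark {

        @Param({"100", "1000", "10000"})
        public int partitionCount;

        private Map<TopicPartition, Long> fetchOffsets;

        @Setup
        public void setup() {
            fetchOffsets = new HashMap<>();
            for (int i = 0; i < partitionCount; i++) {
                fetchOffsets.put(new TopicPartition("topic-" + (i % 50), i), 0L);
            }
        }

        @Benchmark
        public Map<TopicPartition, Long> buildFetchRequestData() {
            // Stand-in for FetchSessionHandler.Builder#add / #build: copying the full
            // partition map per request approximates the per-fetch session-building
            // cost that this ticket wants to measure and then optimize.
            return new HashMap<>(fetchOffsets);
        }
    }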



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] 2.2.2 Bug Fix Release

2019-09-30 Thread Sophie Blee-Goldman
Matthias is at Kafka Summit, but we should be able to get the fix for it
merged by the end of today. I will let you know when it's done.

Thanks!
Sophie

On Sat, Sep 28, 2019 at 8:24 PM Randall Hauch  wrote:

> Sounds fine, Matthias. Do you have an ETA for the fix?
>
> On Sat, Sep 28, 2019 at 12:57 PM Matthias J. Sax 
> wrote:
>
> > We recently identified the root cause of
> > https://issues.apache.org/jira/browse/KAFKA-8649 and plan to open a PR asap.
> >
> > It seems to be a critical fix as it affects upgrading Kafka Streams
> > applications. We would love to get a fix into the 2.2.2 release.
> >
> >
> > -Matthias
> >
> > On 9/23/19 10:45 PM, Matthias J. Sax wrote:
> > > Just FYI:
> > >
> > > There was no further feedback about how to handle the regression in
> > > 2.2.1 and hence we consider the proposal accepted, which implies that we
> > > can move forward with the 2.2.2 release.
> > >
> > >
> > > -Matthias
> > >
> > > On 9/13/19 5:23 PM, Randall Hauch wrote:
> > >> Thanks, Matthias. I'll get things ready locally but won't cut a
> release
> > >> candidate until everyone is ready.
> > >>
> > >> On Fri, Sep 13, 2019 at 4:13 PM Matthias J. Sax <
> matth...@confluent.io>
> > >> wrote:
> > >>
> > >>> Thanks Randall!
> > >>>
> > >>> Overall SGTM, however, we need to resolve the open question about the
> > >>> 2.2.1 regression before we can release 2.2.2.
> > >>>
> > >>> I sent an email about this (subject `[DISCUSS] Streams-Broker
> > >>> compatibility regression in 2.2.1 release`) couple of days ago.
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>> On 9/12/19 4:03 PM, Randall Hauch wrote:
> >  Hey everyone,
> > 
> >  I'd like to volunteer for the release manager of the 2.2.2 bug fix
> > >>> release.
> >  Kafka 2.2.1 was
> >  released on June 3 and so far 25 issues have been fixed since then.
> > Here
> > >>> is
> >  a complete
> >  list:
> > 
> > >>>
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.2
> > 
> >  The release plan is documented here:
> > 
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.2
> > 
> >  Thanks!
> > 
> >  Randall
> > 
> > >>>
> > >>>
> > >>
> > >
> >
> >
>


[jira] [Created] (KAFKA-8962) KafkaAdminClient#describeTopics always goes through the controller

2019-09-30 Thread Dhruvil Shah (Jira)
Dhruvil Shah created KAFKA-8962:
---

 Summary: KafkaAdminClient#describeTopics always goes through the 
controller
 Key: KAFKA-8962
 URL: https://issues.apache.org/jira/browse/KAFKA-8962
 Project: Kafka
  Issue Type: Bug
Reporter: Dhruvil Shah


KafkaAdminClient#describeTopics makes a MetadataRequest against the controller. 
We should consider routing the request to any broker in the cluster using 
`LeastLoadedNodeProvider` instead, so that we don't overwhelm the controller 
with these requests.
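
For context, a minimal usage sketch of the call in question (broker address and
topic name are placeholders); the controller-only routing described above
happens entirely under the hood:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class DescribeTopicsExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address

            try (AdminClient admin = AdminClient.create(props)) {
                // The MetadataRequest behind this call is what the issue proposes to
                // route to the least-loaded broker instead of the controller.
                Map<String, TopicDescription> topics = admin
                        .describeTopics(Collections.singletonList("my-topic"))
                        .all()
                        .get();
                topics.forEach((name, description) ->
                        System.out.println(name + ": " + description.partitions().size()
                                + " partitions"));
            }
        }
    }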



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Status of user Admin API for quotas

2019-09-30 Thread Colin McCabe
Hi Tom,

KIP-455 allows us to distinguish between replication traffic and reassignment 
traffic, if the brokers involved have been upgraded to the latest inter-broker 
protocol version.  Now that KIP-455 has been implemented, it should be possible 
to create a quota that does what people actually want here, which is throttling 
just reassignment traffic.  We've discussed doing this in the past, but there 
are definitely some details we haven't figured out -- like should it be a new 
quota type, or should we use the existing quota type but change the semantics a 
bit, etc.  This would definitely be a good project for someone to tackle to 
improve the reassignment experience.

best,
Colin
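
For readers unfamiliar with the existing knob: today's replication throttle is a
dynamic broker/topic config rather than a reassignment-specific quota, which is
exactly the limitation discussed above. A sketch of setting it through the admin
client follows; the broker id and byte rate are example values, so treat the
snippet as illustrative only:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class ReplicationThrottleExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address

            try (AdminClient admin = AdminClient.create(props)) {
                // Throttle leader-side replication on broker 1 to ~10 MB/s. Today this
                // applies to all throttled-replica traffic, not just reassignments.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
                AlterConfigOp setRate = new AlterConfigOp(
                        new ConfigEntry("leader.replication.throttled.rate", "10485760"),
                        AlterConfigOp.OpType.SET);
                Map<ConfigResource, Collection<AlterConfigOp>> updates =
                        Collections.singletonMap(broker, Collections.singletonList(setRate));
                admin.incrementalAlterConfigs(updates).all().get();
            }
        }
    }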


On Fri, Sep 27, 2019, at 02:13, Tom Bentley wrote:
> Hi Viktor and Colin,
> 
> Thanks for the update. Viktor, if you publish your KIP after summit then we
> can at least see what comes out of the discussion. Distinguishing between
> normal ISR traffic and reassignment traffic would be nice (if that's
> something your KIP would enable), and any such distinction would need to be
> made in the API.
> 
> Kind regards,
> 
> Tom
> 
> On Thu, Sep 26, 2019 at 6:46 PM Viktor Somogyi-Vass 
> wrote:
> 
> > Hi Tom, Colin,
> >
> > I abandoned my KIP (KIP-248) because at the time I unfortunately didn't have
> > time to continue working on it, and I wanted to rework it to remove the
> > Java-based client since the admin commands use Scala. Since it mostly had
> > consensus on the design at that time, I think it might be a good approach to
> > continue from. It was still on my to-do list, but I didn't want to interrupt
> > KIP-422 because it seemed to be active at the time.
> > If you're writing this email with the intent of continuing, then I can only
> > endorse you, or if you need someone to help I would be happy to join efforts
> > (even by reviewing and participating in the discussion).
> >
> > My KIP didn't include reassignment quotas, but I've been working on a KIP
> > about reworking replication throttling to introduce reassignment throttling,
> > so perhaps it would make sense to discuss them together. I'll try to
> > publish it soon, but I'm not sure I can until after the summit.
> >
> > Viktor
> >
> > On Fri, Sep 20, 2019 at 6:30 PM Colin McCabe  wrote:
> >
> > > Hi Tom,
> > >
> > > As you said, there were a few KIPs, but they seem to have become
> > inactive.
> > >
> > > It's kind of a tough problem-- probably at least as complex as the admin
> > > interface for ACLs.
> > >
> > > There's also the headache of how reassignment quotas should work.  We
> > > probably want to change that quota to actually throttle only reassignment
> > > traffic, not just any non-ISR traffic as it does now.  Or add a different
> > > quota type?
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Sep 20, 2019, at 04:38, Tom Bentley wrote:
> > > > Hi,
> > > >
> > > > I was wondering, what is the current status of efforts to add an
> > > > AdminClient API for managing user quotas? My trawl of the list archives
> > > > didn't turn up anything current (KIP-248 was rejected, KIP-422 was
> > > > discussed), but perhaps I missed something.
> > > >
> > > > Many thanks,
> > > >
> > > > Tom
> > > >
> > >
> >
>


Re: [VOTE] KIP-482: The Kafka Protocol should Support Optional Tagged Fields

2019-09-30 Thread Colin McCabe
On Sat, Sep 28, 2019, at 17:49, Magnus Edenhill wrote:
> Den mån 23 sep. 2019 kl 14:42 skrev Colin McCabe :
> 
> > On Fri, Sep 20, 2019, at 18:05, Jun Rao wrote:
> > > 101. We already use varInt in the message format. I assume that the
> > > protocol uses the same varInt representation?
> >
> > It uses a slightly different varint representation.  Basically, the
> > difference is that the existing representation uses serpentine encoding to
> > make representing negative numbers more efficient, at the cost of making
> > positive numbers less efficient.  Since tags (and lengths) can't be
> > negative, there is no need for serpentine encoding, and we can be more
> > efficient without it.
> >
> 
> While I don't see anything technically wrong with the proposed custom
> varint encoding, it does
> come at a price since it prevents client developers from using an existing,
> tested, and optimized zigzag varint implementation,
> and it makes the Kafka protocol more complex by now having 4 ways to encode
> integers.
> 
> I'm not strongly opposed, but unless there is an actual efficiency gain in
> using the custom encoding,
> I'd see us using the existing zigzag varint encoding instead, or having the
> protocol semantics state
> that a zero-sized tag value is the same as null, or that omission of a tag
> means null.
> 
> //Magnus
>

Hi Magnus,

I don't really think of this as a custom encoding.  Protobuf supports both 
varints that use zigzag and varints that don't, for example.  They call the 
latter "unsigned varints."  Zigzag encoding reserves half of the bit patterns 
for negative integers, and sometimes you know that something isn't going to be 
negative -- like a length.

There is a big efficiency gain from not using zigzag encoding when we don't 
need negative numbers.  It allows us to encode 2x as many integers before 
having to use more bytes.  Going from memory without checking the code, with 
zigzag encoding we have to switch to a two-byte encoding once values exceed 63, 
whereas without it we can go up to 127 in a single byte.

It should be simple to implement both variations.  Basically the code for 
implementing varints with zigzag encoding is:

> public static void writeVarint(int value, DataOutput out) throws IOException {
>     writeUnsignedVarint((value << 1) ^ (value >> 31), out);
> }

You can zigzag encode an int with "(value << 1) ^ (value >> 31)" and then just 
do the normal variable-length encoding.  This is not something that needs a lot 
of testing or optimization because it's just a single bit-munging step that you 
either do or don't do, like choosing to negate the int or something.  Decoding 
is similar -- just a simple bit manipulation expression that you either do or 
omit.
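
To make that concrete for client implementers, here is a minimal sketch of both
variations (an illustration of the idea, not necessarily Kafka's exact ByteUtils
code):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    public final class VarintSketch {

        // Unsigned varint: 7 bits per byte, continuation bit set on all but the last.
        public static void writeUnsignedVarint(int value, DataOutput out) throws IOException {
            while ((value & 0xFFFFFF80) != 0) {
                out.writeByte((value & 0x7F) | 0x80);
                value >>>= 7;
            }
            out.writeByte(value);
        }

        public static int readUnsignedVarint(DataInput in) throws IOException {
            int value = 0;
            for (int shift = 0; shift <= 28; shift += 7) {
                byte b = in.readByte();
                value |= (b & 0x7F) << shift;
                if ((b & 0x80) == 0)
                    return value;
            }
            throw new IOException("Varint is longer than 5 bytes");
        }

        // Signed (zigzag) varint: add or remove the single bit-munging step.
        public static void writeVarint(int value, DataOutput out) throws IOException {
            writeUnsignedVarint((value << 1) ^ (value >> 31), out);
        }

        public static int readVarint(DataInput in) throws IOException {
            int raw = readUnsignedVarint(in);
            return (raw >>> 1) ^ -(raw & 1);
        }
    }

The only difference between the signed and unsigned forms is that one shift/xor
step, which matches the point above that supporting both adds very little
implementation burden.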

We can't make omission of a tag mean null because a lot of datatypes in the 
Kafka protocol just aren't nullable.  int8, int16, int32, int64, UUID, etc.  
Even for datatypes that are nullable, the Kafka protocol lets us choose whether 
null is even an allowed value, let alone the default.  Keep in mind that 
existing fields can be converted to tagged fields in new message versions, so 
we have to work with the existing semantics.

best,
Colin


Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-30 Thread Nikolay Izhikov
Hello, Bruno.

Thanks for the feedback.
The KIP [1] has been updated according to your comments.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
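
For readers following along, a rough sketch of the kind of Serde under
discussion (an illustration of mine; the exact signatures are in the KIP):

    import org.apache.kafka.common.serialization.Deserializer;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serializer;

    public class VoidSerdeSketch implements Serde<Void> {

        public static class VoidSerializer implements Serializer<Void> {
            @Override
            public byte[] serialize(String topic, Void data) {
                if (data != null)
                    throw new IllegalArgumentException("Data must be null for a VoidSerializer");
                return null;
            }
        }

        public static class VoidDeserializer implements Deserializer<Void> {
            @Override
            public Void deserialize(String topic, byte[] data) {
                if (data != null)
                    throw new IllegalArgumentException("Payload must be null for a VoidDeserializer");
                return null;
            }
        }

        @Override
        public Serializer<Void> serializer() {
            return new VoidSerializer();
        }

        @Override
        public Deserializer<Void> deserializer() {
            return new VoidDeserializer();
        }
    }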

On Mon, 30/09/2019 at 16:51 +0200, Bruno Cadonna wrote:
> Hi Nikolay,
> 
> Thank you for the KIP.
> 
> I have a couple of minor comments:
> 
> 1. I would not put implementation details into the KIP as you did with
> the bodies of the constructor of the `VoidSerde` and the `serialize`
> and `deserialize` methods. IMO, the signatures suffice. The
> implementation is then discussed on the PR.
> 
> 2. I guess you mean that you want to add the `VoidSerde` to the
> `Serdes` class when you say "I want to add VoidSerde to main SerDe
> collection.". If my guess is right, then please be more specific and
> mention the `Serdes` class there.
> 
> 3. The rejected alternative in the KIP is rather a workaround than a
> rejected alternative. IMO it would be better to instead list the
> rejected names for the Serde there if anything.
> 
> Best,
> Bruno
> 
> On Sat, Sep 28, 2019 at 1:42 PM Nikolay Izhikov  wrote:
> > 
> > Hello.
> > 
> > Any additional comments?
> > Should I start a vote for this KIP?
> > 
> > On Tue, 24/09/2019 at 16:20 +0300, Nikolay Izhikov wrote:
> > > Hello,
> > > 
> > > KIP [1] updated to VoidSerde.
> > > 
> > > [1] 
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > 
> > > 
> > > В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > > > Ah it is!  +1 to VoidSerde
> > > > 
> > > > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> > > > wrote:
> > > > 
> > > > > Because the actual data type is `Void`, I am wondering if 
> > > > > `VoidSerde`
> > > > > might be a more descriptive name?
> > > > > 
> > > > > -Matthias
> > > > > 
> > > > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > > > Hello, guys
> > > > > > 
> > > > > > Any additional feedback on this KIP?
> > > > > > Should I start a vote?
> > > > > > 
> > > > > > On Fri, 20/09/2019 at 08:52 +0300, Nikolay Izhikov wrote:
> > > > > > > Hello, Andrew.
> > > > > > > 
> > > > > > > OK, if nobody minds, let's change it to Null.
> > > > > > > 
> > > > > > > On Thu, 19/09/2019 at 13:54 -0400, Andrew Otto wrote:
> > > > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > > > 
> > > > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > > > > > > > 
> > > > > 
> > > > > wrote:
> > > > > > > > 
> > > > > > > > > Hello, Andrew.
> > > > > > > > > 
> > > > > > > > > It seems that using null or nothing is a matter of taste. I don't mind
> > > > > > > > > if we call it NullSerde.
> > > > > > > > > 
> > > > > > > > > Thu, 19 Sep 2019, 20:28 Andrew Otto :
> > > > > > > > > 
> > > > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > > > 
> > > > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > > > > >  > > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > > All,
> > > > > > > > > > > 
> > > > > > > > > > > I'd like to start a discussion for adding a NothingSerde 
> > > > > > > > > > > to Serdes.
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > 
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > > > 
> > > > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > > > 
> > > > > 
> > > > > 




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-30 Thread Bruno Cadonna
Hi Nikolay,

Thank you for the KIP.

I have a couple of minor comments:

1. I would not put implementation details into the KIP as you did with
the bodies of the constructor of the `VoidSerde` and the `serialize`
and `deserialize` methods. IMO, the signatures suffice. The
implementation is then discussed on the PR.

2. I guess you mean that you want to add the `VoidSerde` to the
`Serdes` class when you say "I want to add VoidSerde to main SerDe
collection.". If my guess is right, then please be more specific and
mention the `Serdes` class there.

3. The rejected alternative in the KIP is rather a workaround than a
rejected alternative. IMO it would be better to instead list the
rejected names for the Serde there if anything.

Best,
Bruno

On Sat, Sep 28, 2019 at 1:42 PM Nikolay Izhikov  wrote:
>
> Hello.
>
> Any additional comments?
> Should I start a vote for this KIP?
>
> On Tue, 24/09/2019 at 16:20 +0300, Nikolay Izhikov wrote:
> > Hello,
> >
> > KIP [1] updated to VoidSerde.
> >
> > [1] 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> >
> >
> > В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > > Ah it is!  +1 to VoidSerde
> > >
> > > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> > > wrote:
> > >
> > > > Because the actual data type is `Void`, I am wondering if `VoidSerde`
> > > > might be a more descriptive name?
> > > >
> > > > -Matthias
> > > >
> > > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > > Hello, guys
> > > > >
> > > > > Any additional feedback on this KIP?
> > > > > Should I start a vote?
> > > > >
> > > > > On Fri, 20/09/2019 at 08:52 +0300, Nikolay Izhikov wrote:
> > > > > > Hello, Andrew.
> > > > > >
> > > > > > OK, if nobody minds, let's change it to Null.
> > > > > >
> > > > > > On Thu, 19/09/2019 at 13:54 -0400, Andrew Otto wrote:
> > > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > >
> > > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > > > > > > 
> > > >
> > > > wrote:
> > > > > > >
> > > > > > > > Hello, Andrew.
> > > > > > > >
> > > > > > > > It seems that using null or nothing is a matter of taste. I don't mind
> > > > > > > > if we call it NullSerde.
> > > > > > > >
> > > > > > > > Thu, 19 Sep 2019, 20:28 Andrew Otto :
> > > > > > > >
> > > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > >
> > > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > > > >  > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > All,
> > > > > > > > > >
> > > > > > > > > > I'd like to start a discussion for adding a NothingSerde to 
> > > > > > > > > > Serdes.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > >
> > > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > >
> > > >
> > > >


Re: Vulnerabilities found for jackson-databind-2.9.9.jar and guava-20.0.jar in latest Apache-kafka latest version 2.3.0

2019-09-30 Thread David Arthur
Namrata,

I'll work on producing the next RC for 2.3.1 once this and a couple of
patches are available. A [VOTE] email will be sent out once the next RC is
ready.

Thanks,
David


On Mon, Sep 30, 2019 at 3:16 AM namrata kokate 
wrote:

> Thank you for the update. I would like to know when I can expect this
> release.
>
> Regards,
> Namrata kokate
>
> On Sat, Sep 28, 2019, 11:21 PM Matthias J. Sax 
> wrote:
>
> > Thanks Namrata,
> >
> > I think we should fix this for upcoming 2.3.1 release.
> >
> > -Matthias
> >
> >
> > On 9/26/19 10:58 PM, namrata kokate wrote:
> > > Hi,
> > >
> > > I am currently using the latest Apache Kafka version, 2.3.0, from the
> > > official site https://kafka.apache.org/downloads. However, when I deployed
> > > the binary on the containers, I saw vulnerabilities reported for two jars:
> > > jackson-databind-2.9.9.jar and guava-20.0.jar.
> > >
> > > I can see these vulnerabilities have been removed in
> > > jackson-databind-2.9.10.jar and guava-24.1.1-jre.jar, but the
> > > Apache Kafka 2.3.0 release does not include these new jars. Can you
> > > help me with this?
> > >
> > > Regards,
> > > Namrata Kokate
> > >
> >
> >
>


-- 
David Arthur


Regarding permissions for creating KIP

2019-09-30 Thread RABI K.C.
Hello,

I am new to Kafka and need to create a KIP for KAFKA-8953. I was going through
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
to create the KIP; however, it seems I don't have permission to create one.

*Wiki id: rabikumar.kc*

Please do let me know if anything else is required.

With regards,
Rabi


join kafka contribution

2019-09-30 Thread Xu Jianhai
Hi,
I am an engineer at Bytedance, living in China. Because my job is related to
Kafka, I would like to join the Kafka community and make some contributions,
but I don't know how to get started. I have tried to pick up some unit tests,
but I get no response when I comment "maybe I can try it", so the issues are
never assigned to me. What would be a better way to proceed?
Thanks.


[jira] [Created] (KAFKA-8961) Unable to create secure JDBC connection through Kafka Connect

2019-09-30 Thread Monika Bainsala (Jira)
Monika Bainsala created KAFKA-8961:
--

 Summary: Unable to create secure JDBC connection through Kafka 
Connect
 Key: KAFKA-8961
 URL: https://issues.apache.org/jira/browse/KAFKA-8961
 Project: Kafka
  Issue Type: Bug
  Components: build, clients, KafkaConnect, network
Affects Versions: 2.2.1
Reporter: Monika Bainsala


As per the article referenced below, to enable a secure JDBC connection we can 
pass an updated URL parameter when calling the create-connector REST API.

Example:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=X)(PORT=1520)))(CONNECT_DATA=(SERVICE_NAME=XXAP)));EncryptionLevel=requested;EncryptionTypes=RC4_256;DataIntegrityLevel=requested;DataIntegrityTypes=MD5"

 

But this approach is not working currently; kindly help in resolving this issue.

 

Reference :

[https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Vulnerabilities found for jackson-databind-2.9.9.jar and guava-20.0.jar in latest Apache-kafka latest version 2.3.0

2019-09-30 Thread namrata kokate
Thank you for the update. I would like to know when I can expect this
release.

Regards,
Namrata kokate

On Sat, Sep 28, 2019, 11:21 PM Matthias J. Sax 
wrote:

> Thanks Namrata,
>
> I think we should fix this for upcoming 2.3.1 release.
>
> -Matthias
>
>
> On 9/26/19 10:58 PM, namrata kokate wrote:
> > Hi,
> >
> > I am currently using the latest Apache Kafka version, 2.3.0, from the official
> > site https://kafka.apache.org/downloads. However, when I deployed the binary
> > on the containers, I saw vulnerabilities reported for two jars:
> > jackson-databind-2.9.9.jar and guava-20.0.jar.
> >
> > I can see these vulnerabilities have been removed in
> > jackson-databind-2.9.10.jar and guava-24.1.1-jre.jar, but the
> > Apache Kafka 2.3.0 release does not include these new jars. Can you help
> > me with this?
> >
> > Regards,
> > Namrata Kokate
> >
>
>