Build failed in Jenkins: kafka-trunk-jdk8 #3702

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[cshapi] KAFKA-4893; Fix deletion and moving of topics with long names

[github] Improve logging in the consumer for epoch updates (#6879)

[github] MINOR: Update docs to say 2.3 (#6881)

--
[...truncated 3.63 MB...]
org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testBackgroundUpdateTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testBackgroundUpdateTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testBackgroundConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testBackgroundConnectorDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreTargetStateUnexpectedDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreTargetStateUnexpectedDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreConnectorDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreZeroTasks STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testPutTaskConfigsDoesNotResolveAllInconsistencies STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testPutTaskConfigs STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testReloadOnStart STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testReloadOnStart PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore STARTED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet STARTED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteNullValueFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteNullKeyFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testFlushFailureReplacesOffsets STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testCancelBeforeAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testCancelAfterAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.integration.ExampleConnectIntegrationTest > testSourceConnector STARTED


[VOTE] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Cyrus Vafadari
Hi all,

I'd like to start voting on the following KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector

Discussion thread:
https://lists.apache.org/thread.html/bf7c92224aa798336c14d7e96ec8f2e3406c61879ec381a50652acfe@%3Cdev.kafka.apache.org%3E

Thanks!

Cyrus


Re: [DISCUSS] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Cyrus Vafadari
Thanks, everyone, for the feedback.

It looks like there's support, so I'll open the voting thread.

Cyrus

On Tue, Jun 4, 2019 at 9:02 PM Randall Hauch  wrote:

> Sorry that I was not clear. Yes, I was suggesting the state-specific counts
> were in addition to the simple task count you originally proposed. Thanks
> for taking my suggestion into account -- the updated KIP looks great.
>
> Thanks for contributing this improvement, Cyrus!
>
> Best regards,
>
> Randall
>
> On Tue, Jun 4, 2019 at 6:35 PM Cyrus Vafadari  wrote:
>
> > Randall,
> >
> > I've updated the KIP to include all of your recommendations!
> >
> > Cyrus
> >
> > On Tue, Jun 4, 2019 at 2:55 PM Cyrus Vafadari  wrote:
> >
> > > Randall,
> > >
> > > I plan to update the public details section and the performance impact as
> > > you recommended.
> > >
> > > Regarding state-specific counts, I do agree this is a useful addition.
> > > Before I make the change, I'd like to agree that these state-specific
> > > counts should be in addition to the already-proposed total tasks count
> > > (even though it might be redundant, it is robust against new/missed connector
> > > states, and is a useful metric in its own right), yes?
> > >
> > > Cyrus
> > >
> > > On Tue, Jun 4, 2019 at 12:24 PM Randall Hauch  wrote:
> > >
> > >> Thanks, Cyrus -- this will be quite useful. I do have a few
> > >> comments/requests.
> > >>
> > >> Can you please be more specific about the public details about the metric?
> > >> What is the MBean name on which the metric will appear? For example, the AK
> > >> documentation (https://kafka.apache.org/documentation/#connect_monitoring)
> > >> defines all of the metrics and where they will appear, as does
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework.
> > >>
> > >> Secondly, while a metric showing the total number of tasks is very useful,
> > >> might it be worth considering also adding metrics for the number of running
> > >> tasks, the number of paused tasks, and the number of failed tasks for a
> > >> connector. It might require using the herder's `connectorStatus(String
> > >> connectorName)` method instead, but that appears to be just as effective at
> > >> using the local snapshot of the status store cache.
> > >>
> > >> Thirdly, it might be useful for the KIP to address the potential
> > >> performance impact of computing these methods. Again, IIUC, the herder
> > >> methods that the proposal mentions use the status and config stores caches
> > >> only, so the impact should be negligible.
> > >>
> > >> Best regards,
> > >>
> > >> Randall
> > >>
> > >> On Sun, Jun 2, 2019 at 10:05 PM Ryanne Dolan  wrote:
> > >>
> > >> > Cyrus, I agree this would be useful.
> > >> >
> > >> > Ryanne
> > >> >
> > >> > On Fri, May 31, 2019, 7:10 PM Oleksandr Diachenko <odiache...@apache.org> wrote:
> > >> >
> > >> > >
> > >> > > On 2019/05/30 06:06:12, Cyrus Vafadari  wrote:
> > >> > > > Hello Dev,
> > >> > > >
> > >> > > > I'd like to start the discussion of KIP-475: New Metric to Measure Number
> > >> > > > of Tasks on a Connector.
> > >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
> > >> > > >
> > >> > > > The proposal is pretty straightforward -- to add a new metric to Connect to
> > >> > > > measure the number of tasks on a Connector. Currently, we support this on
> > >> > > > Worker level, so this KIP just adds another metric to support this
> > >> > > > per-connector.
> > >> > > >
> > >> > > > There is also a PR:
> > >> > > > https://github.com/apache/kafka/pull/6843
> > >> > > >
> > >> > > > Thanks,
> > >> > > >
> > >> > > > Cyrus
> > >> > > >
> > >> > >
> > >> > > Hi Cyrus,
> > >> > >
> > >> > > That sounds like a useful addition.
> > >> > >
> > >> > > Regards, Alex.
> > >> > >
> > >> >
> > >>
> > >
> >
>


Re: [DISCUSS] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Randall Hauch
Sorry that I was not clear. Yes, I was suggesting the state-specific counts
were in addition to the simple task count you originally proposed. Thanks
for taking my suggestion into account -- the updated KIP looks great.

Thanks for contributing this improvement, Cyrus!

Best regards,

Randall

On Tue, Jun 4, 2019 at 6:35 PM Cyrus Vafadari  wrote:

> Randall,
>
> I've updated the KIP to include all of your recommendations!
>
> Cyrus
>
> On Tue, Jun 4, 2019 at 2:55 PM Cyrus Vafadari  wrote:
>
> > Randall,
> >
> > I plan to update the public details section and the performance impact as
> > you recommended.
> >
> > Regarding state-specific counts, I do agree this is a useful addition.
> > Before I make the change, I'd like to agree that these state-specific
> > counts should be in addition to the already-proposed total tasks count
> > (even though it might be redundant, it is robust against new/missed connector
> > states, and is a useful metric in its own right), yes?
> >
> > Cyrus
> >
> > On Tue, Jun 4, 2019 at 12:24 PM Randall Hauch  wrote:
> >
> >> Thanks, Cyrus -- this will be quite useful. I do have a few
> >> comments/requests.
> >>
> >> Can you please be more specific about the public details about the metric?
> >> What is the MBean name on which the metric will appear? For example, the AK
> >> documentation (https://kafka.apache.org/documentation/#connect_monitoring)
> >> defines all of the metrics and where they will appear, as does
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework.
> >>
> >> Secondly, while a metric showing the total number of tasks is very useful,
> >> might it be worth considering also adding metrics for the number of running
> >> tasks, the number of paused tasks, and the number of failed tasks for a
> >> connector. It might require using the herder's `connectorStatus(String
> >> connectorName)` method instead, but that appears to be just as effective at
> >> using the local snapshot of the status store cache.
> >>
> >> Thirdly, it might be useful for the KIP to address the potential
> >> performance impact of computing these methods. Again, IIUC, the herder
> >> methods that the proposal mentions use the status and config stores caches
> >> only, so the impact should be negligible.
> >>
> >> Best regards,
> >>
> >> Randall
> >>
> >> On Sun, Jun 2, 2019 at 10:05 PM Ryanne Dolan  wrote:
> >>
> >> > Cyrus, I agree this would be useful.
> >> >
> >> > Ryanne
> >> >
> >> > On Fri, May 31, 2019, 7:10 PM Oleksandr Diachenko <odiache...@apache.org> wrote:
> >> >
> >> > >
> >> > > On 2019/05/30 06:06:12, Cyrus Vafadari  wrote:
> >> > > > Hello Dev,
> >> > > >
> >> > > > I'd like to start the discussion of KIP-475: New Metric to Measure Number
> >> > > > of Tasks on a Connector.
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
> >> > > >
> >> > > > The proposal is pretty straightforward -- to add a new metric to Connect to
> >> > > > measure the number of tasks on a Connector. Currently, we support this on
> >> > > > Worker level, so this KIP just adds another metric to support this
> >> > > > per-connector.
> >> > > >
> >> > > > There is also a PR:
> >> > > > https://github.com/apache/kafka/pull/6843
> >> > > >
> >> > > > Thanks,
> >> > > >
> >> > > > Cyrus
> >> > > >
> >> > >
> >> > > Hi Cyrus,
> >> > >
> >> > > That sounds like a useful addition.
> >> > >
> >> > > Regards, Alex.
> >> > >
> >> >
> >>
> >
>


Jenkins build is back to normal : kafka-2.3-jdk8 #39

2019-06-04 Thread Apache Jenkins Server
See 




Re: contributor access

2019-06-04 Thread Jun Rao
Hi, Balazs,

Thanks for your interest. Just added you to the contributor list.

Jun

On Tue, Jun 4, 2019 at 5:56 PM Balazs Bence Sari 
wrote:

> Dear Kafka Team,
>
> please register me as a contributor to the Kafka project.
>
> My github id: benyoka
> My JIRA id: bsari
>
> Br
> Bence
> *Balázs Bence Sári* | Staff Software Engineer
> t. (3670) 340-3341
> cloudera.com 
>
> --
>


contributor access

2019-06-04 Thread Balazs Bence Sari
Dear Kafka Team,

please register me as a contributor to the Kafka project.

My github id: benyoka
My JIRA id: bsari

Br
Bence
*Balázs Bence Sári* | Staff Software Engineer
t. (3670) 340-3341
cloudera.com 

--


Contributor Mailing List

2019-06-04 Thread surya teja
Hi Team,
Could you add me to the contributor mailing list?


Thanks,
Surya


Re: [DISCUSS] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Cyrus Vafadari
Randall,

I've updated the KIP to include all of your recommendations!

Cyrus

On Tue, Jun 4, 2019 at 2:55 PM Cyrus Vafadari  wrote:

> Randall,
>
> I plan to update the public details section and the performance impact as
> you recommended.
>
> Regarding state-specific counts, I do agree this is a useful addition.
> Before I make the change, I'd like to agree that these state-specific
> counts should be in addition to the already-proposed total tasks count
> > (even though it might be redundant, it is robust against new/missed connector
> states, and is a useful metric in its own right), yes?
>
> Cyrus
>
> On Tue, Jun 4, 2019 at 12:24 PM Randall Hauch  wrote:
>
>> Thanks, Cyrus -- this will be quite useful. I do have a few
>> comments/requests.
>>
>> Can you please be more specific about the public details about the metric?
>> What is the MBean name on which the metric will appear? For example, the AK
>> documentation (https://kafka.apache.org/documentation/#connect_monitoring)
>> defines all of the metrics and where they will appear, as does
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework.
>>
>> Secondly, while a metric showing the total number of tasks is very useful,
>> might it be worth considering also adding metrics for the number of
>> running
>> tasks, the number of paused tasks, and the number of failed tasks for a
>> connector. It might require using the herder's `connectorStatus(String
>> connectorName)` method instead, but that appears to be just as effective
>> at
>> using the local snapshot of the status store cache.
>>
>> Thirdly, it might be useful for the KIP to address the potential
>> performance impact of computing these methods. Again, IIUC, the herder
>> methods that the proposal mentions use the status and config stores caches
>> only, so the impact should be negligible.
>>
>> Best regards,
>>
>> Randall
>>
>> On Sun, Jun 2, 2019 at 10:05 PM Ryanne Dolan 
>> wrote:
>>
>> > Cyrus, I agree this would be useful.
>> >
>> > Ryanne
>> >
>> > On Fri, May 31, 2019, 7:10 PM Oleksandr Diachenko <odiache...@apache.org> wrote:
>> >
>> > >
>> > >
>> > > On 2019/05/30 06:06:12, Cyrus Vafadari  wrote:
>> > > > Hello Dev,
>> > > >
>> > > > I'd like to start the discussion of KIP-475: New Metric to Measure Number
>> > > > of Tasks on a Connector.
>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
>> > > >
>> > > > The proposal is pretty straightforward -- to add a new metric to Connect to
>> > > > measure the number of tasks on a Connector. Currently, we support this on
>> > > > Worker level, so this KIP just adds another metric to support this
>> > > > per-connector.
>> > > >
>> > > > There is also a PR:
>> > > > https://github.com/apache/kafka/pull/6843
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Cyrus
>> > > >
>> > >
>> > > Hi Cyrus,
>> > >
>> > > That sounds like a useful addition.
>> > >
>> > > Regards, Alex.
>> > >
>> >
>>
>


Build failed in Jenkins: kafka-trunk-jdk8 #3701

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8311: better handle timeout exception on Stream thread (#6662)

--
[...truncated 2.50 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED


[jira] [Created] (KAFKA-8484) ProducerId reset can cause IllegalStateException

2019-06-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8484:
--

 Summary: ProducerId reset can cause IllegalStateException
 Key: KAFKA-8484
 URL: https://issues.apache.org/jira/browse/KAFKA-8484
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


If the producerId is reset while inflight requests are pending, we can get the
following uncaught error.
{code}
[2019-06-03 08:20:45,320] ERROR [Producer clientId=producer-1] Uncaught error in request completion: (org.apache.kafka.clients.NetworkClient)
java.lang.IllegalStateException: Sequence number for partition test_topic-13 is going to become negative : -965
    at org.apache.kafka.clients.producer.internals.TransactionManager.adjustSequencesDueToFailedBatch(TransactionManager.java:561)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:744)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:717)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:667)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:574)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:75)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:818)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:561)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:253)
    at java.lang.Thread.run(Thread.java:748)
{code}

The impact of this is that a failed batch will not be completed until the
delivery timeout is exceeded. We are missing a check, when we receive a
produce response, that the batch's producerId and epoch still match the
producer's current ones.
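
A self-contained sketch of that missing check, using stand-in record types
rather than Kafka's internal classes:
{code}
// Sketch only: models the guard described above with stand-in types.
public class ProducerEpochGuard {

    record ProducerIdAndEpoch(long producerId, short epoch) {}

    record InflightBatch(long producerId, short epoch, int baseSequence) {}

    // A produce response should only drive sequence adjustment if the batch
    // was sent under the producer's *current* id and epoch.
    static boolean responseStillValid(InflightBatch batch, ProducerIdAndEpoch current) {
        return batch.producerId() == current.producerId()
                && batch.epoch() == current.epoch();
    }

    public static void main(String[] args) {
        ProducerIdAndEpoch current = new ProducerIdAndEpoch(42L, (short) 0);
        // Sent under producerId=41, before the reset to producerId=42, so its
        // response must be discarded instead of rewinding sequence numbers.
        InflightBatch sentBeforeReset = new InflightBatch(41L, (short) 0, 100);
        System.out.println(responseStillValid(sentBeforeReset, current)); // false
    }
}
{code}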



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8483) Possible reordering of messages by producer after UNKNOWN_PRODUCER_ID error

2019-06-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8483:
--

 Summary: Possible reordering of messages by producer after 
UNKNOWN_PRODUCER_ID error
 Key: KAFKA-8483
 URL: https://issues.apache.org/jira/browse/KAFKA-8483
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The producer attempts to detect spurious UNKNOWN_PRODUCER_ID errors and handle 
them by reassigning sequence numbers to the inflight batches. The inflight 
batches are tracked in a PriorityQueue. The problem is that the reassignment of 
sequence numbers depends on the iteration order of PriorityQueue, which does 
not guarantee any ordering. So this can result in sequence numbers being 
assigned in the wrong order.
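
A runnable demonstration of the PriorityQueue behavior at issue:
{code}
import java.util.PriorityQueue;

// PriorityQueue's iterator visits the heap's internal array order, not
// priority order, so renumbering inflight batches by iterating the queue
// directly can assign sequence numbers out of order.
public class PriorityQueueIterationOrder {
    public static void main(String[] args) {
        PriorityQueue<Integer> inflightBaseSequences = new PriorityQueue<>();
        for (int seq : new int[] {5, 1, 4, 2, 3}) {
            inflightBaseSequences.add(seq);
        }

        // Iteration order is unspecified, e.g. 1 2 4 5 3.
        inflightBaseSequences.forEach(seq -> System.out.print(seq + " "));
        System.out.println();

        // Draining with poll() is what yields ascending (priority) order.
        while (!inflightBaseSequences.isEmpty()) {
            System.out.print(inflightBaseSequences.poll() + " ");
        }
    }
}
{code}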



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8482) alterReplicaLogDirs should be better documented

2019-06-04 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8482:
--

 Summary: alterReplicaLogDirs should be better documented
 Key: KAFKA-8482
 URL: https://issues.apache.org/jira/browse/KAFKA-8482
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Colin P. McCabe
 Fix For: 2.4.0


alterReplicaLogDirs should be better documented. In particular, it should
document what exceptions it throws in {{AdminClient.java}}.
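
For context, a minimal usage sketch of the call in question; the exception
types named in the comment are examples of what a caller may see, i.e. the
kind of detail the javadoc should spell out. Bootstrap server, topic,
partition, broker id, and target path are example values.
{code}
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartitionReplica;

public class AlterReplicaLogDirsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Move the replica of my-topic-0 on broker 1 to another log dir.
            TopicPartitionReplica replica = new TopicPartitionReplica("my-topic", 0, 1);
            try {
                admin.alterReplicaLogDirs(Map.of(replica, "/mnt/disk2/kafka-logs"))
                     .values().get(replica).get();
            } catch (ExecutionException e) {
                // The under-documented part: the future may fail with, e.g.,
                // LogDirNotFoundException or KafkaStorageException,
                // depending on broker-side state.
                System.err.println("alterReplicaLogDirs failed: " + e.getCause());
            }
        }
    }
}
{code}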



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8449) Restart task on reconfiguration under incremental cooperative rebalancing

2019-06-04 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-8449.

Resolution: Fixed

> Restart task on reconfiguration under incremental cooperative rebalancing
> -
>
> Key: KAFKA-8449
> URL: https://issues.apache.org/jira/browse/KAFKA-8449
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Blocker
> Fix For: 2.3.0
>
>
> Tasks that are already running and are not redistributed are not currently 
> restarted under incremental cooperative rebalancing when their configuration 
> changes. With eager rebalancing the restart was triggered and therefore 
> implied by rebalancing itself. But now existing tasks will not read the new 
> configuration unless restarted via the REST api. 
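
For reference, the manual restart mentioned above can be issued against the
Connect REST API's task-restart endpoint; a sketch using Java's built-in HTTP
client, with host, connector name, and task id as example values:
{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestartConnectTask {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // POST /connectors/{name}/tasks/{id}/restart
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors/my-connector/tasks/0/restart"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}
{code}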



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-460: Admin Leader Election RPC

2019-06-04 Thread Jose Armando Garcia Sancio
Hi all,

During the implementation of KIP-460, we discovered that we had to make
some minor changes to the design. I have updated the KIP wiki[1]. You can
see the difference here[2].

At a high-level we made the following changes:

   1. Added a top level ErrorCode to the response for errors that apply to
   all of the topic partitions. This is currently being used for cluster
   authorization errors.
   2. We renamed the DelayedOperation for the DelayedOperationPurgatory from
   ElectPreferredLeader to ElectLeader.


[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-460%3A+Admin+Leader+Election+RPC
.
[2]
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=113707931&selectedPageVersions=19&selectedPageVersions=18

On Tue, May 7, 2019 at 9:59 AM Jose Armando Garcia Sancio <
jsan...@confluent.io> wrote:

> Thanks Gwen for the clarification!
>
> On Mon, May 6, 2019 at 5:03 PM Gwen Shapira  wrote:
>
>> All committer votes are binding, so you have binding +1 from Colin, Jason
>> and myself - which is just the 3 you need for the KIP to be accepted.
>> Mickael added non-binding community support, which is great signal as
>> well.
>>
>> On Mon, May 6, 2019 at 4:39 PM Jose Armando Garcia Sancio <
>> jsan...@confluent.io> wrote:
>>
>> > I am closing the voting. KIP is accepted with:
>> >
>> > +1 (binding): Colin McCabe
>> > +1 (non-binding): Jason Gustafson, Gwen Shapira, Mickael Maison
>> >
>> > Thanks!
>> >
>> > On Fri, May 3, 2019 at 9:59 AM Mickael Maison > >
>> > wrote:
>> >
>> > > +1 (non binding)
>> > > Thanks for the KIP
>> > >
>> > > On Thu, May 2, 2019 at 11:02 PM Colin McCabe 
>> wrote:
>> > > >
>> > > > +1 (binding)
>> > > >
>> > > > thanks, Jose.
>> > > >
>> > > > best,
>> > > > Colin
>> > > >
>> > > > On Wed, May 1, 2019, at 14:44, Jose Armando Garcia Sancio wrote:
>> > > > > Hi all,
>> > > > >
>> > > > > I would like to start the voting for KIP-460:
>> > > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-460%3A+Admin+Leader+Election+RPC
>> > > > >
>> > > > > The thread discussion is here:
>> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg97226.html
>> > > > >
>> > > > > Thanks!
>> > > > > -Jose
>> > > > >
>> > >
>> >
>> >
>> > --
>> > -Jose
>> >
>>
>>
>> --
>> *Gwen Shapira*
>> Product Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter  | blog
>> 
>>
>
>
> --
> -Jose
>


-- 
-Jose


Re: [DISCUSS] KIP-460: Admin Leader Election RPC

2019-06-04 Thread Jose Armando Garcia Sancio
Hi all,

During the implementation of KIP-460, we discovered that we had to make
some minor changes to the design. I have updated the KIP wiki[1]. You can
see the difference here[2].

At a high-level we made the following changes:

   1. Added a top level ErrorCode to the response for errors that apply to
   all of the topic partitions. This is currently being used for cluster
   authorization errors.
   2. We renamed the DelayedOperation for the DelayedOperationPurgatory from
   ElectPreferredLeader to ElectLeader.


[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-460%3A+Admin+Leader+Election+RPC
.
[2]
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=113707931&selectedPageVersions=19&selectedPageVersions=18
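
As a usage illustration of the RPC discussed here, a minimal admin-client
sketch. The bootstrap server, topic, and partition are example values, and
the per-partition map of optional errors reflects the result shape described
in the KIP:

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.ElectionType;
    import org.apache.kafka.common.TopicPartition;

    public class ElectLeadersExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Trigger a preferred-leader election for one partition;
                // passing null instead of a set requests all partitions.
                admin.electLeaders(ElectionType.PREFERRED,
                                   Set.of(new TopicPartition("my-topic", 0)))
                     .partitions()
                     .get()
                     .forEach((tp, error) -> System.out.println(
                             tp + " -> " + error.map(Throwable::getMessage).orElse("elected")));
            }
        }
    }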


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-04 Thread Sophie Blee-Goldman
Hey Bruno,

I tend to agree with Guozhang on this matter although you do bring up some
good points that should be addressed. Regarding 1) I think it is probably
fairly uncommon in practice for users to leverage the individual store
names passed to RocksDBConfigSetter#setConfig in order to specify options
on a per-store basis. When this actually is used, it does seem likely that
users would be doing something like pattern matching the physical store
name prefix in order to apply configs to all physical stores (segments)
within a single logical RocksDBStore. As you mention this is something of a
hassle already as physical stores are created/deleted, while most likely
all anyone cares about is the prefix corresponding to the logical store. It
seems like rather than persist this hassle to the metric layer, we should
consider refactoring RocksDBConfigSetter to apply to a logical store rather
than a specific physical segment. Or maybe providing some kind of tooling
to at least make this easier on users, but that's definitely outside the
scope of this KIP.
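
For concreteness, the prefix-matching pattern described above might look like
the following sketch; the store name, the segment-name format shown in the
comment, and the chosen RocksDB option are illustrative only:

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.Options;

    public class PrefixMatchingConfigSetter implements RocksDBConfigSetter {
        @Override
        public void setConfig(String storeName, Options options, Map<String, Object> configs) {
            // Segments of a logical windowed store share the logical store's
            // name as a prefix, e.g. "order-counts.1559692800000", so one
            // prefix rule covers every physical segment.
            if (storeName.startsWith("order-counts")) {
                options.setMaxWriteBufferNumber(4);
            }
        }
    }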

Regarding 2) can you clarify your point about accessing stores uniformly?
While I agree there will definitely be variance in the access pattern of
different segments, I think it's unlikely that it will vary in any kind of
predictable or deterministic way, hence it is not that useful to know in
hindsight the difference reflected by the metrics.

Cheers,
Sophie

On Tue, Jun 4, 2019 at 2:09 PM Bruno Cadonna  wrote:

> Hi Guozhang,
>
> After some thoughts, I tend to be in favour of the option with metrics
> for each physical RocksDB instance for the following reasons:
>
> 1) A user already needs to be aware of segmented state stores when
> providing a custom RocksDBConfigSetter. In RocksDBConfigSetter one can
> specify settings for a store depending on the name of the store. Since
> segments (i.e. state store) in a segmented state store have names that
> share a prefix but have suffixes that are created at runtime, increase
> with time and are theoretically unbounded, a user needs to take
> account of the segments to provide the settings for all (i.e. matching
> the common prefix) or some (i.e. matching the common prefix and for
> example suffixes according to a specific pattern) of the segments of a
> specific segmented state store.
> 2) Currently settings for RocksDB can only be specified by a user per
> physical instance and not per logical instance. Deriving good settings
> for physical instances from metrics for a logical instance can be hard
> if the physical instances are not accessed uniformly. In segmented
> state stores segments are not accessed uniformly.
> 3) Simpler to implement and to get things done.
>
> Any thoughts on this from anybody?
>
> Best,
> Bruno
>
> On Thu, May 30, 2019 at 8:33 PM Guozhang Wang  wrote:
> >
> > Hi Bruno:
> >
> > Regarding 2) I think either way has some shortcomings: exposing the metrics
> > per rocksDB instance for window / session stores exposes some
> > implementation internals (that we use segmented stores) and forces users to
> > be aware of them. E.g. what if we want to silently change the internal
> > implementation by walking away from the segmented approach? On the other
> > hand, coalescing multiple rocksDB instances' metrics into a single one per
> > each logical store also has some concerns as I mentioned above. What I'm
> > thinking is actually that, if we can customize the aggregation logic to
> > still have one set of metrics per each logical store which may be composed
> > of multiple rocksDB ones -- e.g. for `bytes-written-rate` we sum them
> > across rocksDBs, while for `memtable-hit-rate` we do weighted average?
> >
> > Regarding logging levels, I think having DEBUG is fine, but also that means
> > without manually turning it on users would not get it. Maybe we should
> > consider adding some dynamic setting mechanisms in the future to allow
> > users to turn it on / off during run-time.
> >
> >
> > Guozhang
> >
> >
> >
> > On Tue, May 28, 2019 at 6:23 AM Bruno Cadonna  wrote:
> >
> > > Hi,
> > >
> > > Thank you for your comments.
> > >
> > > @Bill:
> > >
> > > 1. It is like Guozhang wrote:
> > > - rocksdb-state-id is for key-value stores
> > > - rocksdb-session-state-id is for session stores
> > > - rocksdb-window-state-id is for window stores
> > > These tags are defined in the corresponding store builders and I think
> > > it is a good idea to re-use them.
> > >
> > > 2. I could not find any exposed ticker or histogram to get the total
> > > and average number of compactions, although RocksDB dumps the number
> > > of compactions between levels in its log files. There is the
> > > NUM_SUBCOMPACTIONS_SCHEDULED histogram that gives you statistics about
> > > the number of subcompactions actually scheduled during a compaction,
> > > but that is not what you are looking for. If they expose the
> > > number of compactions in the future, we can still add the metrics you
> > > propose. I guess, 

Re: [DISCUSS] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Cyrus Vafadari
Randall,

I plan to update the public details section and the performance impact as
you recommended.

Regarding state-specific counts, I do agree this is a useful addition.
Before I make the change, I'd like to agree that these state-specific
counts should be in addition to the already-proposed total tasks count
(even though it might be redundant, it is robust against new/missed connector
states, and is a useful metric in its own right), yes?

Cyrus

On Tue, Jun 4, 2019 at 12:24 PM Randall Hauch  wrote:

> Thanks, Cyrus -- this will be quite useful. I do have a few
> comments/requests.
>
> Can you please be more specific about the public details about the metric?
> What is the MBean name on which the metric will appear? For example, the AK
> documentation (https://kafka.apache.org/documentation/#connect_monitoring)
> defines all of the metrics and where they will appear, as does
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework.
>
> Secondly, while a metric showing the total number of tasks is very useful,
> might it be worth considering also adding metrics for the number of running
> tasks, the number of paused tasks, and the number of failed tasks for a
> connector. It might require using the herder's `connectorStatus(String
> connectorName)` method instead, but that appears to be just as effective at
> using the local snapshot of the status store cache.
>
> Thirdly, it might be useful for the KIP to address the potential
> performance impact of computing these methods. Again, IIUC, the herder
> methods that the proposal mentions use the status and config stores caches
> only, so the impact should be negligible.
>
> Best regards,
>
> Randall
>
> On Sun, Jun 2, 2019 at 10:05 PM Ryanne Dolan 
> wrote:
>
> > Cyrus, I agree this would be useful.
> >
> > Ryanne
> >
> > On Fri, May 31, 2019, 7:10 PM Oleksandr Diachenko <odiache...@apache.org>
> > wrote:
> >
> > >
> > >
> > > On 2019/05/30 06:06:12, Cyrus Vafadari  wrote:
> > > > Hello Dev,
> > > >
> > > > I'd like to start the discussion of KIP-475: New Metric to Measure Number
> > > > of Tasks on a Connector.
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
> > > >
> > > > The proposal is pretty straightforward -- to add a new metric to Connect to
> > > > measure the number of tasks on a Connector. Currently, we support this on
> > > > Worker level, so this KIP just adds another metric to support this
> > > > per-connector.
> > > >
> > > > There is also a PR:
> > > > https://github.com/apache/kafka/pull/6843
> > > >
> > > > Thanks,
> > > >
> > > > Cyrus
> > > >
> > >
> > > Hi Cyrus,
> > >
> > > That sounds like a useful addition.
> > >
> > > Regards, Alex.
> > >
> >
>


Re: [VOTE] KIP-476: Add Java AdminClient interface

2019-06-04 Thread Ryanne Dolan
> The existing `AdminClient` will be marked as deprecated.

What's the reasoning behind this? I'm fine with the other changes, but
would prefer to keep the existing public API intact if it's not hurting
anything.

Also, what will AdminClient.create() return? Would it be a breaking change?

Ryanne

On Tue, Jun 4, 2019, 11:17 AM Andy Coates  wrote:

> Hi folks
>
> As there's been no chatter on this KIP I'm assuming it's non-contentious,
> (or just boring), hence I'd like to call a vote for KIP-476:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface
>
> Thanks,
>
> Andy
>
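
For readers skimming the thread, a self-contained model (not the real
classes) of the kind of refactoring under vote: an interface extracted from
the abstract class, with the old entry point kept as a deprecated bridge.
How create() behaves in the real change was exactly what was being asked
above.

    import java.util.Properties;

    public class AdminInterfaceSketch {

        // The extracted interface that callers and mocks can program against.
        interface Admin extends AutoCloseable {
            void createTopic(String name);   // stand-in for the real operations
            @Override void close();
        }

        // The old abstract class, kept as a deprecated bridge.
        @Deprecated
        abstract static class AdminClient implements Admin {
            static AdminClient create(Properties props) {
                return new KafkaAdminClient(props);
            }
        }

        static final class KafkaAdminClient extends AdminClient {
            KafkaAdminClient(Properties props) {}
            @Override public void createTopic(String name) { /* no-op in sketch */ }
            @Override public void close() {}
        }

        public static void main(String[] args) {
            // Existing callers keep compiling; new code can use Admin.
            Admin admin = AdminClient.create(new Properties());
            admin.createTopic("demo");
            admin.close();
        }
    }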


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-04 Thread Bruno Cadonna
Hi Guozhang,

After some thoughts, I tend to be in favour of the option with metrics
for each physical RocksDB instance for the following reasons:

1) A user already needs to be aware of segmented state stores when
providing a custom RocksDBConfigSetter. In RocksDBConfigSetter one can
specify settings for a store depending on the name of the store. Since
segments (i.e. state store) in a segmented state store have names that
share a prefix but have suffixes that are created at runtime, increase
with time and are theoretically unbounded, a user needs to take
account of the segments to provide the settings for all (i.e. matching
the common prefix) or some (i.e. matching the common prefix and for
example suffixes according to a specific pattern) of the segments of a
specific segmented state store.
2) Currently settings for RocksDB can only be specified by a user per
physical instance and not per logical instance. Deriving good settings
for physical instances from metrics for a logical instance can be hard
if the physical instances are not accessed uniformly. In segmented
state stores segments are not accessed uniformly.
3) Simpler to implement and to get things done.

Any thoughts on this from anybody?

Best,
Bruno

On Thu, May 30, 2019 at 8:33 PM Guozhang Wang  wrote:
>
> Hi Bruno:
>
> Regarding 2) I think either way has some shortcomings: exposing the metrics
> per rocksDB instance for window / session stores exposes some
> implementation internals (that we use segmented stores) and forces users to
> be aware of them. E.g. what if we want to silently change the internal
> implementation by walking away from the segmented approach? On the other
> hand, coalescing multiple rocksDB instances' metrics into a single one per
> each logical store also has some concerns as I mentioned above. What I'm
> thinking is actually that, if we can customize the aggregation logic to
> still have one set of metrics per each logical store which may be composed
> of multiple rocksDB ones -- e.g. for `bytes-written-rate` we sum them
> across rocksDBs, while for `memtable-hit-rate` we do weighted average?
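
To make the suggested per-metric aggregation concrete, a small self-contained
sketch; the types, and the use of lookup counts as weights, are illustrative
rather than Streams internals:

    import java.util.List;

    public class LogicalStoreMetricAggregation {

        // Illustrative per-segment snapshot; not a Streams type.
        record SegmentMetrics(double bytesWrittenRate, double memtableHitRate, long memtableLookups) {}

        // Rate-style metrics: sum over the logical store's physical segments.
        static double totalBytesWrittenRate(List<SegmentMetrics> segments) {
            return segments.stream().mapToDouble(SegmentMetrics::bytesWrittenRate).sum();
        }

        // Ratio-style metrics: weighted average, weighted by lookup counts.
        static double weightedMemtableHitRate(List<SegmentMetrics> segments) {
            long totalLookups = segments.stream().mapToLong(SegmentMetrics::memtableLookups).sum();
            if (totalLookups == 0) {
                return 0.0;
            }
            return segments.stream()
                           .mapToDouble(s -> s.memtableHitRate() * s.memtableLookups())
                           .sum() / totalLookups;
        }

        public static void main(String[] args) {
            List<SegmentMetrics> segments = List.of(
                    new SegmentMetrics(100.0, 0.9, 1_000),
                    new SegmentMetrics(300.0, 0.5, 9_000));
            System.out.println(totalBytesWrittenRate(segments));   // 400.0
            System.out.println(weightedMemtableHitRate(segments)); // 0.54
        }
    }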
>
> Regarding logging levels, I think having DEBUG is fine, but also that means
> without manually turning it on users would not get it. Maybe we should
> consider adding some dynamic setting mechanisms in the future to allow
> users to turn it on / off during run-time.
>
>
> Guozhang
>
>
>
> On Tue, May 28, 2019 at 6:23 AM Bruno Cadonna  wrote:
>
> > Hi,
> >
> > Thank you for your comments.
> >
> > @Bill:
> >
> > 1. It is like Guozhang wrote:
> > - rocksdb-state-id is for key-value stores
> > - rocksdb-session-state-id is for session stores
> > - rocksdb-window-state-id is for window stores
> > These tags are defined in the corresponding store builders and I think
> > it is a good idea to re-use them.
> >
> > 2. I could not find any exposed ticker or histogram to get the total
> > and average number of compactions, although RocksDB dumps the number
> > of compactions between levels in its log files. There is the
> > NUM_SUBCOMPACTIONS_SCHEDULED histogram that gives you statistics about
> > the number of subcompactions actually scheduled during a compaction,
> > but that is not what you are looking for. If they expose the
> > number of compactions in the future, we can still add the metrics you
> > propose. I guess, the metric in this KIP that would indirectly be
> > influenced by the number of L0 files would be write-stall-duration. If
> > there are too many compactions this duration should increase. However,
> > this metric is also influenced by memtable flushes.
> >
> > @John:
> >
> > I think it is a good idea to prefix the flush-related metrics with
> > memtable to avoid ambiguity. I changed the KIP accordingly.
> >
> > @Dongjin:
> >
> > For the tag and compaction-related comments, please see my answers to Bill.
> >
> > I cannot follow your second paragraph. Are you saying that a tuning
> > guide for RocksDB within Streams based on the metrics in this KIP is
> > out of scope? I also think that it doesn't need to be included in this
> > KIP, but it is worth to work on it afterwards.
> >
> > @Guozhang:
> >
> > 1. Thank you for the explanation. I missed that. I modified the KIP
> > accordingly.
> >
> > 2. No, my plan is to record and expose the set of metrics for each
> > RocksDB store separately. Each set of metrics can then be
> > distinguished by its store ID. Do I miss something here?
> >
> > 3. I agree with you and Sophie about user education and that we should
> > work on it after this KIP.
> >
> > 4. I agree also on the user API. However, I would like to open a
> > separate KIP for it because I still need a bit of thinking to get it.
> > I also think it is a good idea to do one step after the other to get
> > at least the built-in RocksDB metrics into the next release.
> > Do you think I chose too many metrics as built-in metrics for this
> > KIP? What do others think?
> >
> > @Sophie:
> >
> > I tend to DEBUG level, 

[jira] [Resolved] (KAFKA-8385) Fix leader election RPC for all partition so that only partition that had elections are returned

2019-06-04 Thread Jose Armando Garcia Sancio (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-8385.
---
Resolution: Fixed

We ended up implementing this on the original PR: 
https://github.com/apache/kafka/pull/6686

> Fix leader election RPC for all partition so that only partition that had 
> elections are returned
> 
>
> Key: KAFKA-8385
> URL: https://issues.apache.org/jira/browse/KAFKA-8385
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Critical
>
> Currently the elect leaders RPC returns all the partitions when an election
> across all of the partitions is requested, even if some of the partitions already
> have a leader (for unclean) or a preferred leader (for preferred).
> Change this behavior so that only partitions that changed leader are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8481) Clients may fetch incomplete set of topic partitions just after topic is created

2019-06-04 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-8481:
---

 Summary: Clients may fetch incomplete set of topic partitions just 
after topic is created
 Key: KAFKA-8481
 URL: https://issues.apache.org/jira/browse/KAFKA-8481
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1
Reporter: Anna Povzner


KafkaConsumer#partitionsFor() or AdminClient#describeTopics() may return an
incomplete set of partitions for the given topic if the topic was just created.

Cause: When a topic gets created, in most cases, the controller sends partitions of
this topic via several UpdateMetadataRequests (vs. one UpdateMetadataRequest
with all partitions). The first UpdateMetadataRequest contains partitions for which
the broker hosts replicas, followed by one or more UpdateMetadataRequests for the
remaining partitions. This means that if a broker gets topic metadata requests
between the first and last UpdateMetadataRequest, the response will contain only a
subset of the topic's partitions.

Proposed fix: In KafkaController#processTopicChange(), before calling
onNewPartitionCreation(), send an UpdateMetadataRequest with partitions of new
topics (addedPartitionReplicaAssignment) to all live brokers.
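
Until such a fix lands, one client-side mitigation is to poll until the
expected partition count is visible; a sketch (bootstrap server, topic name,
and expected count are example values):
{code}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class WaitForAllPartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        int expectedPartitions = 12; // known from the create-topic request
        try (AdminClient admin = AdminClient.create(props)) {
            while (true) {
                Map<String, TopicDescription> desc =
                        admin.describeTopics(List.of("my-topic")).all().get();
                // Metadata may briefly cover only a subset of partitions.
                if (desc.get("my-topic").partitions().size() >= expectedPartitions) {
                    break;
                }
                TimeUnit.MILLISECONDS.sleep(100);
            }
        }
    }
}
{code}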



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8480) Clients may fetch incomplete set of topic partitions during cluster startup

2019-06-04 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-8480:
---

 Summary: Clients may fetch incomplete set of topic partitions 
during cluster startup
 Key: KAFKA-8480
 URL: https://issues.apache.org/jira/browse/KAFKA-8480
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1
Reporter: Anna Povzner
Assignee: Anna Povzner


KafkaConsumer#partitionsFor() or AdminClient#describeTopics() may not return
all partitions for a given topic when the cluster is starting up (after the
cluster was down).

The cause is the controller, on becoming controller, sending an
UpdateMetadataRequest for all partitions with at least one online replica, and
then a separate UpdateMetadataRequest for all partitions with at least one
offline replica. If a client sends a metadata request in between the broker
processing those two update metadata requests, the client will get an
incomplete set of partitions.

Proposed fix: the controller should send one UpdateMetadataRequest (containing
all partitions) in ReplicaStateMachine#startup().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-04 Thread Randall Hauch
Thanks, Cyrus -- this will be quite useful. I do have a few
comments/requests.

Can you please be more specific about the public details about the metric?
What is the MBean name on which the metric will appear? For example, the AK
documentation (https://kafka.apache.org/documentation/#connect_monitoring)
defines all of the metrics and where they will appear, as does
https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework.

Secondly, while a metric showing the total number of tasks is very useful,
might it be worth considering also adding metrics for the number of running
tasks, the number of paused tasks, and the number of failed tasks for a
connector. It might require using the herder's `connectorStatus(String
connectorName)` method instead, but that appears to be just as effective at
using the local snapshot of the status store cache.

Thirdly, it might be useful for the KIP to address the potential
performance impact of computing these methods. Again, IIUC, the herder
methods that the proposal mentions use the status and config stores caches
only, so the impact should be negligible.

Best regards,

Randall

On Sun, Jun 2, 2019 at 10:05 PM Ryanne Dolan  wrote:

> Cyrus, I agree this would be useful.
>
> Ryanne
>
> On Fri, May 31, 2019, 7:10 PM Oleksandr Diachenko 
> wrote:
>
> >
> >
> > On 2019/05/30 06:06:12, Cyrus Vafadari  wrote:
> > > Hello Dev,
> > >
> > > I'd like to start the discussion of KIP-475: New Metric to Measure Number
> > > of Tasks on a Connector.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
> > >
> > > The proposal is pretty straightforward -- to add a new metric to Connect to
> > > measure the number of tasks on a Connector. Currently, we support this on
> > > Worker level, so this KIP just adds another metric to support this
> > > per-connector.
> > >
> > > There is also a PR:
> > > https://github.com/apache/kafka/pull/6843
> > >
> > > Thanks,
> > >
> > > Cyrus
> > >
> >
> > Hi Cyrus,
> >
> > That sounds like a useful addition.
> >
> > Regards, Alex.
> >
>
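
To make the MBean question above concrete, here is a minimal sketch of
reading such a per-connector metric over JMX from within a worker JVM. The
group follows Connect's existing connector-metrics convention from KIP-196;
the attribute name connector-total-task-count is assumed from the proposal
and could differ in the final implementation:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ReadConnectorTaskCount {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Tag value quoting can vary with the connector name's characters.
            ObjectName name = new ObjectName(
                    "kafka.connect:type=connector-metrics,connector=my-connector");
            Object taskCount = server.getAttribute(name, "connector-total-task-count");
            System.out.println("tasks on my-connector: " + taskCount);
        }
    }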


Re: [DISCUSS] KIP-333 Consider a faster form of rebalancing

2019-06-04 Thread Boyang Chen
Hey Matthias,


I took a quick look at this proposal. My impression is that the proposal lacks
a concrete use case to support it. For example, if the goal is to immediately
resume processing the latest data being produced, then we could just reset the
application to point to the latest offset. If the user claims they need the
state built on data after the last committed offset, they should bootstrap and
wait for reprocessing from that offset. In either case, there is no need to
have a separate consumer working in the background, because the main consumer
thread cannot start until we have rebuilt the state.


Boyang


From: Matthias J. Sax 
Sent: Tuesday, June 4, 2019 1:29 PM
To: dev
Subject: Fwd: [DISCUSS] KIP-333 Consider a faster form of rebalancing

Just cycling back to this older KIP discussion.

I still have some concerns about the proposal, and there was no activity
for a long time. I am wondering if there is still interest in this KIP,
or if we should discard it?

-Matthias


 Forwarded Message 
Subject: Re: [DISCUSS] KIP-333 Consider a faster form of rebalancing
Date: Sun, 30 Sep 2018 12:01:14 -0700
From: Matthias J. Sax 
Organization: Confluent Inc
To: dev@kafka.apache.org

What is the status of this KIP?

I was just catching up and I agree with Becket that it seems a very
special use case, that might not be generic enough to be part of Kafka
itself. Also, for regular rebalance, as Becket pointed out, catching up
should not take very long. Only for longer offline times, this might be
an issue -- however, this this case, either the whole consumer group is
offline, or you timeouts (max.poll.interval.ms and session.timeout.ms)
are set too high.

I am also wondering, how consecutive failures would be handled? Assume
you have 2 consumer, the "regular" consumer that #seekToEnd() and the
"catch-up" consumer.

 - What happens if any (or both) consumers die?
 - How to do you track the offsets of both consumers?
 - How can this be integrated with EOS?

To me, it seems that you might want to implement this as a custom
solution via re-balance callbacks that you can register on a consumer.


-Matthias

On 8/7/18 8:05 PM, Becket Qin wrote:
> Hi Richard,
>
> Sorry for the late response. As discussed in the other offline thread, I am
> still not sure if this use case is common enough to have a built-in
> rebalance policy.
>
> I think usually the time to detect the consumer failure and rebalance would
> be longer than the catching up time as the catch up usually happens in
> parallel by all the other consumers in a group. If there is a
> bottleneck of consuming a single hot partition, this problem will exist
> regardless of rebalance. In any case, the approach of having an ad-hoc
> hidden consumer seems a little hacky.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Wed, Jul 18, 2018 at 2:39 PM, Richard Yu 
> wrote:
>
>>  Hi Becket,
>> I made some changes and clarified the motivation for this KIP. :) It should
>> be easier to understand now since I included a diagram.
>> Thanks, Richard Yu
>> On Tuesday, July 17, 2018, 4:38:11 PM GMT+8, Richard Yu
>>  wrote:
>>
>>   Hi Becket,
>> Thanks for reviewing this KIP. :)
>> I probably did not explicitly state what we were trying to avoid by
>> introducing this mode. As mentioned in the KIP, there is a offset lag which
>> could result after a crash. Our main goal is to avoid this lag (i.e. the
>> latency in terms of time that results from the crash, not to reduce the
>> number of records reprocessed).
>> I could provide a couple of diagrams with what I am envisioning because
>> some points in my KIP might otherwise be hard to grasp (I will also include
>> some diagrams to give you a better idea of an use case). As for your
>> questions, I could provide a couple of answers:
>> 1. Yes, the two consumers will in fact be processing in parallel. We do
>> this because we want to accelerate the processing speed of the records to
>> make up for the latency caused by the crash.
>> 2. After the recovery point, records will not be processed twice. Let me
>> describe the scenario I was envisioning: we would let the consumer that
>> crashed seek to the end of the log using KafkaConsumer#seekToEnd.
>> Meanwhile, a secondary consumer will start processing from the latest
>> checkpointed offset and continue until it has hit the place where the
>> first consumer that crashed began processing after seekToEnd was first
>> called. Since the consumer that crashed skipped from the recovery point to
>> the end of the log, the intermediate offsets will be processed only by the
>> secondary consumer. So it is important to note that the offset ranges which
>> the two threads process will not overlap. (This is important as it prevents
>> offsets from being processed more than once)
>>
>> 3. As for the committed offsets, the possibility of rewinding is not
>> likely. If my understanding is correct, you are probably worried that after
>> the crash, 
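
For reference, a rough sketch of the two-consumer scheme from points 1-2
above, built only from public consumer APIs; the topic, group, and the
hard-coded committed offset are example values:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class CatchUpConsumerSketch {
        public static void main(String[] args) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "catch-up-demo");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

            try (KafkaConsumer<byte[], byte[]> main = new KafkaConsumer<>(props);
                 KafkaConsumer<byte[], byte[]> catchUp = new KafkaConsumer<>(props)) {

                // The recovering consumer jumps to the log end and notes where
                // it will resume; everything before this boundary is catch-up.
                main.assign(List.of(tp));
                main.seekToEnd(List.of(tp));
                long boundary = main.position(tp);

                // The catch-up consumer replays from the last committed offset
                // up to, but never past, the boundary, so the ranges don't overlap.
                long lastCommitted = 0L; // stand-in; read the committed offset in practice
                catchUp.assign(List.of(tp));
                catchUp.seek(tp, lastCommitted);
                while (catchUp.position(tp) < boundary) {
                    ConsumerRecords<byte[], byte[]> records = catchUp.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<byte[], byte[]> r : records.records(tp)) {
                        if (r.offset() >= boundary) break;
                        // reprocess record r
                    }
                }
                // The main consumer polls from 'boundary' onward in parallel.
            }
        }
    }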

Re: [DISCUSSION] KIP-418: A method-chaining way to branch KStream

2019-06-04 Thread John Roesler
Thanks for the idea, Matthias, it does seem like this would satisfy
everyone. Returning the map from the terminal operations also solves
the problem of merging/joining the branched streams, if we want to add
support for the compliment later on.

Under your suggestion, it seems that the name is required. Otherwise,
we wouldn't have keys for the map to return. I think this is actually
not too bad, since experience has taught us that, although names for
operations are not required to define stream processing logic, it does
significantly improve the operational experience when you can map the
topology, logs, metrics, etc. back to the source code. Since you
wouldn't (have to) reference the name to chain extra processing onto
the branch (thanks to the second argument), you can avoid the
"unchecked name" problem that Ivan pointed out.

In the current implementation of Branch, you can name the branch
operator itself, and then all the branches get index-suffixed names
built from the branch operator name. I guess under this proposal, we
could naturally append the branch name to the branching operator name,
like this:

   stream.split(Named.withName("mysplit")) //creates node "mysplit"
  .branch(..., ..., "abranch") // creates node "mysplit-abranch"
  .defaultBranch(...) // creates node "mysplit-default"

It does make me wonder about the DSL syntax itself, though.

We don't have a defined grammar, so there's plenty of room to debate
the "best" syntax in the context of each operation, but in general,
the KStream DSL operators follow this pattern:

operator(function, config_object?) OR operator(config_object)

where config_object is often just Named in the "function" variant.
Even when the config_object isn't a Named, but some other config
class, that config class _always_ implements NamedOperation.
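For example, sketching the pattern against the Named overloads (exact
signatures here are from memory and may differ):

   stream.filter(predicate, Named.as("my-filter")); // operator(function, config_object)
   stream.groupByKey(Grouped.as("my-group"));       // operator(config_object); Grouped implements NamedOperation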

Here, we're introducing a totally different pattern:

  operator(function, function, string)

where the string is the name.
My first question is whether the name should instead be specified with
the NamedOperation interface.

My second question is whether we should just roll all these arguments
up into a config object like:

   KBranchedStream#branch(BranchConfig)

   interface BranchConfig extends NamedOperation {
       withPredicate(...);
       withChain(...);
       withName(...);
   }

Although I guess we'd like to call BranchConfig something more like
"Branched", even if I don't particularly like that pattern.

This makes the source code a little noisier, but it also makes us more
future-proof, as we can deal with a wide range of alternatives purely
in the config interface, and never have to deal with adding overloads
to the KBranchedStream if/when we decide we want the name to be
optional, or the KStream->KStream to be optional.

WDYT?

Thanks,
-John

On Fri, May 24, 2019 at 5:25 PM Michael Drogalis
 wrote:
>
> Matthias: I think that's pretty reasonable from my point of view. Good
> suggestion.
>
> On Thu, May 23, 2019 at 9:50 PM Matthias J. Sax 
> wrote:
>
> > Interesting discussion.
> >
> > I am wondering if we cannot unify the advantages of both approaches:
> >
> >
> >
> > KStream#split() -> KBranchedStream
> >
> > // branch is not easily accessible in current scope
> > KBranchedStream#branch(Predicate, Consumer)
> >   -> KBranchedStream
> >
> > // assign a name to the branch and
> > // return the sub-stream to the current scope later
> > //
> > // can be as simple as `#branch(p, s->s, "name")`
> > // or as complex as `#branch(p, s->s.filter(...), "name")`
> > KBranchedStream#branch(Predicate, Function, String)
> >   -> KBranchedStream
> >
> > // default branch is not easily accessible
> > // return map of all named sub-streams into current scope
> > KBranchedStream#defaultBranch(Consumer)
> >   -> Map
> >
> > // assign a custom name to the default branch and
> > // return map of all named sub-streams into current scope
> > KBranchedStream#defaultBranch(Function, String)
> >   -> Map
> >
> > // assign a default name to the default branch and
> > // return map of all named sub-streams into current scope
> > KBranchedStream#defaultBranch(Function)
> >   -> Map
> >
> > // return map of all named sub-streams into current scope
> > KBranchedStream#noDefaultBranch()
> >   -> Map
> >
> >
> >
> > Hence, for each sub-stream, the user can choose whether to add a name and
> > return the branch "result" to the calling scope. The implementation can
> > also check at runtime that all returned names are unique. The returned
> > Map can be empty, and using the Map is optional.
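> >
> > For example (predicates and topic names are made up, just to illustrate):
> >
> > Map<String, KStream<K, V>> branches =
> >     stream.split()
> >           .branch(isError, s -> s.mapValues(v -> enrich(v)), "errors")
> >           .defaultBranch(s -> s, "rest");
> >
> > branches.get("errors").to("error-topic");
> > branches.get("rest").to("main-topic");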
> >
> > To me, it seems like a good way to get the best of both worlds.
> >
> > Thoughts?
> >
> >
> >
> > -Matthias
> >
> >
> >
> >
> > On 5/6/19 5:15 PM, John Roesler wrote:
> > > Ivan,
> > >
> > > That's a very good point about the "start" operator in the dynamic case.
> > > I had no problem with "split()"; I was just questioning the necessity.
> > > Since you've provided a proof of necessity, I'm in favor of the
> > > "split()" start operator. Thanks!
> > >
> > > Separately, 

Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-04 Thread Rajini Sivaram
Thanks for managing the release, Vahid!

Regards,

Rajini

On Tue, Jun 4, 2019 at 4:45 PM Viktor Somogyi-Vass 
wrote:

> Thanks Vahid!
>
> On Tue, Jun 4, 2019 at 5:20 PM Colin McCabe  wrote:
>
> > Thanks, Vahid.
> >
> > best,
> > Colin
> >
> > On Mon, Jun 3, 2019, at 07:23, Vahid Hashemian wrote:
> > > The Apache Kafka community is pleased to announce the release for
> > > Apache Kafka 2.2.1
> > >
> > > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > > release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> > >
> > > You can download the source and binary release from:
> > > https://kafka.apache.org/downloads#2.2.1
> > >
> > > ---
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > > ** The Producer API allows an application to publish a stream of
> > > records to one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> > > output stream to one or more output topics, effectively transforming
> > > the input streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> > > capture every change to a table.
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> > > application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> > > to the streams of data.
> > >
> > > Apache Kafka is in use at large and small companies worldwide,
> > > including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> > > Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> > > Zalando, among others.
> > >
> > > A big thank you to the following 30 contributors to this release!
> > >
> > > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil
> > > Shah, Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John
> > > Roesler, Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh
> > > Nandakumar, Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas
> > > Parker, pkleindl, Rajini Sivaram, Randall Hauch, Sebastián Ortega,
> > > Vahid Hashemian, Victoria Bialas, Yaroslav Klymko, Zhanxiang (Patrick)
> > > Huang
> > >
> > > We welcome your help and feedback. For more information on how to
> > > report problems, and to get involved, visit the project website at
> > > https://kafka.apache.org/
> > >
> > > Thank you!
> > >
> > > Regards,
> > > --Vahid Hashemian
> >
>


[jira] [Created] (KAFKA-8479) kafka-topics does not allow client config settings using "--config"

2019-06-04 Thread Yeva Byzek (JIRA)
Yeva Byzek created KAFKA-8479:
-

 Summary: kafka-topics does not allow client config settings using 
"--config" 
 Key: KAFKA-8479
 URL: https://issues.apache.org/jira/browse/KAFKA-8479
 Project: Kafka
  Issue Type: Bug
Reporter: Yeva Byzek


The {{kafka-topics}} argument {{--config}} is used for setting topic config
parameters only. It cannot be used for setting client config parameters;
attempting to do so results in an error:


{noformat}
Error while executing topic command : Unknown topic config name: sasl.mechanism
[2019-06-04 12:37:05,746] ERROR 
org.apache.kafka.common.errors.InvalidConfigurationException: Unknown topic 
config name: sasl.mechanism
(kafka.admin.TopicCommand$){noformat}

Workaround: use the argument {{--command-config}} to pass in those client 
config parameters.
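
For example (hypothetical broker address; the client settings live in a
properties file):

{noformat}
# client.properties contains the client settings, e.g.:
#   security.protocol=SASL_SSL
#   sasl.mechanism=PLAIN
kafka-topics --bootstrap-server broker:9093 --create --topic test \
  --partitions 1 --replication-factor 1 \
  --config retention.ms=60 \
  --command-config client.properties
{noformat}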



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] KIP-476: Add Java AdminClient interface

2019-06-04 Thread Andy Coates
Hi folks

As there's been no chatter on this KIP, I'm assuming it's non-contentious
(or just boring), hence I'd like to call a vote for KIP-476:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface

Thanks,

Andy


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-04 Thread Viktor Somogyi-Vass
Thanks Vahid!

On Tue, Jun 4, 2019 at 5:20 PM Colin McCabe  wrote:

> Thanks, Vahid.
>
> best,
> Colin
>


[jira] [Created] (KAFKA-8478) Poll for more records before forced processing

2019-06-04 Thread John Roesler (JIRA)
John Roesler created KAFKA-8478:
---

 Summary: Poll for more records before forced processing
 Key: KAFKA-8478
 URL: https://issues.apache.org/jira/browse/KAFKA-8478
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


While analyzing Streams's poll/process loop, I noticed the following.
The algorithm of runOnce is:

{code}
loop0:
  long poll for records (100ms)
  loop1:
    loop2: for BATCH_SIZE iterations:
      process one record in each task that has data enqueued
    adjust BATCH_SIZE
    if loop2 processed any records, repeat loop1
    else, break loop1 and repeat loop0
{code}

There's potentially an unwanted interaction between "keep processing as long as 
any record is processed" and forcing processing after `max.task.idle.ms`.

If there are two tasks, A and B, and A runs out of records on one input before 
B, then B could keep the processing loop running, and hence prevent A from 
getting any new records, until max.task.idle.ms expires, at which point A will 
force processing on its other input partition. The intent of idling is to at 
least give A a chance of getting more records on the empty input, but under 
this situation, we'd never even check for more records before forcing 
processing.

I'm thinking we should only enforce processing if there was a completed poll 
since we noticed the task was missing inputs (otherwise, we may as well not 
bother idling at all).
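
A sketch of the guard I have in mind (all names here are illustrative, not
the actual StreamThread members):

{code}
boolean idleExpired = now - idleStartTime >= maxTaskIdleMs;
boolean polledSinceIdleStart = lastPollCompletedTime >= idleStartTime;

// force processing only if a poll has completed since we first noticed
// the task was missing inputs; otherwise idling bought us nothing
if (allInputsBuffered || (idleExpired && polledSinceIdleStart)) {
    processTask();
}
{code}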



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-04 Thread Colin McCabe
Thanks, Vahid.

best,
Colin



RE: Kafka Connect

2019-06-04 Thread meirkhan.rakhmetzhanov
Yeah, I tried it. But it works only in CDC mode, and I need a connector
which executes queries.

-----Original Message-----
From: Sagar [mailto:sagarmeansoc...@gmail.com]
Sent: Tuesday, June 4, 2019 15:29
To: dev@kafka.apache.org
Subject: Re: Kafka Connect

Hi meirkhan,

You can give debezium a shot. It has a connector for mongodb


On Tue, 4 Jun 2019 at 6:56 PM,  wrote:

> Hello Kafka!
>
> Here is my case. I need a Kafka connector for MongoDB which imitates the
> JDBC Source Connector, so that data is loaded into Kafka by periodically
> executing a query. I could not find any such tool. Can you give any
> resources or advice on it?
>
> Thank you,
> Meirkhan
>
>
> _
>
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.
>
>

_

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



Re: Kafka Connect

2019-06-04 Thread Sagar
Hi meirkhan,

You can give debezium a shot. It has a connector for mongodb


On Tue, 4 Jun 2019 at 6:56 PM,  wrote:

> Hello Kafka!
>
> Here is my case. I need a Kafka connector for MongoDB which imitates the
> JDBC Source Connector, so that data is loaded into Kafka by periodically
> executing a query. I could not find any such tool. Can you give any
> resources or advice on it?
>
> Thank you,
> Meirkhan
>
>
> _
>
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.
>
>


Kafka Connect

2019-06-04 Thread meirkhan.rakhmetzhanov
Hello Kafka!

Here is my case. I need a Kafka connector for MongoDB which imitates the JDBC
Source Connector, so that data is loaded into Kafka by periodically executing
a query. I could not find any such tool. Can you give any resources or advice
on it?

Thank you,
Meirkhan

_

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



Build failed in Jenkins: kafka-2.3-jdk8 #38

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8155: Add 2.2.0 release to system tests (#6597)

--
[...truncated 1.87 MB...]
kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateWithMultiPartitionTopicAndMultipleConsumers PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateOfExistingGroupWithRoundRobinAssignor STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateOfExistingGroupWithRoundRobinAssignor PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeOffsetsOfNonExistingGroup 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeOffsetsOfNonExistingGroup 
PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeAllExistingGroups STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeAllExistingGroups PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeWithConsumersWithoutAssignedPartitions STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeWithConsumersWithoutAssignedPartitions PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeGroupWithShortInitializationTimeout STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeGroupWithShortInitializationTimeout PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeOffsetsWithMultiPartitionTopicAndMultipleConsumers STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeOffsetsWithMultiPartitionTopicAndMultipleConsumers PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateOfExistingGroupWithNoMembers STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateOfExistingGroupWithNoMembers PASSED

kafka.admin.BrokerApiVersionsCommandTest > checkBrokerApiVersionCommandOutput 
STARTED

kafka.admin.BrokerApiVersionsCommandTest > checkBrokerApiVersionCommandOutput 
PASSED

kafka.admin.ListConsumerGroupTest > testListWithUnrecognizedNewConsumerOption 
STARTED

kafka.admin.ListConsumerGroupTest > testListWithUnrecognizedNewConsumerOption 
PASSED

kafka.admin.ListConsumerGroupTest > testListConsumerGroups STARTED

kafka.admin.ListConsumerGroupTest > testListConsumerGroups PASSED

kafka.admin.TimeConversionTests > testDateTimeFormats STARTED

kafka.admin.TimeConversionTests > testDateTimeFormats PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Build failed in Jenkins: kafka-2.3-jdk8 #37

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8404: Add HttpHeader to RestClient HTTP Request and Connector 
REST

--
[...truncated 2.91 MB...]

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3700

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Allow multi-batches for old format and no compression (#6871)

[rhauch] KAFKA-8404: Add HttpHeader to RestClient HTTP Request and Connector 
REST

[github] KAFKA-8155: Add 2.2.0 release to system tests (#6597)

--
[...truncated 2.51 MB...]
org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > 

Build failed in Jenkins: kafka-2.0-jdk8 #273

2019-06-04 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8404: Add HttpHeader to RestClient HTTP Request and Connector 
REST

--
[...truncated 441.32 KB...]
kafka.server.ClientQuotaManagerTest > testUserQuotaParsing STARTED

kafka.server.ClientQuotaManagerTest > testUserQuotaParsing PASSED

kafka.server.ClientQuotaManagerTest > testClientIdQuotaParsing STARTED

kafka.server.ClientQuotaManagerTest > testClientIdQuotaParsing PASSED

kafka.server.ClientQuotaManagerTest > testQuotaViolation STARTED

kafka.server.ClientQuotaManagerTest > testQuotaViolation PASSED

kafka.server.ClientQuotaManagerTest > testRequestPercentageQuotaViolation 
STARTED

kafka.server.ClientQuotaManagerTest > testRequestPercentageQuotaViolation PASSED

kafka.server.ClientQuotaManagerTest > testQuotaConfigPrecedence STARTED

kafka.server.ClientQuotaManagerTest > testQuotaConfigPrecedence PASSED

kafka.server.ClientQuotaManagerTest > testExpireQuotaSensors STARTED

kafka.server.ClientQuotaManagerTest > testExpireQuotaSensors PASSED

kafka.server.ClientQuotaManagerTest > testClientIdNotSanitized STARTED

kafka.server.ClientQuotaManagerTest > testClientIdNotSanitized PASSED

kafka.server.ClientQuotaManagerTest > testExpireThrottleTimeSensor STARTED

kafka.server.ClientQuotaManagerTest > testExpireThrottleTimeSensor PASSED

kafka.server.ClientQuotaManagerTest > testUserClientIdQuotaParsing STARTED

kafka.server.ClientQuotaManagerTest > testUserClientIdQuotaParsing PASSED

kafka.server.ClientQuotaManagerTest > 
testUserClientQuotaParsingIdWithDefaultClientIdQuota STARTED

kafka.server.ClientQuotaManagerTest > 
testUserClientQuotaParsingIdWithDefaultClientIdQuota PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
testCompleteInDelayedFetchWithReplicaThrottling STARTED

kafka.server.ReplicaManagerQuotasTest > 
testCompleteInDelayedFetchWithReplicaThrottling PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncryption STARTED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncryption PASSED

kafka.server.DynamicBrokerConfigTest > testSecurityConfigs STARTED

kafka.server.DynamicBrokerConfigTest > testSecurityConfigs PASSED

kafka.server.DynamicBrokerConfigTest > testSynonyms STARTED

kafka.server.DynamicBrokerConfigTest > testSynonyms PASSED

kafka.server.DynamicBrokerConfigTest > 
testDynamicConfigInitializationWithoutConfigsInZK STARTED

kafka.server.DynamicBrokerConfigTest > 
testDynamicConfigInitializationWithoutConfigsInZK PASSED

kafka.server.DynamicBrokerConfigTest > testConfigUpdateWithSomeInvalidConfigs 
STARTED

kafka.server.DynamicBrokerConfigTest > testConfigUpdateWithSomeInvalidConfigs 
PASSED

kafka.server.DynamicBrokerConfigTest > testDynamicListenerConfig STARTED

kafka.server.DynamicBrokerConfigTest > testDynamicListenerConfig PASSED

kafka.server.DynamicBrokerConfigTest > testReconfigurableValidation STARTED

kafka.server.DynamicBrokerConfigTest > testReconfigurableValidation PASSED

kafka.server.DynamicBrokerConfigTest > testConfigUpdate STARTED

kafka.server.DynamicBrokerConfigTest > testConfigUpdate PASSED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncoderSecretChange 
STARTED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncoderSecretChange 
PASSED

kafka.server.DynamicBrokerConfigTest > 
testConfigUpdateWithReconfigurableValidationFailure STARTED

kafka.server.DynamicBrokerConfigTest > 
testConfigUpdateWithReconfigurableValidationFailure PASSED

kafka.server.ThrottledChannelExpirationTest > testThrottledChannelDelay STARTED

kafka.server.ThrottledChannelExpirationTest > testThrottledChannelDelay PASSED

kafka.server.ThrottledChannelExpirationTest > 
testCallbackInvocationAfterExpiration STARTED

kafka.server.ThrottledChannelExpirationTest > 
testCallbackInvocationAfterExpiration PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest >