Kafka point in time recovery?

2019-04-18 Thread kumar
Is there any way to recover Kafka topics and brokers to a point in
time? I want to recover Kafka topics and brokers as of yesterday at 5 PM,
without any data that arrived in Kafka after that time. I read about
mirroring data using Kafka Replicator, uReplicator, etc., but all the
mirroring tools do is copy data from one Kafka cluster to another in
real-time async mode. I am looking for a feature in Kafka that supports
point-in-time recovery like Oracle Database or Elasticsearch.

What is working for us is taking snapshots of the Kafka VMs and
recovering from those VM snapshots. This also preserves consumer offsets.
We have not faced any issues so far, but I am not sure whether this is an
appropriate method.
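
A consumer-side partial workaround is to replay only the records produced
before the cutoff, using the consumer's offsetsForTimes API. This only
filters what a consumer reads; it does not restore broker state. A rough
sketch, where the topic name, bootstrap servers, group id, and cutoff
timestamp are placeholders:

import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class ReplayBeforeCutoff {
    public static void main(String[] args) {
        long cutoffMs = 1555606800000L; // placeholder: yesterday 5 PM, epoch millis

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) {
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);

            // First offset with timestamp >= cutoff, per partition; records
            // below that offset were produced before the cutoff.
            Map<TopicPartition, Long> query = new HashMap<>();
            for (TopicPartition tp : partitions) {
                query.put(tp, cutoffMs);
            }
            Map<TopicPartition, OffsetAndTimestamp> byTime = consumer.offsetsForTimes(query);
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);

            // Stop offset: the cutoff offset if one exists, else the partition end.
            Map<TopicPartition, Long> stopOffsets = new HashMap<>();
            for (TopicPartition tp : partitions) {
                OffsetAndTimestamp oat = byTime.get(tp);
                stopOffsets.put(tp, oat != null ? oat.offset() : endOffsets.get(tp));
            }

            consumer.seekToBeginning(partitions);
            Set<TopicPartition> remaining = new HashSet<>(partitions);
            while (!remaining.isEmpty()) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                    if (record.offset() < stopOffsets.get(tp)) {
                        // process(record): produced before the cutoff
                    } else {
                        remaining.remove(tp); // reached post-cutoff data
                    }
                }
                // Retire partitions whose position has reached the stop offset.
                remaining.removeIf(tp -> consumer.position(tp) >= stopOffsets.get(tp));
            }
        }
    }
}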

Thanks,
AK


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Ankur Rana
Congratulations Matthias.

On Fri, Apr 19, 2019 at 4:39 AM Manikumar  wrote:

> Congrats Matthias! Well deserved.
>
> On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:
>
> > Congratulations Matthias!
> >
> > Very well deserved!
> >
> > On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang 
> wrote:
> >
> > > Hello Everyone,
> > >
> > > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> > >
> > > Matthias has been a committer since Jan. 2018, and since then he
> > continued
> > > to be active in the community and made significant contributions to the
> > > project.
> > >
> > >
> > > Congratulations to Matthias!
> > >
> > > -- Guozhang
> > >
> >
>


-- 
Thanks,

Ankur Rana
Software Developer
FarEye


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Vahid Hashemian
Congratulations Matthias!

--Vahid

On Thu, Apr 18, 2019 at 9:39 PM Manikumar  wrote:

> Congrats Matthias! Well deserved.
>
> On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:
>
> > Congratulations Matthias!
> >
> > Very well deserved!
> >
> > On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang 
> wrote:
> >
> > > Hello Everyone,
> > >
> > > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> > >
> > > Matthias has been a committer since Jan. 2018, and since then he
> > continued
> > > to be active in the community and made significant contributions to the
> > > project.
> > >
> > >
> > > Congratulations to Matthias!
> > >
> > > -- Guozhang
> > >
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-18 Thread Manikumar
Congrats Harsha!

On Fri, Apr 19, 2019 at 7:43 AM Dong Lin  wrote:

> Congratulations Sriharsh!
>
> On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > Sriharsh Chintalapan has been active in the Kafka community since he
> became
> > a Kafka committer in 2015. I am glad to announce that Harsh is now a
> member
> > of Kafka PMC.
> >
> > Congratulations, Harsh!
> >
> > Jun
> >
>


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Manikumar
Congrats Matthias! Well deserved.

On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:

> Congratulations Matthias!
>
> Very well deserved!
>
> On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang  wrote:
>
> > Hello Everyone,
> >
> > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> >
> > Matthias has been a committer since Jan. 2018, and since then he
> continued
> > to be active in the community and made significant contributions to the
> > project.
> >
> >
> > Congratulations to Matthias!
> >
> > -- Guozhang
> >
>


Build failed in Jenkins: kafka-2.0-jdk8 #252

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: remove unused import (#6605)

--
[...truncated 439.76 KB...]
kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.TopicFilterTest > testWhitelists STARTED

kafka.utils.TopicFilterTest > testWhitelists PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > 

Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Dong Lin
Congratulations Matthias!

Very well deserved!

On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang  wrote:

> Hello Everyone,
>
> I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
>
> Matthias has been a committer since Jan. 2018, and since then he continued
> to be active in the community and made significant contributions to the
> project.
>
>
> Congratulations to Matthias!
>
> -- Guozhang
>


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-18 Thread Dong Lin
Congratulations Sriharsh!

On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:

> Hi, Everyone,
>
> Sriharsh Chintalapan has been active in the Kafka community since he became
> a Kafka committer in 2015. I am glad to announce that Harsh is now a member
> of Kafka PMC.
>
> Congratulations, Harsh!
>
> Jun
>


Build failed in Jenkins: kafka-trunk-jdk11 #448

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-6958: Allow to name operation using parameter classes (#6410)

--
[...truncated 2.38 MB...]
org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTransformations STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTransformations PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInHeaderConverter STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInHeaderConverter PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleNonRetriableErrorThrice STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleNonRetriableErrorThrice PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testCheckRetryLimit STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testCheckRetryLimit PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testBackoffLimit STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testBackoffLimit PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testToleranceLimit STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testToleranceLimit PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testDefaultConfigs STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testDefaultConfigs PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testSetConfigs STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testSetConfigs PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInValueConverter STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInValueConverter PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInKeyConverter STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInKeyConverter PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTaskPut STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTaskPut PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTaskPoll STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testHandleExceptionInTaskPoll PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInTaskPut STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInTaskPut PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInTaskPoll STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInTaskPoll PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInKafkaConsume STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInKafkaConsume PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInKafkaProduce STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testThrowExceptionInKafkaProduce PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleRetriableErrorOnce STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleRetriableErrorOnce PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleRetriableErrorThrice STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleRetriableErrorThrice PASSED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleNonRetriableErrorOnce STARTED

org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest > 
testExecAndHandleNonRetriableErrorOnce PASSED

org.apache.kafka.connect.runtime.errors.ErrorReporterTest > 
testDLQConfigWithValidTopicName STARTED


Re: [VOTE] KIP-258: Allow to Store Record Timestamps in RocksDB

2019-04-18 Thread Matthias J. Sax
Quick update to the KIP. While working on

https://github.com/apache/kafka/pull/6601

I realized that I forgot to list the following three new
methods that we also need to add:

> public static KeyValueBytesStoreSupplier persistentTimestampedKeyValueStore(final String name);
>
> public static WindowBytesStoreSupplier persistentTimestampedWindowStore(final String name,
>                                                                         final Duration retentionPeriod,
>                                                                         final Duration windowSize,
>                                                                         final boolean retainDuplicates);
>
> public static SessionBytesStoreSupplier persistentTimestampedSessionStore(final String name,
>                                                                           final Duration retentionPeriod);

I updated the KIP accordingly.


I don't think there is any need to revote, because this is a minor and
straightforward change to the KIP.
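
For reference, a minimal sketch of how one of the new suppliers plugs
into the DSL, assuming the PR lands as proposed (the store name, topic
name, and serdes below are just placeholders):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.Stores;

// Materialize a KTable on the new timestamped key-value store, so record
// timestamps are persisted next to key and value.
StreamsBuilder builder = new StreamsBuilder();
KeyValueBytesStoreSupplier supplier =
    Stores.persistentTimestampedKeyValueStore("counts-store");
builder.table(
    "input-topic",
    Materialized.<String, Long>as(supplier)
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.Long()));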


-Matthias


On 1/28/19 6:32 PM, Matthias J. Sax wrote:
> Hi,
> 
> during PR reviews, we discovered a couple of opportunities to simplify and
> improve the KIP and code. Thus, the following minor changes to the
> public API are done (the KIP is already updated). A revote is not
> necessary, as the changes are minor.
> 
>  - interface `ValueAndTimestamp` is going to be a class
> 
>  - interface `RecordConverter` is renamed to `TimestampedBytesStore` and
> we add a static method that converts values from old to new format
> 
>  - the three new interfaces `TimestampedXxxStore` don't add any new methods
> 
> 
> 
> Let us know if there are any objections. I can also provide more details
> why those changes make sense.
> 
> Thanks a lot!
> 
> 
> -Matthias
> 
> 
> On 1/18/19 10:00 PM, Matthias J. Sax wrote:
>> +1 from myself.
>>
>>
>> I am also closing this vote. The KIP is accepted with
>>
>> - 3 binding votes (Damian, Guozhang, Matthias)
>> - 3 non-binding votes (Bill, Patrik, John)
>>
>>
>> Thanks for the discussion and voting.
>>
>>
>> -Matthias
>>
>>
>> On 1/16/19 10:35 AM, John Roesler wrote:
>>> +1 (nonbinding) from me.
>>>
>>> Thanks for the KIP, Matthias.
>>>
>>> -John
>>>
>>> On Wed, Jan 16, 2019 at 12:01 PM Guozhang Wang  wrote:
>>>
 Thanks Matthias, I left some minor comments, but since they do not
 involve any major architectural changes and I do not feel strongly about
 the naming etc., I'd +1 on the proposal as well.

 Feel free to reply / accept or reject my suggestions on the other DISCUSS
 thread.


 Guozhang

 On Wed, Jan 16, 2019 at 6:38 AM Damian Guy  wrote:

> +1
>
> On Wed, 16 Jan 2019 at 05:09, Patrik Kleindl  wrote:
>
>> +1 (non-binding)
>> Thanks too
>> Best regards
>> Patrik
>>
>>> Am 16.01.2019 um 03:30 schrieb Bill Bejeck :
>>>
>>> Thanks for the KIP Matthias.
>>>
>>> +1
>>>
>>> -Bill
>>>
>>> On Tue, Jan 15, 2019 at 7:33 PM Matthias J. Sax <
 matth...@confluent.io
>>
>>> wrote:
>>>
 Hi,

 I would like to start the vote for KIP-258:



>>
>
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB

 The KIP adds new stores that allow storing record timestamps next to
 key and value. Additionally, we will allow upgrading existing stores to
 the new stores; this will allow us to use the new stores in the DSL with
 a smooth upgrade path.


 -Matthias


>>
>


 --
 -- Guozhang

>>>
>>
> 





signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP-451: Make TopologyTestDriver output iterable

2019-04-18 Thread Matthias J. Sax
I am not sure if (C) is the best option to pick.

What is the reasoning to suggest (C) over the other options?

It seems that users cannot clear buffered output using option (C). This
might make it difficult to write tests.

The original Jira tickets suggest:

> which returns either an iterator or list over the records that are currently 
> available in the topic

This implies that the current buffer would be cleared when getting the
iterator.

Also, from my understanding, the idea of iterating, in general, is to
step through a finite collection of objects/elements. Hence, once
`hasNext()` returns `false`, it will never return `true` later on.

As John mentioned, Java also has support for streams, which offer
different semantics that would align with option (C). However, I am not
sure this would be the right API for writing tests.

Thoughts?

In any case: whatever semantics we pick, the KIP should explain them.
At the moment, this part is missing from the KIP.
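
For concreteness, the consuming pattern that exists today, which any of
the options above would interact with, looks roughly like this (assuming
`driver` is a TopologyTestDriver; the topic name and deserializers are
placeholders):

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;

// readOutput() consumes one record per call and returns null once the
// buffered output for the topic is exhausted.
ProducerRecord<String, String> record;
while ((record = driver.readOutput("output-topic",
        new StringDeserializer(), new StringDeserializer())) != null) {
    // assert on record.key() / record.value()
}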


-Matthias

On 4/18/19 12:47 PM, Patrik Kleindl wrote:
> Hi John
> 
> Thanks for your feedback
> It's C; it does not consume the messages, in contrast to readOutput.
> Is it a requirement to do so?
> That's why I picked a different name, so the difference is more noticeable.
> I will add that to the JavaDoc.
> 
> I see your point regarding future changes; that's why I linked KIP-456,
> where such a method is proposed and which might allow deprecating my
> version in favor of a bigger solution.
> 
> Hope that answers your questions
> 
> best regards
> Patrik
> 
> 
> On Thu, 18 Apr 2019 at 19:46, John Roesler  wrote:
> 
>> Hi, Patrik,
>>
>> Thanks for this proposal!
>>
>> I have one question, which I didn't see addressed by the KIP. Currently,
>> when you call `readOutput`, it consumes the result (removes it from the
>> test driver's output). Does your proposed method:
>> A: consume the whole output stream for that topic "atomically" when it
>> returns the iterable? (i.e., two calls in a row would guarantee the second
>> call is always an empty iterable?)
>> B: consume each record when we iterate over it? (i.e., this is like a
>> stream) If this is the case, is the returned object iterable once (uncached
>> stream), or could we iterate over it repeatedly (cached stream)?
>> C: not consume at all? (i.e., this is a view on the output topic, but we
>> need a separate method to consume/clear the output)
>> D: something else?
>>
>> Also, one suggestion: maybe name the method "readAllOutput" or something.
>> Specifically naming it "iterable" makes it awkward if we do want to tighten
>> the return type (e.g., to List) in the future. This is something we may
>> actually want to do, if there's an easy way to say, "assert that the output
>> equals [...some literal list...]".
>>
>> Thanks again!
>> -John
>>
>> On Wed, Apr 17, 2019 at 4:01 PM Patrik Kleindl  wrote:
>>
>>> Hi all
>>>
>>> Unless someone has objections I will start a VOTE thread tomorrow.
>>> The KIP adds two methods to the TopologyTestDriver and has no conflicts
>> for
>>> existing users.
>>> PR https://github.com/apache/kafka/pull/6556 is already being reviewed.
>>>
>>> Side-note:
>>>
>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver
>>> will
>>> provide a much larger solution for the TopologyTestDriver, but is just
>>> starting the discussion.
>>>
>>> best regards
>>>
>>> Patrik
>>>
>>> On Thu, 11 Apr 2019 at 22:14, Patrik Kleindl  wrote:
>>>
 Hi Matthias

 Thanks for the questions.

 Regarding the return type:
 Iterable offers the option of being used in a foreach loop directly and
>>> it
 gives you access to the .iterator method, too.
 (ref:

>>>
>> https://www.techiedelight.com/differences-between-iterator-and-iterable-in-java/
 )

 To return a List object would require an additional conversion and I
>>> don't see the immediate benefit.

 Regarding the ordering:
 outputRecordsByTopic gives back a Queue

 private final Map<String, Queue<ProducerRecord<byte[], byte[]>>>
>>> outputRecordsByTopic = new HashMap<>();

 which has a LinkedList behind it

 outputRecordsByTopic.computeIfAbsent(record.topic(), k -> new
>>> LinkedList<>()).add(record);

 So the order is handled by the linked list and should not be modified
>> by
 my changes,
 not even the .stream.map etc. (ref:

>>>
>> https://stackoverflow.com/questions/30258566/java-stream-map-and-collect-order-of-resulting-container
 )


 Then again, I am open to change it if people have some strong
>> preference

 best regards

 Patrik


 On Thu, 11 Apr 2019 at 17:45, Matthias J. Sax 
 wrote:

> Thanks for the KIP!
>
> Overall, this makes sense and can simplify testing.
>
> What I am wondering is, why you suggest to return an `Iterable`? Maybe
> returning an `Iterator` would make more sense? Or a List? Note that
>> the
> order of emits 

Re: [VOTE] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-18 Thread Matthias J. Sax
Thanks a lot for the KIP!


+1 (binding)


-Matthias

On 4/18/19 12:31 PM, Guozhang Wang wrote:
> +1 (binding).
> 
> Thanks.
> 
> On Thu, Apr 18, 2019 at 10:48 AM John Roesler  wrote:
> 
>> Thanks, Maarten!
>>
>> +1 (non-binding)
>>
>> -John
>>
>> On Wed, Apr 17, 2019 at 1:45 PM Bill Bejeck  wrote:
>>
>>> Thanks for the KIP.
>>>
>>> +1(binding)
>>>
>>> -Bill
>>>
>>> On Wed, Apr 17, 2019 at 12:58 PM Bruno Cadonna 
>> wrote:
>>>
 Hi Maarten Duijn,

 Thank you for driving this.

 +1 (non-binding)

 Best,
 Bruno

 On Wed, Apr 17, 2019 at 8:21 AM Maarten Duijn 
 wrote:

> Hello all,
>
> There has been informal agreement so I would like to call for a vote
>> on
> KIP-446: Add changelog topic configuration to KTable suppress. This
>>> will
> allow users to configure internal topics created by the suppress
>>> operator
> via the streams DSL.
>
> KIP:
>

>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
> JIRA: https://issues.apache.org/jira/browse/KAFKA-8147
> PR: will follow shortly
>
> Cheers,
> Maarten
>

>>>
>>
> 
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Reopened] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-04-18 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reopened KAFKA-7965:

  Assignee: Jason Gustafson  (was: huxihx)

I'm still seeing this: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3890/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/].
 

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 1.1.1, 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.1-jdk8 #166

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7866; Ensure no duplicate offsets after txn index append failure

--
[...truncated 923.79 KB...]

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit PASSED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols STARTED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols PASSED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable STARTED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable PASSED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testSelectProtocolChoosesCompatibleProtocol STARTED

kafka.coordinator.group.GroupMetadataTest > 
testSelectProtocolChoosesCompatibleProtocol PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithEmptyControlBatch STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithEmptyControlBatch PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreNonEmptyGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreNonEmptyGroup PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithTombstones STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithTombstones PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedTransactionalOffsetCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedTransactionalOffsetCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetCommitted STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetCommitted PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsWithoutGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsWithoutGroup 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupNotExists STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupNotExists PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadEmptyGroupWithOffsets STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadEmptyGroupWithOffsets PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testSerdeOffsetCommitValue 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testSerdeOffsetCommitValue 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadTransactionalOffsetCommitsFromMultipleProducers STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadTransactionalOffsetCommitsFromMultipleProducers PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptySimpleGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptySimpleGroup 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetWithExplicitRetention STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetWithExplicitRetention PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetFromOldCommit 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetFromOldCommit 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupWithLargeGroupMetadataRecord STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 

Build failed in Jenkins: kafka-2.2-jdk8 #86

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7866; Ensure no duplicate offsets after txn index append failure

--
[...truncated 2.51 MB...]

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-18 Thread Matthias J. Sax
Congrats!

-Matthias

On 4/18/19 2:11 PM, Vahid Hashemian wrote:
> Congratulations Harsh!
> 
> --Vahid
> 
> On Thu, Apr 18, 2019 at 12:35 PM Bill Bejeck  wrote:
> 
>> Congrats Harsh!
>>
>> -Bill
>>
>> On Thu, Apr 18, 2019 at 3:14 PM Guozhang Wang  wrote:
>>
>>> Congrats Harsh!
>>>
>>>
>>> Guozhang
>>>
>>> On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:
>>>
 Hi, Everyone,

 Sriharsh Chintalapan has been active in the Kafka community since he
>>> became
 a Kafka committer in 2015. I am glad to announce that Harsh is now a
>>> member
 of Kafka PMC.

 Congratulations, Harsh!

 Jun

>>>
>>>
>>> --
>>> -- Guozhang
>>>
>>
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Jonathan Santilli
Definitely well deserved! Congrats Matthias!

On Thu, Apr 18, 2019, 10:46 PM Bill Bejeck  wrote:

> Congrats Matthias! Well deserved!
>
> -Bill
>
> On Thu, Apr 18, 2019 at 5:35 PM Guozhang Wang  wrote:
>
> > Hello Everyone,
> >
> > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> >
> > Matthias has been a committer since Jan. 2018, and since then he
> continued
> > to be active in the community and made significant contributions to the
> > project.
> >
> >
> > Congratulations to Matthias!
> >
> > -- Guozhang
> >
>


[VOTE] KIP-421: Automatically resolve external configurations.

2019-04-18 Thread TEJAL ADSUL
Hi All,

As we have reached a consensus on the design, I would like to start a vote for 
KIP-421. Below are the links for this proposal:

KIP Link: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829515
DiscussionThread: 
https://lists.apache.org/thread.html/a2f834d876e9f8fb3977db794bf161818c97f7f481edd1b10449d89f@%3Cdev.kafka.apache.org%3E
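
For context, the KIP-297 external configuration syntax that this proposal
would resolve automatically across all components looks like this (the
file path and key names below are placeholders):

# Register a config provider; FileConfigProvider ships with Kafka.
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# With KIP-421, indirect values like this are resolved when the config is parsed.
ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}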

Thanks,
Tejal


Build failed in Jenkins: kafka-2.0-jdk8 #251

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7866; Ensure no duplicate offsets after txn index append failure

--
[...truncated 2.69 MB...]
org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

> Task :streams:examples:compileJava
> Task :streams:examples:processResources NO-SOURCE
> Task :streams:examples:classes
> Task :streams:examples:checkstyleMain
> Task :streams:examples:compileTestJava
> Task :streams:examples:processTestResources NO-SOURCE
> Task :streams:examples:testClasses
> Task :streams:examples:checkstyleTest
> Task :streams:examples:findbugsMain

> Task :streams:examples:test

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test 
STARTED

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test PASSED

> Task :streams:streams-scala:compileJava NO-SOURCE

> Task :streams:streams-scala:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.

> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:classes
> Task :streams:streams-scala:checkstyleMain NO-SOURCE
> Task :streams:streams-scala:compileTestJava NO-SOURCE

> Task :streams:streams-scala:compileTestScala
Pruning sources from previous analysis, due to incompatible CompileSetup.

> Task :streams:streams-scala:processTestResources UP-TO-DATE
> Task :streams:streams-scala:testClasses
> Task 

Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Bill Bejeck
Congrats Matthias! Well deserved!

-Bill

On Thu, Apr 18, 2019 at 5:35 PM Guozhang Wang  wrote:

> Hello Everyone,
>
> I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
>
> Matthias has been a committer since Jan. 2018, and since then he continued
> to be active in the community and made significant contributions to the
> project.
>
>
> Congratulations to Matthias!
>
> -- Guozhang
>


Jenkins build is back to normal : kafka-trunk-jdk8 #3568

2019-04-18 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-454: Expansion of the ConnectClusterState interface

2019-04-18 Thread Chris Egerton
Hi Magesh,

Thanks for your comments. I'll address them in the order you provided them:

1 - Reason for exposing task configurations to REST extensions:
Yes, the motivation is a little thin for exposing task configs to REST
extensions. I can think of a few uses for this functionality, such as
attempting to infer problematic configurations by examining failed tasks
and comparing their configurations to the configurations of running tasks,
but as you've indicated, it's dubious that a REST extension is the best
place for anything like that.
I'd be interested to hear others' thoughts, but right now I'm not too
opposed to erring on the side of caution and leaving it out. Worst case, it
takes another KIP to add this later on down the road, but that's a small
price to pay to avoid adding support for a feature that nobody needs.

2. Usefulness of exposing Kafka cluster ID to REST extensions:
As the KIP states, "the Kafka cluster ID may be useful for the purpose of
uniquely identifying a Connect cluster from within a REST extension, since
users may be running multiple Kafka clusters and the group.id for a
distributed Connect cluster may not be sufficient to identify a cluster."
Even though there may be producer or consumer overrides for
bootstrap.servers present in the configuration for the worker, these will
not affect which Kafka cluster is used as a backing store for connector
configurations, offsets, and statuses, so the Kafka cluster ID for the
worker in conjunction with the Connect group ID should be sufficient to
uniquely identify a Connect cluster.
We can and should document that the Connect cluster with overridden
producer.bootstrap.servers or consumer.bootstrap.servers may be writing
to/reading from a different Kafka cluster. However, REST extensions are
already passed the configs for their worker through their configure(...)
method, so they'd be able to detect any such overrides and act accordingly.
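
For illustration, here is a bare-bones extension showing where both
pieces of information arrive today; a sketch only (the class name is a
placeholder, and the KIP-454 additions appear as comments since they are
not part of the released interface yet):

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.rest.ConnectRestExtension;
import org.apache.kafka.connect.rest.ConnectRestExtensionContext;

public class ClusterInfoExtension implements ConnectRestExtension {
    private Map<String, ?> workerConfigs;

    @Override
    public void configure(Map<String, ?> configs) {
        // Worker configs arrive here, including any producer./consumer.
        // bootstrap.servers overrides the extension may want to inspect.
        this.workerConfigs = configs;
    }

    @Override
    public void register(ConnectRestExtensionContext context) {
        // KIP-285 surface available today:
        Collection<String> connectors = context.clusterState().connectors();
        // KIP-454 would add connectorConfig(name), taskConfigs(name),
        // and kafkaClusterId() on the same object.
    }

    @Override
    public void close() {}

    @Override
    public String version() {
        return "1.0";
    }
}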

Thanks again for your thoughts!

Cheers,

Chris

On Thu, Apr 18, 2019 at 11:08 AM Magesh Nandakumar 
wrote:

> Hi Chris,
>
> Thanks for the KIP. Overall, it looks good and straightforward to me.
>
> I had a few questions on the new methods
>
> 1. I'm not sure if an extension would ever require the task configs. An
> extension generally should only require the health and current state of the
> connector, which includes the connector config. I was wondering if there is
> a specific reason it would need task configs.
> 2. Also, I'm not convinced that kafkaClusterId() belongs to the
> ConnectClusterState
> interface. The interface is generally meant to provide information about
> the Connect cluster itself.  Also, the kafkaClusterId could
> potentially change based on whether there is a "producer." or "consumer."
> prefix, right?
>
> Having said that, I would prefer to have connectorConfigs, which I think is
> a great idea and addition to the interface. Let me know what you think.
>
> Thanks,
> Magesh
>
> On Sat, Apr 13, 2019 at 9:00 PM Chris Egerton  wrote:
>
> > Hi all,
> >
> > I've posted "KIP-454: Expansion of the ConnectClusterState interface",
> > which proposes that we add provide more information about the Connect
> > cluster to REST extensions.
> >
> > The KIP can be found at
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface
> >
> > I'm eager to hear people's thoughts on this and will appreciate any
> > feedback.
> >
> > Cheers,
> >
> > Chris
> >
>


Re: [DISCUSSION] KIP-421: Automatically resolve external configurations.

2019-04-18 Thread Colin McCabe
Hi Tejal,

Thanks for the updates.  Looks good :)

best,
Colin


On Wed, Apr 17, 2019, at 16:53, TEJAL ADSUL wrote:
> Hi Rajini and Colin,
> 
> I have incorporated your feedback and have enabled the automatic
> resolution of configs for all the components. Also, to address Rajini's
> feedback, I have removed the flag to enable or disable the feature, as
> the user wouldn't be able to toggle it anyway, so it didn't add
> anything.
> 
> Could you please review the KIP and let me know whether it has addressed
> your concerns, along with any other feedback.
> 
> Thanks,
> Tejal
> 
> On 2019/04/17 23:34:37, TEJAL ADSUL  wrote: 
> > 
> > Thanks for the feedback, Colin. I agree we should enable it for all the
> > components, as this feature is going to benefit all the components and not
> > just Connect. I have updated the KIP to enable it for all the components.
> > 
> > Thanks,
> > Tejal
> > 
> > On 2019/04/17 23:22:50, "Colin McCabe"  wrote: 
> > > On Wed, Apr 17, 2019, at 07:49, TEJAL ADSUL wrote:
> > > > 
> > > > Hi Colin,
> > > > 
> > > >  By default we are enabling this feature only for Connect. All the other
> > > > components can enable or disable the feature as needed. 
> > > 
> > > Hi Tejal,
> > > 
> > > I believe we should enable automatically resolving external 
> > > configurations in all components, not just Connect.  I really can't 
> > > emphasize this point enough.  If all we care about is Connect, we don't 
> > > need a new KIP, because Connect already has external configuration 
> > > support.  I see no reason why we shouldn't just enable external 
> > > configuration support in every component.
> > > 
> > > > No, it won't reload the configuration if it's not configured using
> > > > KIP-226, as dynamic configs have to be reloaded explicitly,
> > > > triggered by the user using the Admin Client. For static configs we
> > > > don't perform any reloading; resolution happens only at construction time.
> > > 
> > > OK.  We probably should have some generic way of triggering a reload of 
> > > configurations, whether or not they're stored in ZK.  But I guess that 
> > > can wait for a different KIP.
> > > 
> > > best,
> > > Colin
> > > 
> > > > 
> > > > Thanks,
> > > > Tejal
> > > > 
> > > > On 2019/04/17 14:36:33, TEJAL ADSUL  wrote: 
> > > > > Hi Rajini,
> > > > > 
> > > > > The user won't have the ability to choose whether config resolution
> > > > > should be enabled/disabled. It's enabled/disabled per component by
> > > > > hardcoding the value. I have documented your recommendations in the
> > > > > compatibility section.
> > > > > 
> > > > > Thanks,
> > > > > Tejal
> > > > > 
> > > >
> > > 
> > 
>


[ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Guozhang Wang
Hello Everyone,

I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.

Matthias has been a committer since Jan. 2018, and since then he continued
to be active in the community and made significant contributions to the
project.


Congratulations to Matthias!

-- Guozhang


[jira] [Created] (KAFKA-8260) Flaky test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-04-18 Thread John Roesler (JIRA)
John Roesler created KAFKA-8260:
---

 Summary: Flaky test 
ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
 Key: KAFKA-8260
 URL: https://issues.apache.org/jira/browse/KAFKA-8260
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.3.0
Reporter: John Roesler


I have seen this fail again just now. See also KAFKA-7965 and KAFKA-7936.

https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3874/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/

{noformat}
Error Message
org.scalatest.junit.JUnitTestFailedError: Should have received an class 
org.apache.kafka.common.errors.GroupMaxSizeReachedException during the cluster 
roll
Stacktrace
org.scalatest.junit.JUnitTestFailedError: Should have received an class 
org.apache.kafka.common.errors.GroupMaxSizeReachedException during the cluster 
roll
at 
org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException(AssertionsForJUnit.scala:100)
at 
org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException$(AssertionsForJUnit.scala:99)
at 
org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
at org.scalatest.Assertions.fail(Assertions.scala:1089)
at org.scalatest.Assertions.fail$(Assertions.scala:1085)
at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71)
at 
kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:350)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8259) Build RPM for Kafka

2019-04-18 Thread Patrick Dignan (JIRA)
Patrick Dignan created KAFKA-8259:
-

 Summary: Build RPM for Kafka
 Key: KAFKA-8259
 URL: https://issues.apache.org/jira/browse/KAFKA-8259
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Patrick Dignan


RPM packaging eases the installation and deployment of Kafka and makes it
much more standard.

I noticed in https://issues.apache.org/jira/browse/KAFKA-1324 [~jkreps] closed 
the issue because other sources provide packaging.  I think it's worthwhile for 
the standard, open source project to provide this as a base to reduce redundant 
work and provide this functionality for users.  Other similar open source
software like Elasticsearch creates an RPM ([Elasticsearch
RPM|https://github.com/elastic/elasticsearch/blob/0ad3d90a36529bf369813ea6253f305e11aff2e9/distribution/packages/build.gradle]).
  This also makes forking internally more maintainable by reducing the amount 
of work to be done for each version upgrade.

I have a patch to add this functionality that I will clean up and PR on Github.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] KIP-451: Make TopologyTestDriver output iterable

2019-04-18 Thread Patrik Kleindl
Hello all,

I would like to start a vote for KIP-451 which adds two methods for the
TopologyTestDriver to iterate over the output.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-451%3A+Make+TopologyTestDriver+output+iterable
JIRA: https://issues.apache.org/jira/browse/KAFKA-8200
PR: https://github.com/apache/kafka/pull/6556

best regards

Patrik


Jenkins build is back to normal : kafka-trunk-jdk11 #447

2019-04-18 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-18 Thread Guozhang Wang
+1 (binding).

Thanks.

On Thu, Apr 18, 2019 at 10:48 AM John Roesler  wrote:

> Thanks, Maarten!
>
> +1 (non-binding)
>
> -John
>
> On Wed, Apr 17, 2019 at 1:45 PM Bill Bejeck  wrote:
>
> > Thanks for the KIP.
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Wed, Apr 17, 2019 at 12:58 PM Bruno Cadonna 
> wrote:
> >
> > > Hi Maarten Duijn,
> > >
> > > Thank you for driving this.
> > >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Bruno
> > >
> > > On Wed, Apr 17, 2019 at 8:21 AM Maarten Duijn 
> > > wrote:
> > >
> > > > Hello all,
> > > >
> > > > There has been informal agreement so I would like to call for a vote
> on
> > > > KIP-446: Add changelog topic configuration to KTable suppress. This
> > will
> > > > allow users to configure internal topics created by the suppress
> > operator
> > > > via the streams DSL.
> > > >
> > > > KIP:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
> > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-8147
> > > > PR: will follow shortly
> > > >
> > > > Cheers,
> > > > Maarten
> > > >
> > >
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-451: Make TopologyTestDriver output iterable

2019-04-18 Thread Patrik Kleindl
Hi John

Thanks for your feedback
It's C; it does not consume the messages, in contrast to readOutput.
Is it a requirement to do so?
That's why I picked a different name, so the difference is more noticeable.
I will add that to the JavaDoc.

I see your point regarding future changes; that's why I linked KIP-456,
where such a method is proposed and which might allow deprecating my
version in favor of a bigger solution.

Hope that answers your questions

best regards
Patrik


On Thu, 18 Apr 2019 at 19:46, John Roesler  wrote:

> Hi, Patrik,
>
> Thanks for this proposal!
>
> I have one question, which I didn't see addressed by the KIP. Currently,
> when you call `readOutput`, it consumes the result (removes it from the
> test driver's output). Does your proposed method:
> A: consume the whole output stream for that topic "atomically" when it
> returns the iterable? (i.e., two calls in a row would guarantee the second
> call is always an empty iterable?)
> B: consume each record when we iterate over it? (i.e., this is like a
> stream) If this is the case, is the returned object iterable once (uncached
> stream), or could we iterate over it repeatedly (cached stream)?
> C: not consume at all? (i.e., this is a view on the output topic, but we
> need a separate method to consume/clear the output)
> D: something else?
>
> Also, one suggestion: maybe name the method "readAllOutput" or something.
> Specifically naming it "iterable" makes it awkward if we do want to tighten
> the return type (e.g., to List) in the future. This is something we may
> actually want to do, if there's an easy way to say, "assert that the output
> equals [...some literal list...]".
>
> Thanks again!
> -John
>
> On Wed, Apr 17, 2019 at 4:01 PM Patrik Kleindl  wrote:
>
> > Hi all
> >
> > Unless someone has objections I will start a VOTE thread tomorrow.
> > The KIP adds two methods to the TopologyTestDriver and has no conflicts
> for
> > existing users.
> > PR https://github.com/apache/kafka/pull/6556 is already being reviewed.
> >
> > Side-note:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver
> > will
> > provide a much larger solution for the TopologyTestDriver, but is just
> > starting the discussion.
> >
> > best regards
> >
> > Patrik
> >
> > On Thu, 11 Apr 2019 at 22:14, Patrik Kleindl  wrote:
> >
> > > Hi Matthias
> > >
> > > Thanks for the questions.
> > >
> > > Regarding the return type:
> > > Iterable offers the option of being used in a foreach loop directly and
> > it
> > > gives you access to the .iterator method, too.
> > > (ref:
> > >
> >
> https://www.techiedelight.com/differences-between-iterator-and-iterable-in-java/
> > > )
> > >
> > > To return a List object would require an additional conversion and I
> > don't see the immediate benefit.
> > >
> > > Regarding the ordering:
> > > outputRecordsByTopic gives back a Queue
> > >
> > private final Map<String, Queue<ProducerRecord<byte[], byte[]>>>
> > outputRecordsByTopic = new HashMap<>();
> > >
> > > which has a LinkedList behind it
> > >
> > > outputRecordsByTopic.computeIfAbsent(record.topic(), k -> new
> > LinkedList<>()).add(record);
> > >
> > > So the order is handled by the linked list and should not be modified
> by
> > > my changes,
> > > not even the .stream.map etc. (ref:
> > >
> >
> https://stackoverflow.com/questions/30258566/java-stream-map-and-collect-order-of-resulting-container
> > > )
> > >
> > >
> > > Then again, I am open to change it if people have some strong
> preference
> > >
> > > best regards
> > >
> > > Patrik
> > >
> > >
> > > On Thu, 11 Apr 2019 at 17:45, Matthias J. Sax 
> > > wrote:
> > >
> > >> Thanks for the KIP!
> > >>
> > >> Overall, this makes sense and can simplify testing.
> > >>
> > >> What I am wondering is, why you suggest to return an `Iterable`? Maybe
> > >> returning an `Iterator` would make more sense? Or a List? Note that
> the
> > >> order of emits matters, thus returning a generic `Collection` would
> not
> > >> seem to be appropriate.
> > >>
> > >> Can you elaborate on the advantages to use `Iterable` compared to the
> > >> other options?
> > >>
> > >>
> > >>
> > >> -Matthias
> > >>
> > >> On 4/11/19 2:09 AM, Patrik Kleindl wrote:
> > >> > Hi everyone,
> > >> >
> > >> > I would like to start the discussion on this small enhancement of
> > >> > the TopologyTestDriver.
> > >> >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-451%3A+Make+TopologyTestDriver+output+iterable
> > >> >
> > >> > Pull request is available at
> > https://github.com/apache/kafka/pull/6556
> > >> >
> > >> > Any feedback is welcome
> > >> >
> > >> > best regards
> > >> >
> > >> > Patrik
> > >> >
> > >>
> > >>
> >
>


Re: New Contributor Request

2019-04-18 Thread Bill Bejeck
Brandt,

You should be all set now.

-Bill

On Thu, Apr 18, 2019 at 3:36 PM Newton, Brandt (CAI - Burlington) <
brandt.new...@coxautoinc.com> wrote:

> Hello,
>
> I’d like to contribute to Kafka. Can someone please add me as a
> contributor to JIRA so I can assign an issue to myself? My username is
> bnewton
>
> Thanks,
> Brandt
>


New Contributor Request

2019-04-18 Thread Newton, Brandt (CAI - Burlington)
Hello,

I’d like to contribute to Kafka. Can someone please add me as a contributor to 
JIRA so I can assign an issue to myself? My username is bnewton

Thanks,
Brandt


[jira] [Created] (KAFKA-8258) Verbose logs in org.apache.kafka.clients.consumer.internals.Fetcher

2019-04-18 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created KAFKA-8258:
---

 Summary: Verbose logs in 
org.apache.kafka.clients.consumer.internals.Fetcher
 Key: KAFKA-8258
 URL: https://issues.apache.org/jira/browse/KAFKA-8258
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Shixiong Zhu


We noticed that Spark's Kafka connector outputs a lot of verbose logs like 
the following:
{code}
19/04/18 04:31:06 INFO Fetcher: [Consumer clientId=consumer-2, groupId=...] 
Resetting offset for partition ... to offset  
{code}

It comes from 
https://github.com/hachikuji/kafka/blob/76c796ca128c3c97231f3ebda994a07bb06b26aa/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L567

This log was added in https://github.com/apache/kafka/pull/4557

In Spark, we use `seekToEnd` to discover the latest offsets of a topic. If 
there are thousands of partitions in this topic, it will output thousands of 
INFO logs.
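
For reference, a sketch of that discovery pattern using only the standard
consumer API (the helper itself is illustrative):

{code}
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class EndOffsetsSketch {
    // Discover the end offset of each partition via seekToEnd + position.
    static Map<TopicPartition, Long> latestOffsets(Consumer<?, ?> consumer,
                                                   Collection<TopicPartition> partitions) {
        consumer.assign(partitions);
        consumer.seekToEnd(partitions);
        Map<TopicPartition, Long> result = new HashMap<>();
        for (TopicPartition tp : partitions) {
            // position() resolves the pending offset reset; the Fetcher emits
            // one "Resetting offset for partition ..." INFO line per partition here.
            result.put(tp, consumer.position(tp));
        }
        return result;
    }
}
{code}

For pure discovery, Consumer#endOffsets(partitions) returns the same numbers
without going through the reset path, which should avoid the per-partition log
line.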

Is it intentional? If not, can we change it to DEBUG?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.0-jdk8 #250

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Ensure producer state append exceptions are useful (#6591)

--
[...truncated 2.68 MB...]
org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

> Task :streams:examples:compileJava
> Task :streams:examples:processResources NO-SOURCE
> Task :streams:examples:classes
> Task :streams:examples:checkstyleMain
> Task :streams:examples:compileTestJava
> Task :streams:examples:processTestResources NO-SOURCE
> Task :streams:examples:testClasses
> Task :streams:examples:checkstyleTest
> Task :streams:examples:findbugsMain

> Task :streams:examples:test

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test 
STARTED

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test PASSED

> Task :streams:streams-scala:compileJava NO-SOURCE

> Task :streams:streams-scala:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.

> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:classes
> Task :streams:streams-scala:checkstyleMain NO-SOURCE
> Task :streams:streams-scala:compileTestJava NO-SOURCE

> Task :streams:streams-scala:compileTestScala
Pruning sources from previous analysis, due to incompatible CompileSetup.

> Task :streams:streams-scala:processTestResources UP-TO-DATE
> Task :streams:streams-scala:testClasses
> Task 

[ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-18 Thread Jun Rao
Hi, Everyone,

Sriharsh Chintalapan has been active in the Kafka community since he became
a Kafka committer in 2015. I am glad to announce that Harsh is now a member
of Kafka PMC.

Congratulations, Harsh!

Jun


[jira] [Resolved] (KAFKA-7866) Duplicate offsets after transaction index append failure

2019-04-18 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7866.

   Resolution: Fixed
 Assignee: Jason Gustafson
Fix Version/s: 2.2.1
   2.1.2
   2.0.2

> Duplicate offsets after transaction index append failure
> 
>
> Key: KAFKA-7866
> URL: https://issues.apache.org/jira/browse/KAFKA-7866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> We have encountered a situation in which an ABORT marker was written 
> successfully to the log, but failed to be written to the transaction index. 
> This prevented the log end offset from being incremented. This resulted in 
> duplicate offsets when the next append was attempted. The broker was using 
> JBOD and we would normally expect IOExceptions to cause the log directory to 
> be failed. That did not seem to happen here and the duplicates continued for 
> several hours.
> Unfortunately, we are not sure what the cause of the failure was. 
> Significantly, the first duplicate was also the first ABORT marker in the 
> log. Unlike the offset and timestamp index, the transaction index is created 
> on demand after the first aborted transction. It is likely that the attempt 
> to create and open the transaction index failed. There is some suggestion 
> that the process may have bumped into the open file limit. Whatever the 
> problem was, it also prevented log collection, so we cannot confirm our 
> guesses. 
> Without knowing the underlying cause, we can still consider some potential 
> improvements:
> 1. We probably should be catching non-IO exceptions in the append process. If 
> the append to one of the indexes fails, we could truncate the log or re-throw 
> the error as an IOException to ensure that the log directory is no longer 
> used.
> 2. Even without the unexpected exception, there is a small window during 
> which even an IOException could lead to duplicate offsets. Marking a log 
> directory offline is an asynchronous operation and there is no guarantee that 
> another append cannot happen first. Given this, we probably need to detect 
> and truncate duplicates during the log recovery process.
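
A minimal sketch of what suggestion 1 could look like (names here are
hypothetical, not Kafka's actual internals): route unexpected index-append
failures onto the IOException path that fails the log directory.

{code}
import java.io.IOException;

public class IndexAppendSketch {
    @FunctionalInterface
    interface IndexAppend {
        void run() throws IOException;
    }

    // Any unexpected failure is rethrown as an IOException, which is what
    // triggers the offline-log-directory handling on JBOD setups.
    static void appendSafely(IndexAppend append) throws IOException {
        try {
            append.run();
        } catch (IOException e) {
            throw e; // already handled: the log dir gets marked offline
        } catch (RuntimeException e) {
            throw new IOException("Unexpected failure appending to an index", e);
        }
    }
}
{code}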



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8257) Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota

2019-04-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8257:
--

 Summary: Flaky Test 
DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota
 Key: KAFKA-8257
 URL: https://issues.apache.org/jira/browse/KAFKA-8257
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3566/tests]
{quote}java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at kafka.server.BaseRequestTest.receiveResponse(BaseRequestTest.scala:87)
at kafka.server.BaseRequestTest.sendAndReceive(BaseRequestTest.scala:148)
at 
kafka.network.DynamicConnectionQuotaTest.verifyConnection(DynamicConnectionQuotaTest.scala:229)
at 
kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4(DynamicConnectionQuotaTest.scala:133)
at 
kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4$adapted(DynamicConnectionQuotaTest.scala:133)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at 
kafka.network.DynamicConnectionQuotaTest.testDynamicListenerConnectionQuota(DynamicConnectionQuotaTest.scala:133){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-454: Expansion of the ConnectClusterState interface

2019-04-18 Thread Magesh Nandakumar
Hi Chris,

Thanks for the KIP. Overall, it looks good and straightforward to me.

I had a few questions on the new methods

1. I'm not sure if an extension would ever require the task configs. An
extension generally should only require the health and current state of the
connector, which includes the connector config. I was wondering if there is
a specific reason it would need task configs.
2. Also, I'm not convinced that kafkaClusterId() belongs to the
ConnectClusterState interface, which is generally meant to provide
information about the Connect cluster itself. Also, the kafkaClusterId could
potentially change based on whether there is a "producer." or "consumer."
prefix, right?

Having said that, I would prefer to have connectorConfigs, which I think is
a great idea and a good addition to the interface. Let me know what you think.
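
For context, a rough sketch of the interface with the proposed additions (my
reading of the proposal; the added signatures, including the taskConfigs
return type, are not final):

{code}
import java.util.Collection;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.health.ConnectorHealth;

public interface ConnectClusterState {
    Collection<String> connectors();                        // existing
    ConnectorHealth connectorHealth(String connName);       // existing
    Map<String, String> connectorConfig(String connName);   // proposed
    List<Map<String, String>> taskConfigs(String connName); // proposed (question 1 above)
    String kafkaClusterId();                                // proposed (question 2 above)
}
{code}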

Thanks,
Magesh

On Sat, Apr 13, 2019 at 9:00 PM Chris Egerton  wrote:

> Hi all,
>
> I've posted "KIP-454: Expansion of the ConnectClusterState interface",
> which proposes that we add provide more information about the Connect
> cluster to REST extensions.
>
> The KIP can be found at
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface
>
> I'm eager to hear people's thoughts on this and will appreciate any
> feedback.
>
> Cheers,
>
> Chris
>


Re: [VOTE] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-18 Thread John Roesler
Thanks, Maarten!

+1 (non-binding)

-John

On Wed, Apr 17, 2019 at 1:45 PM Bill Bejeck  wrote:

> Thanks for the KIP.
>
> +1(binding)
>
> -Bill
>
> On Wed, Apr 17, 2019 at 12:58 PM Bruno Cadonna  wrote:
>
> > Hi Maarten Duijn,
> >
> > Thank you for driving this.
> >
> > +1 (non-binding)
> >
> > Best,
> > Bruno
> >
> > On Wed, Apr 17, 2019 at 8:21 AM Maarten Duijn 
> > wrote:
> >
> > > Hello all,
> > >
> > > There has been informal agreement so I would like to call for a vote on
> > > KIP-446: Add changelog topic configuration to KTable suppress. This
> will
> > > allow users to configure internal topics created by the suppress
> operator
> > > via the streams DSL.
> > >
> > > KIP:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
> > > JIRA: https://issues.apache.org/jira/browse/KAFKA-8147<
> > > https://issues.apache.org/jira/browse/KAFKA-8029>
> > > PR: will follow shortly
> > >
> > > Cheers,
> > > Maarten
> > >
> >
>


Re: [DISCUSS] KIP-451: Make TopologyTestDriver output iterable

2019-04-18 Thread John Roesler
Hi, Patrik,

Thanks for this proposal!

I have one question, which I didn't see addressed by the KIP. Currently,
when you call `readOutput`, it consumes the result (removes it from the
test driver's output). Does your proposed method:
A: consume the whole output stream for that topic "atomically" when it
returns the iterable? (i.e., two calls in a row would guarantee the second
call is always an empty iterable?)
B: consume each record when we iterate over it? (i.e., this is like a
stream) If this is the case, is the returned object iterable once (uncached
stream), or could we iterate over it repeatedly (cached stream)?
C: not consume at all? (i.e., this is a view on the output topic, but we
need a separate method to consume/clear the output)
D: something else?

Also, one suggestion: maybe name the method "readAllOutput" or something.
Specifically naming it "iterable" makes it awkward if we do want to tighten
the return type (e.g., to List) in the future. This is something we may
actually want to do, if there's an easy way to say, "assert that the output
equals [...some literal list...]".
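
For example, something like this (readAllOutput is the suggested name, not an
existing method; the driver, deserializers, expected records, and base class
are assumed test fixtures):

{code}
import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OutputAssertionSketch extends StreamsTestBase { // fixture base class assumed
    @Test
    public void shouldEmitExactlyTheExpectedRecords() {
        // Hypothetical: assert a topic's complete output in one call,
        // which a List return type would make straightforward.
        assertEquals(asList(expectedRecord1, expectedRecord2),
                driver.readAllOutput("output-topic", keyDeserializer, valueDeserializer));
    }
}
{code}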

Thanks again!
-John

On Wed, Apr 17, 2019 at 4:01 PM Patrik Kleindl  wrote:

> Hi all
>
> Unless someone has objections I will start a VOTE thread tomorrow.
> The KIP adds two methods to the TopologyTestDriver and has no conflicts for
> existing users.
> PR https://github.com/apache/kafka/pull/6556 is already being reviewed.
>
> Side-note:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver
> will
> provide a much larger solution for the TopologyTestDriver, but is just
> starting the discussion.
>
> best regards
>
> Patrik
>
> On Thu, 11 Apr 2019 at 22:14, Patrik Kleindl  wrote:
>
> > Hi Matthias
> >
> > Thanks for the questions.
> >
> > Regarding the return type:
> > Iterable offers the option of being used in a foreach loop directly and
> it
> > gives you access to the .iterator method, too.
> > (ref:
> >
> https://www.techiedelight.com/differences-between-iterator-and-iterable-in-java/
> > )
> >
> > To return a List object would require an additional conversion and I
> don't see the immediate benefit.
> >
> > Regarding the ordering:
> > outputRecordsByTopic gives back a Queue
> >
> > private final Map<String, Queue<ProducerRecord<byte[], byte[]>>>
> outputRecordsByTopic = new HashMap<>();
> >
> > which has a LinkedList behind it
> >
> > outputRecordsByTopic.computeIfAbsent(record.topic(), k -> new
> LinkedList<>()).add(record);
> >
> > So the order is handled by the linked list and should not be modified by
> > my changes,
> > not even the .stream.map etc. (ref:
> >
> https://stackoverflow.com/questions/30258566/java-stream-map-and-collect-order-of-resulting-container
> > )
> >
> >
> > Then again, I am open to change it if people have some strong preference
> >
> > best regards
> >
> > Patrik
> >
> >
> > On Thu, 11 Apr 2019 at 17:45, Matthias J. Sax 
> > wrote:
> >
> >> Thanks for the KIP!
> >>
> >> Overall, this makes sense and can simplify testing.
> >>
> >> What I am wondering is, why you suggest to return an `Iterable`? Maybe
> >> returning an `Iterator` would make more sense? Or a List? Note that the
> >> order of emits matters, thus returning a generic `Collection` would not
> >> seem to be appropriate.
> >>
> >> Can you elaborate on the advantages to use `Iterable` compared to the
> >> other options?
> >>
> >>
> >>
> >> -Matthias
> >>
> >> On 4/11/19 2:09 AM, Patrik Kleindl wrote:
> >> > Hi everyone,
> >> >
> >> > I would like to start the discussion on this small enhancement of
> >> > the TopologyTestDriver.
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-451%3A+Make+TopologyTestDriver+output+iterable
> >> >
> >> > Pull request is available at
> https://github.com/apache/kafka/pull/6556
> >> >
> >> > Any feedback is welcome
> >> >
> >> > best regards
> >> >
> >> > Patrik
> >> >
> >>
> >>
>


[jira] [Created] (KAFKA-8256) Replace Heartbeat request/response with automated protocol

2019-04-18 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8256:
-

 Summary: Replace Heartbeat request/response with automated protocol
 Key: KAFKA-8256
 URL: https://issues.apache.org/jira/browse/KAFKA-8256
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8255) Replica fetcher thread exits with OffsetOutOfRangeException

2019-04-18 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8255:
--

 Summary: Replica fetcher thread exits with 
OffsetOutOfRangeException
 Key: KAFKA-8255
 URL: https://issues.apache.org/jira/browse/KAFKA-8255
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Colin P. McCabe


Replica fetcher threads can exit with OffsetOutOfRangeException when the log 
start offset has advanced beyond the high water mark on the fetching broker.

Example stack trace:
{code}
org.apache.kafka.common.KafkaException: Error processing data for partition 
__consumer_offsets-46 offset 18761
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2$$anonfun$apply$mcV$sp$3$$anonfun$apply$10.apply(AbstractFetcherThread.scala:335)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2$$anonfun$apply$mcV$sp$3$$anonfun$apply$10.apply(AbstractFetcherThread.scala:294)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2$$anonfun$apply$mcV$sp$3.apply(AbstractFetcherThread.scala:294)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2$$anonfun$apply$mcV$sp$3.apply(AbstractFetcherThread.scala:293)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:293)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2.apply(AbstractFetcherThread.scala:293)
at 
kafka.server.AbstractFetcherThread$$anonfun$kafka$server$AbstractFetcherThread$$processFetchRequest$2.apply(AbstractFetcherThread.scala:293)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at 
kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:292)
at 
kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:132)
at 
kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:131)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:131)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: org.apache.kafka.common.errors.OffsetOutOfRangeException: Cannot 
increment the log start offset to 4808819 of partition __consumer_offsets-46 
since it is larger than the high watermark 18761
[2019-04-16 14:16:42,257] INFO [ReplicaFetcher replicaId=1001, leaderId=1003, 
fetcherId=0] Stopped (kafka.server.ReplicaFetcherThread)
{code}

It seems that we should not terminate the replica fetcher thread in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #446

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7026; Sticky Assignor Partition Assignment Improvement (KIP-341)

[github] MINOR: Ensure producer state append exceptions are useful (#6591)

[github] KAFKA-7866; Ensure no duplicate offsets after txn index append failure

--
[...truncated 1.35 MB...]

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit PASSED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions STARTED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks PASSED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle STARTED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle PASSED

org.apache.kafka.trogdor.workload.TimeIntervalTransactionsGeneratorTest > 
testCommitsTransactionAfterIntervalPasses STARTED

org.apache.kafka.trogdor.workload.TimeIntervalTransactionsGeneratorTest > 
testCommitsTransactionAfterIntervalPasses PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithSomePartitions STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithSomePartitions PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithNoPartitions STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithNoPartitions PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testInvalidTopicNameRaisesExceptionInMaterialize STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testInvalidTopicNameRaisesExceptionInMaterialize PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3567

2019-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7026; Sticky Assignor Partition Assignment Improvement (KIP-341)

[github] MINOR: Ensure producer state append exceptions are useful (#6591)

[github] KAFKA-7866; Ensure no duplicate offsets after txn index append failure

--
[...truncated 1.35 MB...]
org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 

[jira] [Resolved] (KAFKA-7652) Kafka Streams Session store performance degradation from 0.10.2.2 to 0.11.0.0

2019-04-18 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7652.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

It should be fixed by the latest PR (details about the performance benchmarks 
can be found inside the PR).

> Kafka Streams Session store performance degradation from 0.10.2.2 to 0.11.0.0
> -
>
> Key: KAFKA-7652
> URL: https://issues.apache.org/jira/browse/KAFKA-7652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1, 0.11.0.2, 0.11.0.3, 1.1.1, 2.0.0, 
> 2.0.1
>Reporter: Jonathan Gordon
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: kip
> Fix For: 2.3.0
>
> Attachments: 0.10.2.1-NamedCache.txt, 2.2.0-rc0_b-NamedCache.txt, 
> 2.3.0-7652-NamedCache.txt, kafka_10_2_1_flushes.txt, kafka_11_0_3_flushes.txt
>
>
> I'm creating this issue in response to [~guozhang]'s request on the mailing 
> list:
> [https://lists.apache.org/thread.html/97d620f4fd76be070ca4e2c70e2fda53cafe051e8fc4505dbcca0321@%3Cusers.kafka.apache.org%3E]
> We are attempting to upgrade our Kafka Streams application from 0.10.2.1 but 
> are experiencing a severe performance degradation. The most CPU time 
> seems to be spent retrieving from the local cache. Here's an example thread 
> profile with 0.11.0.0:
> [https://i.imgur.com/l5VEsC2.png]
> When things are running smoothly we're gated by retrieving from the state 
> store with acceptable performance. Here's an example thread profile with 
> 0.10.2.1:
> [https://i.imgur.com/IHxC2cZ.png]
> Some investigation reveals that it appears we're performing about 3 orders 
> of magnitude more lookups on the NamedCache over a comparable time period. I've 
> attached logs of the NamedCache flush logs for 0.10.2.1 and 0.11.0.3.
> We're using session windows and have the app configured for 
> commit.interval.ms = 30 * 1000 and cache.max.bytes.buffering = 10485760
> I'm happy to share more details if they would be helpful. Also happy to run 
> tests on our data.
> I also found this issue, which seems like it may be related:
> https://issues.apache.org/jira/browse/KAFKA-4904
>  
> KIP-420: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-420%3A+Add+Single+Value+Fetch+in+Session+Stores]
>  
>  
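
For reference, the single-value fetch added by KIP-420 looks roughly like this
(signature is my reading of the KIP and may differ from the final API):

{code}
import org.apache.kafka.streams.state.ReadOnlySessionStore;

public class SessionFetchSketch {
    // Point lookup of one session's aggregate instead of iterating a range of
    // sessions; parameter names/types per my reading of KIP-420, not verified.
    static Long fetchOne(ReadOnlySessionStore<String, Long> store,
                         String key, long sessionStartTime, long sessionEndTime) {
        return store.fetchSession(key, sessionStartTime, sessionEndTime);
    }
}
{code}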



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7026) Sticky assignor could assign a partition to multiple consumers (KIP-341)

2019-04-18 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7026.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Sticky assignor could assign a partition to multiple consumers (KIP-341)
> 
>
> Key: KAFKA-7026
> URL: https://issues.apache.org/jira/browse/KAFKA-7026
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Major
>  Labels: kip
> Fix For: 2.3.0
>
>
> In the following scenario sticky assignor assigns a topic partition to two 
> consumers in the group:
>  # Create a topic {{test}} with a single partition
>  # Start consumer {{c1}} in group {{sticky-group}} ({{c1}} becomes group 
> leader and gets {{test-0}})
>  # Start consumer {{c2}}  in group {{sticky-group}} ({{c1}} holds onto 
> {{test-0}}, {{c2}} does not get any partition) 
>  # Pause {{c1}} (e.g. using Java debugger) ({{c2}} becomes leader and takes 
> over {{test-0}}, {{c1}} leaves the group)
>  # Resume {{c1}}
> At this point both {{c1}} and {{c2}} will have {{test-0}} assigned to them.
>  
> The reason is {{c1}} still has kept its previous assignment ({{test-0}}) from 
> the last assignment it received from the leader (itself) and did not get the 
> next round of assignments (when {{c2}} became leader) because it was paused. 
> Both {{c1}} and {{c2}} enter the rebalance supplying {{test-0}} as their 
> existing assignment. The sticky assignor code does not currently check and 
> avoid this duplication.
>  
> Note: This issue was originally reported on 
> [StackOverflow|https://stackoverflow.com/questions/50761842/kafka-stickyassignor-breaking-delivery-to-single-consumer-in-the-group].
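
For anyone reproducing this, the consumers only need the sticky assignor
configured; a minimal sketch using the standard client API (bootstrap address
illustrative):

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.StickyAssignor;

public class StickyGroupSketch {
    // Each member of "sticky-group" in the scenario above is created like this.
    static KafkaConsumer<String, String> newMember() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sticky-group");
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                StickyAssignor.class.getName());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
}
{code}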



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-18 Thread John Roesler (JIRA)
John Roesler created KAFKA-8254:
---

 Summary: Suppress incorrectly passes a null topic to the serdes
 Key: KAFKA-8254
 URL: https://issues.apache.org/jira/browse/KAFKA-8254
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1, 2.2.0, 2.1.0
Reporter: John Roesler
 Fix For: 2.3.0, 2.1.2, 2.2.1


For example, in KTableSuppressProcessor, we do:
{noformat}
final Bytes serializedKey = Bytes.wrap(keySerde.serializer().serialize(null, key));
{noformat}

This violates the contract of Serializer (and Deserializer), and breaks 
integration with known Serde implementations.
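
A sketch of the shape of a fix (the topic name shown just follows the usual
Streams changelog naming convention; the actual fix may derive it differently):
pass a real topic name so serializers that dispatch on topic keep working.

{code}
// Illustrative only: derive a non-null topic name for the serde call.
final String changelogTopic = applicationId + "-" + storeName + "-changelog";
final Bytes serializedKey = Bytes.wrap(keySerde.serializer().serialize(changelogTopic, key));
{code}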



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8253) Unrecoverable state when a broker's INPUT access is blocked (Zombie broker)

2019-04-18 Thread Gowtham Gutha (JIRA)
Gowtham Gutha created KAFKA-8253:


 Summary: Unrecoverable state when a broker's INPUT access is 
blocked (Zombie broker)
 Key: KAFKA-8253
 URL: https://issues.apache.org/jira/browse/KAFKA-8253
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.1
Reporter: Gowtham Gutha


I carried out a test as mentioned in this particular SO question.

[https://stackoverflow.com/questions/55706589/what-happens-if-the-leader-is-not-dead-but-unable-to-receive-messages-in-kafka]

*Gist of the test:*

A broker's INPUT access is blocked. So it is not able to receive any messages.

But it can still send heartbeats to ZK, so a leader election will not 
happen.

So any message produced to the partition led by this _zombie_ broker is never 
actually written, leaving the system in an unrecoverable state.

 

*Possible resolution:*

There should be two-way communication: if a broker loses its INPUT access, 
ZK MUST learn of it by sending ping messages to the brokers.

If there is no response from the broker, elect a new leader, since a broker 
that ZK cannot ping is as good as dead for its purpose.
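
A minimal sketch of the kind of probe this implies (illustrative only; not an
existing Kafka or ZooKeeper feature):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ListenerProbeSketch {
    // Active reachability check against a broker's listener port.
    static boolean canReachListener(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;  // INPUT path accepts connections
        } catch (IOException e) {
            return false; // unreachable: candidate for leader election per the proposal
        }
    }
}
{code}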

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)