Fwd: [DISCUSS] KIP-333 Consider a faster form of rebalancing

2019-06-03 Thread Matthias J. Sax
Just cycling back to this older KIP discussion.

I still have some concerns about the proposal, and there was no activity
for a long time. I am wondering if there is still interest in this KIP,
or if we should discard it?

-Matthias


 Forwarded Message 
Subject: Re: [DISCUSS] KIP-333 Consider a faster form of rebalancing
Date: Sun, 30 Sep 2018 12:01:14 -0700
From: Matthias J. Sax 
Organization: Confluent Inc
To: dev@kafka.apache.org

What is the status of this KIP?

I was just catching up, and I agree with Becket that this seems like a very
special use case that might not be generic enough to be part of Kafka
itself. Also, for a regular rebalance, as Becket pointed out, catching up
should not take very long. Only for longer offline times might this be
an issue -- however, in this case, either the whole consumer group is
offline, or your timeouts (max.poll.interval.ms and session.timeout.ms)
are set too high.

I am also wondering how consecutive failures would be handled. Assume
you have 2 consumers: the "regular" consumer that calls #seekToEnd() and
the "catch-up" consumer.

 - What happens if any (or both) consumers die?
 - How do you track the offsets of both consumers?
 - How can this be integrated with EOS?

To me, it seems that you might want to implement this as a custom
solution via re-balance callbacks that you can register on a consumer.
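
For example, a rough and untested sketch of that direction (it assumes a
KafkaConsumer `consumer` and a subscription list `topics` are in scope;
startCatchUpConsumer() is a hypothetical helper, not an existing API):

import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

consumer.subscribe(topics, new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            // Last committed offset before the crash (null if nothing committed).
            OffsetAndMetadata committed = consumer.committed(tp);
            long catchUpStart = committed != null ? committed.offset() : 0L;

            // Jump the "regular" consumer to the log end ...
            consumer.seekToEnd(Collections.singleton(tp));
            long catchUpEnd = consumer.position(tp);

            // ... and hand the gap [catchUpStart, catchUpEnd) to a second
            // "catch-up" consumer, so the two offset ranges never overlap.
            startCatchUpConsumer(tp, catchUpStart, catchUpEnd); // hypothetical
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
});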


-Matthias

On 8/7/18 8:05 PM, Becket Qin wrote:
> Hi Richard,
> 
> Sorry for the late response. As discussed in the other offline thread, I am
> still not sure if this use case is common enough to have a built-in
> rebalance policy.
> 
> I think usually the time to detect the consumer failure and rebalance would
> be longer than the catch-up time, as the catch-up usually happens in
> parallel across all the other consumers in a group. If there is a
> bottleneck consuming a single hot partition, this problem will exist
> regardless of rebalance. In any case, the approach of having an ad-hoc
> hidden consumer seems a little hacky.
> 
> Thanks,
> 
> Jiangjie (Becket) Qin
> 
> On Wed, Jul 18, 2018 at 2:39 PM, Richard Yu 
> wrote:
> 
>> Hi Becket,
>> I made some changes and clarified the motivation for this KIP. :) It
>> should be easier to understand now since I included a diagram.
>> Thanks,
>> Richard Yu
>> On Tuesday, July 17, 2018, 4:38:11 PM GMT+8, Richard Yu
>>  wrote:
>>
>>   Hi Becket,
>> Thanks for reviewing this KIP. :)
>> I probably did not explicitly state what we were trying to avoid by
>> introducing this mode. As mentioned in the KIP, there is an offset lag
>> which could result after a crash. Our main goal is to avoid this lag
>> (i.e. the latency in terms of time that results from the crash, not to
>> reduce the number of records reprocessed).
>> I could provide a couple of diagrams of what I am envisioning, because
>> some points in my KIP might otherwise be hard to grasp (I will also
>> include some diagrams to give you a better idea of a use case). As for
>> your questions, here are a couple of answers:
>> 1. Yes, the two consumers will in fact be processing in parallel. We do
>> this because we want to accelerate the processing speed of the records to
>> make up for the latency caused by the crash.
>> 2. After the recovery point, records will not be processed twice. Let me
>> describe the scenario I was envisioning: we would let the consumer that
>> crashed seek to the end of the log using KafkaConsumer#seekToEnd.
>> Meanwhile, a secondary consumer will start processing from the latest
>> checkpointed offset and continue until it has hit the place where the
>> first consumer that crashed began processing after seekToEnd was first
>> called. Since the consumer that crashed skipped from the recovery point to
>> the end of the log, the intermediate offsets will be processed only by the
>> secondary consumer. So it is important to note that the offset ranges which
>> the two threads process will not overlap. (This is important as it prevents
>> offsets from being processed more than once)
>>
>> 3. As for the committed offsets, the possibility of rewinding is not
>> likely. If my understanding is correct, you are probably worried that,
>> after the crash, offsets that have already been committed will be
>> committed again. The current design prevents that from happening, as the
>> policy of where to start processing after a crash is universal across all
>> Consumer instances -- we will begin processing from the latest offset
>> committed.
>>
>> I hope that you at least got some of your questions answered. I will
>> update the KIP soon, so please stay tuned.
>>
>> Thanks,
>> Richard Yu
>> On Tuesday, July 17, 2018, 2:14:07 PM GMT+8, Becket Qin <
>> becket@gmail.com> wrote:
>>
>>  Hi Richard,
>>
>> Thanks for the KIP. I am a little confused about what is proposed. The KIP
>> suggests that after recovery from a consumer crash, there will be two
>> consumers consuming from the same partition. One consumes starting from 

Jenkins build is back to normal : kafka-2.3-jdk8 #36

2019-06-03 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8477) Cannot consume and request metadata for __consumer_offsets topic in Kafka 2.2

2019-06-03 Thread Mike Mintz (JIRA)
Mike Mintz created KAFKA-8477:
-

 Summary: Cannot consume and request metadata for 
__consumer_offsets topic in Kafka 2.2
 Key: KAFKA-8477
 URL: https://issues.apache.org/jira/browse/KAFKA-8477
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.2.0
Reporter: Mike Mintz
 Attachments: kafka-2.2.0-consumer-offset-metadata-bug-master.zip, 
logs.txt

We have an application that consumes from the __consumer_offsets topic to 
report lag metrics. When we upgraded its kafka-clients dependency from 2.0.1 to 
2.2.0, it crashed with:
{noformat}
Exception in thread "main" org.apache.kafka.common.errors.TimeoutException: 
Failed to get offsets by times in 30001ms
{noformat}
I created a minimal reproduction at 
[https://github.com/mikemintz/kafka-2.2.0-consumer-offset-metadata-bug] and I'm 
uploading a zip of this code for posterity.

In particular, the behavior happens when I call KafkaConsumer.assign(), then 
poll(), then endOffsets(). This behavior only happens for the 
__consumer_offsets topic. It also only happens on the Kafka cluster that we run 
in production, which runs Kafka 2.2.0. The error does not occur on a freshly 
created Kafka cluster, and I can't get it to reproduce with EmbeddedKafka.
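
The call sequence looks roughly like this (a sketch; the full repro is in the
linked repo, and the bootstrap/group config is elided):
{noformat}
KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

List<TopicPartition> partitions = new ArrayList<>();
for (PartitionInfo pi : consumer.partitionsFor("__consumer_offsets"))
    partitions.add(new TopicPartition(pi.topic(), pi.partition()));

consumer.assign(partitions);            // 1. assign
consumer.poll(Duration.ofSeconds(1));   // 2. poll
consumer.endOffsets(partitions);        // 3. endOffsets -> loops on metadata,
                                        //    then times out after ~30s
{noformat}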

It works fine with both Kafka 2.0.1 and Kafka 2.1.1, so something broke between 
2.1.1 and 2.2.0. Based on the 2.2.0 changelog and the client log messages 
(attached), it looks like it may have been introduced in KAFKA-7738 (cc 
[~mumrah]). It gets in a loop, repeating the following block of log messages:
{noformat}
2019-06-03 23:24:15 DEBUG NetworkClient:1073 - [Consumer 
clientId=test.mikemintz.lag-tracker-reproduce, 
groupId=test.mikemintz.lag-tracker-reproduce] Sending metadata request 
(type=MetadataRequest, topics=__consumer_offsets) to node REDACTED:9094 (id: 
2134 rack: us-west-2b)
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5862 to 
5862 for partition __consumer_offsets-0
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6040 to 
6040 for partition __consumer_offsets-10
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6008 to 
6008 for partition __consumer_offsets-20
2019-06-03 23:24:15 DEBUG Metadata:208 - Not replacing existing epoch 6153 with 
new epoch 6152
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5652 to 
5652 for partition __consumer_offsets-30
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6081 to 
6081 for partition __consumer_offsets-39
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5629 to 
5629 for partition __consumer_offsets-9
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5983 to 
5983 for partition __consumer_offsets-11
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5896 to 
5896 for partition __consumer_offsets-31
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5278 to 
5278 for partition __consumer_offsets-13
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6026 to 
6026 for partition __consumer_offsets-18
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5608 to 
5608 for partition __consumer_offsets-22
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6025 to 
6025 for partition __consumer_offsets-32
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5685 to 
5685 for partition __consumer_offsets-8
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5730 to 
5730 for partition __consumer_offsets-43
2019-06-03 23:24:15 DEBUG Metadata:208 - Not replacing existing epoch 5957 with 
new epoch 5956
2019-06-03 23:24:15 DEBUG Metadata:208 - Not replacing existing epoch 6047 with 
new epoch 6046
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6090 to 
6090 for partition __consumer_offsets-1
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5821 to 
5821 for partition __consumer_offsets-6
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5909 to 
5909 for partition __consumer_offsets-41
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5805 to 
5805 for partition __consumer_offsets-27
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5670 to 
5670 for partition __consumer_offsets-48
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6220 to 
6220 for partition __consumer_offsets-5
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5596 to 
5596 for partition __consumer_offsets-15
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5896 to 
5896 for partition __consumer_offsets-35
2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5858 to 
5858 for 

Jenkins build is back to normal : kafka-trunk-jdk11 #597

2019-06-03 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.3-jdk8 #35

2019-06-03 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-8426; Fix for keeping the ConfigProvider configs 
consistent with

[rhauch] KAFKA-8449: Restart tasks on reconfiguration under incremental

--
[...truncated 2.94 MB...]

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest 

[jira] [Resolved] (KAFKA-8473) Adjust Connect system tests for incremental cooperative rebalancing and enable them for both eager and incremental cooperative rebalancing

2019-06-03 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8473.
--
Resolution: Fixed
  Reviewer: Randall Hauch

> Adjust Connect system tests for incremental cooperative rebalancing and 
> enable them for both eager and incremental cooperative rebalancing
> --
>
> Key: KAFKA-8473
> URL: https://issues.apache.org/jira/browse/KAFKA-8473
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Critical
> Fix For: 2.3
>
>
>  
> {{connect.protocol=compatible}} that enables incremental cooperative 
> rebalancing if all workers are in that version is now the default option. 
> System tests should be parameterized to keep running for the eager 
> rebalancing protocol as well to make sure no regressions have happened. 
> Also, for the incremental cooperative protocol, a few tests need to be 
> adjusted to have a lower rebalancing delay (the delay that is applied to wait 
> for returning workers) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-474: To deprecate WindowStore#put(key, value)

2019-06-03 Thread Guozhang Wang
+1 (binding).

On Sat, Jun 1, 2019 at 3:19 PM Matthias J. Sax 
wrote:

> +1 (binding)
>
> On 5/31/19 10:58 PM, Dongjin Lee wrote:
> > +1 (non-binding).
> >
> > Thanks,
> > Dongjin
> >
> > On Sat, Jun 1, 2019 at 2:45 PM Boyang Chen  wrote:
> >
> >> Thanks omkar for taking the initiative, +1 (non-binding).
> >>
> >> 
> >> From: omkar mestry 
> >> Sent: Saturday, June 1, 2019 1:40 PM
> >> To: dev@kafka.apache.org
> >> Subject: [VOTE] KIP-474: To deprecate WindowStore#put(key, value)
> >>
> >> Hi all,
> >>
> >> Since we seem to have an agreement in the discussion I would like to
> >> start the vote on KIP-474.
> >>
> >> KIP 474 :-
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545
> >>
> >> Thanks & Regards
> >> Omkar Mestry
> >>
> >
> >
>
>

-- 
-- Guozhang


Re: [DISCUSS] KIP-474: To deprecate WindowStore#put(key, value)

2019-06-03 Thread Guozhang Wang
I've made a pass on the KIP and it looks good to me as well. I think we can
start a voting thread for it.
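
For anyone skimming the thread, the change in a nutshell (a sketch based on
the KIP text, not the final API wording):

    // Deprecated by KIP-474: implicitly uses the current record's timestamp.
    windowStore.put(key, value);

    // Preferred: pass the window start timestamp explicitly.
    windowStore.put(key, value, windowStartTimestamp);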

On Thu, May 30, 2019 at 10:27 PM Matthias J. Sax 
wrote:

> Thanks for the KIP Omkar.
>
> I think this is rather uncontroversial, and I also support this KIP. I
> think you can start a VOTE. People can still chime in on the discuss
> thread if they have any concerns.
>
>
> -Matthias
>
> On 5/27/19 11:50 PM, Dongjin Lee wrote:
> > Hi Omkar,
> >
> > Looks good to me. Thanks!
> >
> > Is there anyone who has some comments about the KIP?
> >
> > Thanks,
> > Dongjin
> >
> > On Tue, May 28, 2019 at 3:24 PM omkar mestry 
> wrote:
> >
> >> Hi Dongjin,
> >>
> >> I have updated the KIP please have a look and provide feedback on it.
> >>
> >> KIP :-
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545
> >>
> >> Thanks & Regards
> >> Omkar Mestry
> >>
> >> On Mon, May 27, 2019 at 6:25 PM Dongjin Lee  wrote:
> >>
> >>> Hi Omkar,
> >>>
> >>> Thanks for the KIP. However, the discussion thread should include a
> >>> link to the KIP document. Since you omitted it, here is the link.
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545
> >>>
> >>> As far as I understand, the point of the KIP is that the current API
> >>> can result in inconsistency and should be deprecated and, finally,
> >>> removed. Right? Here are some comments on the KIP.
> >>>
> >>> *1. Minor corrections*
> >>>
> >>> Since the KIP proposes deprecation of an API, not actually removing it,
> >> it
> >>> would be better to correct the following sentences:
> >>>
> >>> - "Therefore by removing the method put(key, value), we can prevent
> >>> inconsistency." → "Therefore by deprecating (and finally removing) the
> >>> method put(key, value), we can prevent inconsistency."
> >>> - "Also, there are tests which are needed to be updated after removal
> of
> >>> the specified method." → "Also, there are tests which are needed to be
> >>> updated after deprecation of the specified method."
> >>>
> >>> *2. About 'Motivation' section*
> >>>
> >>> I think the motivation section can be more clear by referring to the
> risk
> >>> of the current API. How do you think?
> >>>
> >>> "... Therefore by ..."
> >>>
> >>> → "... This constraint makes WindowStore error prone. Therefore by ..."
> >>>
> >>> *3. About 'Rejected Alternatives' section*
> >>>
> >>> This section should state why these alternatives were rejected. How
> >>> about this?
> >>>
> >>> "Since this API can be called by the user[^1][^2], updating the method
> >> can
> >>> break the code; By this reason, this approach is not feasible."
> >>>
> >>> Regards,
> >>> Dongjin
> >>>
> >>> [^1]:
> >>>
> >>>
> >>
> https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/state/Stores.html
> >>> [^2]:
> >>>
> >>>
> >>
> https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/state/WindowStore.html
> >>>
> >>> On Sun, May 26, 2019 at 3:18 PM omkar mestry 
> >>> wrote:
> >>>
>  We propose to deprecate the WindowStore#put(key, value), as it does
> not
>  have a timestamp as a parameter. The window store requires a timestamp
> >> to
>  map the key to a window frame. This method uses the current record
>  timestamp (as specified in the description of the method). There is a
> >>> method
>  present with a timestamp as a parameter which can be used instead.
> 
> >>>
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>> github: github.com/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> speakerdeck: speakerdeck.com/dongjin
> >>>
> >>
> >
> >
>
>

-- 
-- Guozhang


[jira] [Resolved] (KAFKA-8383) Write integration test for electLeaders

2019-06-03 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8383.

Resolution: Fixed

> Write integration test for electLeaders
> ---
>
> Key: KAFKA-8383
> URL: https://issues.apache.org/jira/browse/KAFKA-8383
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Critical
>
> Add tests for electLeaders in AdminClientIntegrationTest. Some cases that we 
> should cover:
> # Topic doesn't exists => UNKNOWN_TOPIC_OR_PARTITION
> # Unclean/preferred election not needed => ELECTION_NOT_NEEDED
> # Unclean/preferred election not possible => ELIGIBLE_LEADERS_NOT_AVAILABLE
> # Election succeeded => NONE
> # All partition (partitions==null) => NONE for all performed election. The 
> result map should only contain NONE.
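
A sketch of the call under test (API shape as currently proposed; the final
patch may differ):
{noformat}
Set<TopicPartition> partitions = Collections.singleton(new TopicPartition("foo", 0));
ElectLeadersResult result = admin.electLeaders(ElectionType.PREFERRED, partitions);
// result.partitions() yields a map from each partition to an Optional error:
// UNKNOWN_TOPIC_OR_PARTITION, ELECTION_NOT_NEEDED, ... or empty on success.
result.partitions().get();
{noformat}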



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8476) Kafka 2.2.1 distribution contains JAX-RS API twice

2019-06-03 Thread Gunnar Morling (JIRA)
Gunnar Morling created KAFKA-8476:
-

 Summary: Kafka 2.2.1 distribution contains JAX-RS API twice
 Key: KAFKA-8476
 URL: https://issues.apache.org/jira/browse/KAFKA-8476
 Project: Kafka
  Issue Type: Bug
Reporter: Gunnar Morling


In kafka_2.12-2.2.1.tgz there are both javax.ws.rs-api-2.1.jar and 
javax.ws.rs-api-2.1.1.jar. I reckon only one should be there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #596

2019-06-03 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8449: Restart tasks on reconfiguration under incremental

--
[...truncated 2.49 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp 

[jira] [Resolved] (KAFKA-8475) Temporarily restore SslFactory.sslContext() helper

2019-06-03 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-8475.

   Resolution: Fixed
Fix Version/s: 2.3

> Temporarily restore SslFactory.sslContext() helper
> --
>
> Key: KAFKA-8475
> URL: https://issues.apache.org/jira/browse/KAFKA-8475
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.3
>
>
> Temporarily restore the SslFactory.sslContext() function, which some 
> connectors use.  This function is not a public API and it will be removed 
> eventually.  For now, we will mark it as deprecated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8475) Temporarily restore SslFactory.sslContext() helper

2019-06-03 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8475:
--

 Summary: Temporarily restore SslFactory.sslContext() helper
 Key: KAFKA-8475
 URL: https://issues.apache.org/jira/browse/KAFKA-8475
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Randall Hauch


Temporarily restore the SslFactory.sslContext() function, which some connectors 
use.  This function is not a public API and it will be removed eventually.  For 
now, we will mark it as deprecated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.3-jdk8 #34

2019-06-03 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Reordering the props modification with configs 
construction

[rajinisivaram] KAFKA-8425: Fix for correctly handling immutable maps (KIP-421 
bug)

--
[...truncated 2.86 MB...]

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED


[jira] [Resolved] (KAFKA-8469) Named Suppress Operator Needs to increment Name Counter

2019-06-03 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8469.

Resolution: Not A Problem

> Named Suppress Operator Needs to increment Name Counter
> ---
>
> Key: KAFKA-8469
> URL: https://issues.apache.org/jira/browse/KAFKA-8469
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> The {{KTable#suppress}} operator can take a user-supplied name for the 
> operator.  If the user does supply a name, the code should still increment 
> the name counter index to ensure any downstream operators that use the 
> generated name don't change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8474) Improve configuration layout on website

2019-06-03 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8474:
-

 Summary: Improve configuration layout on website
 Key: KAFKA-8474
 URL: https://issues.apache.org/jira/browse/KAFKA-8474
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison


The descriptions of configurations on the website are really hard to read due to 
the narrow columns:
!Screenshot_2019-06-03 Apache Kafka(1).png!

Let's get rid of the tables so we can use the full page width. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8473) Adjust Connect system tests for incremental cooperative rebalancing and enable them for both eager and incremental cooperative rebalancing

2019-06-03 Thread Konstantine Karantasis (JIRA)
Konstantine Karantasis created KAFKA-8473:
-

 Summary: Adjust Connect system tests for incremental cooperative 
rebalancing and enable them for both eager and incremental cooperative 
rebalancing
 Key: KAFKA-8473
 URL: https://issues.apache.org/jira/browse/KAFKA-8473
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.3
Reporter: Konstantine Karantasis
Assignee: Konstantine Karantasis
 Fix For: 2.3


 

{{connect.protocol=compatible}} that enables incremental cooperative 
rebalancing if all workers are in that version is now the default option. 

System tests should be parameterized to keep running for the eager rebalancing 
protocol as well to make sure no regressions have happened. 

Also, for the incremental cooperative protocol, a few tests need to be adjusted 
to have a lower rebalancing delay (the delay that is applied to wait for 
returning workers) 
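
For reference, the relevant worker settings look roughly like this (the values
are illustrative, not the shipped defaults):
{noformat}
# connect-distributed.properties
connect.protocol=compatible             # incremental cooperative when all workers support it
scheduled.rebalance.max.delay.ms=60000  # delay applied while waiting for returning workers
{noformat}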



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Jonathan Santilli
That's fantastic! thanks a lot Vahid for managing the release.

--
Jonathan




On Mon, Jun 3, 2019 at 5:18 PM Mickael Maison 
wrote:

> Thank you Vahid
>
> On Mon, Jun 3, 2019 at 5:12 PM Wladimir Schmidt 
> wrote:
> >
> > Thanks Vahid!
> >
> > On Mon, Jun 3, 2019, 16:23 Vahid Hashemian  wrote:
> >
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 2.2.1
> > >
> > > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > > release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> > >
> > > You can download the source and binary release from:
> > > https://kafka.apache.org/downloads#2.2.1
> > >
> > >
> > >
> ---
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > > ** The Producer API allows an application to publish a stream of records
> to
> > > one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> output
> > > stream to one or more output topics, effectively transforming the input
> > > streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> capture
> > > every change to a table.
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> to the
> > > streams of data.
> > >
> > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >
> > > A big thank you to the following 30 contributors to this release!
> > >
> > > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil
> Shah,
> > > Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> > > Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh
> Nandakumar,
> > > Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker,
> pkleindl,
> > > Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian,
> Victoria
> > > Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
> > >
> > > We welcome your help and feedback. For more information on how to
> report
> > > problems, and to get involved, visit the project website at
> > > https://kafka.apache.org/
> > >
> > > Thank you!
> > >
> > > Regards,
> > > --Vahid Hashemian
> > >
>


-- 
Santilli Jonathan


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Mickael Maison
Thank you Vahid

On Mon, Jun 3, 2019 at 5:12 PM Wladimir Schmidt  wrote:
>
> Thanks Vahid!
>
> On Mon, Jun 3, 2019, 16:23 Vahid Hashemian  wrote:
>
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 2.2.1
> >
> > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > release can be found in the release notes:
> > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> >
> > You can download the source and binary release from:
> > https://kafka.apache.org/downloads#2.2.1
> >
> >
> > ---
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an output
> > stream to one or more output topics, effectively transforming the input
> > streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might capture
> > every change to a table.
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react to the
> > streams of data.
> >
> > Apache Kafka is in use at large and small companies worldwide, including
> > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >
> > A big thank you to the following 30 contributors to this release!
> >
> > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil Shah,
> > Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> > Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
> > Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker, pkleindl,
> > Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian, Victoria
> > Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
> >
> > We welcome your help and feedback. For more information on how to report
> > problems, and to get involved, visit the project website at
> > https://kafka.apache.org/
> >
> > Thank you!
> >
> > Regards,
> > --Vahid Hashemian
> >


[jira] [Created] (KAFKA-8472) Use composition for better isolation of fetcher logic

2019-06-03 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8472:
--

 Summary: Use composition for better isolation of fetcher logic
 Key: KAFKA-8472
 URL: https://issues.apache.org/jira/browse/KAFKA-8472
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently the log dir fetcher and the replica fetcher extend from 
`AbstractFetcherThread` even though the logic they implement is independent of 
the follower state machine. We can simplify testing and maintain a cleaner 
separation of concerns by pulling the behavior that needs to be customized into 
a separate trait. So the `FetcherThread` implementation can focus on the state 
machine and the new trait focuses on fetch mechanics (i.e. how to pull data 
from the source log) while avoiding the pitfalls of class inheritance.
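
A sketch of the shape this could take (all names are made up for illustration,
not the actual classes):
{noformat}
// The fetch mechanics live behind a small interface ...
interface FetchOperations {
    // How to pull records from the source log (replica, log dir, or a test stub).
    byte[] fetch(String topic, int partition, long offset);
}

// ... while the thread owns only the state machine and delegates by composition.
final class FetcherThread implements Runnable {
    private final FetchOperations ops;  // injected, not inherited

    FetcherThread(FetchOperations ops) {
        this.ops = ops;
    }

    @Override
    public void run() {
        // Drive partition states; the fetch mechanics are swappable for testing.
        byte[] records = ops.fetch("some-topic", 0, 0L);
        // ... apply records, advance fetch offsets, handle truncation ...
    }
}
{noformat}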



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Wladimir Schmidt
Thanks Vahid!

On Mon, Jun 3, 2019, 16:23 Vahid Hashemian  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.2.1
>
> This is a bugfix release for Kafka 2.2.0. All of the changes in this
> release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
>
> You can download the source and binary release from:
> https://kafka.apache.org/downloads#2.2.1
>
>
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an output
> stream to one or more output topics, effectively transforming the input
> streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might capture
> every change to a table.
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react to the
> streams of data.
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 30 contributors to this release!
>
> Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil Shah,
> Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
> Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker, pkleindl,
> Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian, Victoria
> Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
>
> We welcome your help and feedback. For more information on how to report
> problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
> Regards,
> --Vahid Hashemian
>


Jenkins build is back to normal : kafka-trunk-jdk8 #3697

2019-06-03 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Guozhang Wang
Thank you Vahid!

On Mon, Jun 3, 2019 at 7:49 AM Ismael Juma  wrote:

> Thanks for managing the release Vahid!
>
> Ismael
>
> On Mon, Jun 3, 2019 at 7:23 AM Vahid Hashemian  wrote:
>
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 2.2.1
> >
> > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > release can be found in the release notes:
> > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> >
> > You can download the source and binary release from:
> > https://kafka.apache.org/downloads#2.2.1
> >
> >
> >
> ---
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an output
> > stream to one or more output topics, effectively transforming the input
> > streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might capture
> > every change to a table.
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react to
> the
> > streams of data.
> >
> > Apache Kafka is in use at large and small companies worldwide, including
> > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >
> > A big thank you to the following 30 contributors to this release!
> >
> > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil
> Shah,
> > Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> > Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
> > Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker,
> pkleindl,
> > Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian,
> Victoria
> > Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
> >
> > We welcome your help and feedback. For more information on how to report
> > problems, and to get involved, visit the project website at
> > https://kafka.apache.org/
> >
> > Thank you!
> >
> > Regards,
> > --Vahid Hashemian
> >
>


-- 
-- Guozhang


[jira] [Created] (KAFKA-8471) Replace control requests/responses with automated protocol

2019-06-03 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-8471:
--

 Summary: Replace control requests/responses with automated protocol
 Key: KAFKA-8471
 URL: https://issues.apache.org/jira/browse/KAFKA-8471
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
Assignee: Ismael Juma






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8470) State change logs should not be in TRACE level

2019-06-03 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-8470:
--

 Summary: State change logs should not be in TRACE level
 Key: KAFKA-8470
 URL: https://issues.apache.org/jira/browse/KAFKA-8470
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


The StateChange logger in Kafka should not be logging its state changes at 
TRACE level.
We consider these changes very useful in debugging, and we additionally 
configure that logger to log at TRACE level by default.

Since we consider these logs important enough to give them their own logger at a 
separate log level, why don't we change them to INFO and have the logger 
use the defaults?
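
For context, the shipped config/log4j.properties pins that logger to TRACE
along these lines (paraphrased, not copied verbatim):
{noformat}
# current default: a dedicated state-change logger forced to TRACE
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# proposal: log the state changes at INFO instead
log4j.logger.state.change.logger=INFO, stateChangeAppender
{noformat}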



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk11 #595

2019-06-03 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8469) Named Suppress Operator Needs to increment Name Counter

2019-06-03 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8469:
--

 Summary: Named Suppress Operator Needs to increment Name Counter
 Key: KAFKA-8469
 URL: https://issues.apache.org/jira/browse/KAFKA-8469
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck


The {{KTable#suppress}} operator can take a user-supplied name for the 
operator.  If the user does supply a name, the code should still increment the 
name counter index to ensure any downstream operators that use the generated 
name don't change.
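
For example (a sketch; the topology and names are made up):
{noformat}
KTable<Windowed<String>, Long> counts = input
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
    .count()
    // User-supplied name: the internal name counter should still advance here ...
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
                        .withName("my-suppress"));
    // ... so the auto-generated names of any downstream operators stay stable.
{noformat}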



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Ismael Juma
Thanks for managing the release Vahid!

Ismael

On Mon, Jun 3, 2019 at 7:23 AM Vahid Hashemian  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.2.1
>
> This is a bugfix release for Kafka 2.2.0. All of the changes in this
> release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
>
> You can download the source and binary release from:
> https://kafka.apache.org/downloads#2.2.1
>
>
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an output
> stream to one or more output topics, effectively transforming the input
> streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might capture
> every change to a table.
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react to the
> streams of data.
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 30 contributors to this release!
>
> Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil Shah,
> Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
> Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker, pkleindl,
> Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian, Victoria
> Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
>
> We welcome your help and feedback. For more information on how to report
> problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
> Regards,
> --Vahid Hashemian
>


[ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Vahid Hashemian
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.2.1

This is a bugfix release for Kafka 2.2.0. All of the changes in this
release can be found in the release notes:
https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html

You can download the source and binary release from:
https://kafka.apache.org/downloads#2.2.1

---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an output
stream to one or more output topics, effectively transforming the input
streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.

With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react to the
streams of data.

Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 30 contributors to this release!

Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil Shah,
Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker, pkleindl,
Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian, Victoria
Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang

We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!

Regards,
--Vahid Hashemian


Re: Request for Contributor Permissions

2019-06-03 Thread Bill Bejeck
Hi,

You're all set now!  Thanks for your interest in Apache Kafka.

-Bill

On Mon, Jun 3, 2019 at 4:30 AM John Park <87johnpar...@gmail.com> wrote:

> Hi,
>
> My name is In Park and I would like to contribute to Apache Kafka.
> Could you give me contributor permissions?
> My Jira ID is: in-park
>
> Regards,
> In
>


kafka connect stops consuming data when kafka broker goes down

2019-06-03 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
Hello Kafka dev,

We are encountering an issue when Kafka Connect runs an HDFS sink connector
that pulls data from Kafka and writes it to an HDFS location.
While data is flowing through the pipeline (producer -> Kafka topic -> Kafka
Connect HDFS sink connector -> HDFS), if even one of the Kafka brokers goes
down, the Connect framework stops responding: it stops consuming records and
its REST API becomes unresponsive.

Until the Kafka Connect framework is restarted, it does not pull data from
Kafka and the REST API remains unresponsive. Nothing shows up in the logs
either. We checked the topics Kafka Connect uses; every partition has been
reassigned to another broker and has a leader available.

Has anyone encountered this issue? What would be the expected behavior?

Thanks in advance
Kaushik
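
When this happens, one way to narrow down whether the whole worker is wedged
(rather than just the connector) is to probe the Connect REST interface
directly. A minimal Java sketch, assuming the default REST port 8083 and a
connector named hdfs-sink (both placeholders, not taken from the report
above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectStatusProbe {
    public static void main(String[] args) throws Exception {
        // Worker host/port and connector name are assumptions for illustration.
        String url = "http://localhost:8083/connectors/hdfs-sink/status";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        // A healthy worker returns connector and task states; a hang here
        // would be consistent with the behavior described above.
        System.out.println(response.statusCode() + " " + response.body());
    }
}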


[jira] [Created] (KAFKA-8468) AdminClient.deleteTopics doesn't wait until topic is deleted

2019-06-03 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8468:


 Summary: AdminClient.deleteTopics doesn't wait until topic is deleted
 Key: KAFKA-8468
 URL: https://issues.apache.org/jira/browse/KAFKA-8468
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1, 2.2.0
Reporter: Gabor Somogyi


Please see the example app to reproduce the issue: 
https://github.com/gaborgsomogyi/kafka-topic-stress

ZKUtils has been deprecated since Kafka 2.0.0, but there is no real alternative:
* deleteTopics doesn't wait until the topic is actually deleted
* ZookeeperClient has "private [kafka]" visibility
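
Until the AdminClient offers a built-in way to wait, a caller can approximate
it by polling the topic list after the delete call. A rough sketch, with the
bootstrap address, topic name, and timeout chosen purely for illustration:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class DeleteTopicAndWait {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust for your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            String topic = "topic-to-delete"; // hypothetical topic name
            // The returned future completes when the broker accepts the
            // deletion, not necessarily when the topic is actually gone.
            admin.deleteTopics(Collections.singleton(topic)).all().get();
            // Work around the behavior described above by polling the
            // broker's topic list until the topic disappears (30s cap).
            long deadline = System.currentTimeMillis() + 30_000;
            while (admin.listTopics().names().get().contains(topic)) {
                if (System.currentTimeMillis() > deadline) {
                    throw new IllegalStateException("Topic still present: " + topic);
                }
                Thread.sleep(500);
            }
        }
    }
}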




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8467) Trogdor - Add option to delete topic in RoundTripWorker

2019-06-03 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-8467:
--

 Summary: Trogdor - Add option to delete topic in RoundTripWorker
 Key: KAFKA-8467
 URL: https://issues.apache.org/jira/browse/KAFKA-8467
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


Trogdor's RoundTripWorker verifies that each message produced to the cluster
gets consumed. It automatically creates the topics we want to use, but it
does not delete them at the end.

As topic deletion is an essential feature of Kafka, we should enable testing
that too. Experience shows that we still have some bugs around that code
path, e.g., a recently uncovered memory leak:
https://issues.apache.org/jira/browse/KAFKA-8448



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-334 Include partitions in exceptions raised during consumer record deserialization/validation

2019-06-03 Thread Stanislav Kozlovski
Do people agree with the approach I outlined in my last reply?
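
For concreteness, here is a rough sketch of what skipping a bad record could
look like on the application side under the single-exception approach; the
exception's package and accessor names (topicPartition(), offset()) are
assumptions based on the proposal, and the connection settings are
placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RecordDeserializationException;

public class SkipBadRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "skip-demo");                // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic")); // placeholder
            while (true) {
                try {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r ->
                        System.out.println(r.offset() + ": " + r.value()));
                } catch (RecordDeserializationException e) {
                    // Accessors assumed from the proposal: the partition and
                    // the offset of the record that failed to deserialize.
                    // Seek one past it and keep consuming, skipping the record.
                    consumer.seek(e.topicPartition(), e.offset() + 1);
                }
            }
        }
    }
}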

On Mon, May 6, 2019 at 2:12 PM Stanislav Kozlovski wrote:

> Hey there Kamal,
>
> I'm sincerely sorry for missing your earlier message. As I open this
> thread up, I see I have an unsent draft message about resuming discussion
> from some time ago.
>
> In retrospect, I think I may have been too pedantic with the exception
> naming and hierarchy.
> I now believe a single exception type of `RecordDeserializationException`
> is enough. Let's go with that.
>
> On Mon, May 6, 2019 at 6:40 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
>> Matthias,
>>
>> We already have CorruptRecordException, which doesn't extend
>> SerializationException. So we need an alternative name for the
>> corrupted-record error if we decide to extend the FaultyRecordException
>> class.
>>
>> Stanislav,
>>
>> Our users are also facing this error. Could we bump up this discussion?
>>
>> I think we can have a single exception type
>> FaultyRecordException/RecordDeserializationException to capture both
>> errors. We can add an enum field to differentiate the errors if
>> required.
>>
>> Thanks,
>> Kamal Chandraprakash
>>
>> On Wed, Apr 24, 2019 at 1:49 PM Kamal Chandraprakash <
>> kamal.chandraprak...@gmail.com> wrote:
>>
>> > Stanislav,
>> >
>> > Any updates on this KIP? We have internal users who want to skip
>> > corrupted messages while consuming records.
>> >
>> >
>> > On Fri, Oct 19, 2018 at 11:34 PM Matthias J. Sax wrote:
>> >
>> >> I am not 100% familiar with the details of the consumer code; however,
>> >> I tend to disagree with:
>> >>
>> >> > There's no difference between the two cases -- if (and only if) the
>> >> > message is corrupt, it can't be deserialized. If (and only if) it
>> >> > can't be deserialized, it is corrupt.
>> >>
>> >> Assume that a user configures a JSON deserializer but a faulty upstream
>> >> producer writes an Avro message. In this case, the message is not
>> >> corrupted, but it still can't be deserialized. And I can imagine that
>> >> users want to handle both cases differently.
>> >>
>> >> Thus, I think it makes sense to have two different exceptions
>> >> `RecordDeserializationException` and `CorruptedRecordException` that can
>> >> both extend `FaultyRecordException` (don't like this name too much
>> >> honestly, but don't have a better idea for it anyway).
>> >>
>> >> Side remark: if we introduce `RecordDeserializationException` and
>> >> `CorruptedRecordException`, we can also add an interface that both
>> >> implement to return partition/offset information, and let both extend
>> >> `SerializationException` directly without an intermediate class in the
>> >> exception hierarchy.
>> >>
>> >>
>> >> -Matthias
>> >>
>> >> On 8/8/18 2:57 AM, Stanislav Kozlovski wrote:
>> >> >> If you are inheriting from SerializationException, your derived
>> >> >> class should also be a kind of serialization exception. Not
>> >> >> something more general.
>> >> > Yeah, the reason for inheriting it would be for
>> >> > backwards-compatibility.
>> >> >
>> >> >> Hmm. Can you think of any new scenarios that would make Kafka force
>> >> >> the user to skip a specific record? Perhaps one scenario is if
>> >> >> records are lost but we don't know how many.
>> >> > Not on the spot, but I do wonder how likely a new scenario is to
>> >> > surface in the future and how we'd handle the exceptions' class
>> >> > hierarchy then.
>> >> >
>> >> >> Which offset were we planning to use in the exception?
>> >> > The offset of the record which caused the exception. In the case of
>> >> > batches, we use the last offset of the batch. In both cases, users
>> >> > would have to seek +1 from the given offset. You can review the PR to
>> >> > ensure it's accurate.
>> >> >
>> >> >
>> >> > If both of you prefer `RecordDeserializationException`, we can go
>> >> > with that. Please do confirm that is okay.
>> >> >
>> >> > On Tue, Aug 7, 2018 at 11:35 PM Jason Gustafson wrote:
>> >> >
>> >> >> One difference between the two cases is that we can't generally
>> >> >> trust the offset of a corrupt message. Which offset were we planning
>> >> >> to use in the exception? Maybe it should be either the fetch offset
>> >> >> or one plus the last consumed offset? I think I'm with Colin in
>> >> >> preferring RecordDeserializationException for both cases if
>> >> >> possible. For one thing, that makes the behavior consistent whether
>> >> >> or not `check.crcs` is enabled.
>> >> >>
>> >> >> -Jason
>> >> >>
>> >> >> On Tue, Aug 7, 2018 at 11:17 AM, Colin McCabe wrote:
>> >> >>
>> >> >>> Hi Stanislav,
>> >> >>>
>> >> >>> On Sat, Aug 4, 2018, at 10:44, Stanislav Kozlovski wrote:
>> >> >>>> Hey Colin,
>> >> >>>>
>> >> >>>> It may be a bit vague but keep in mind we also raise the exception
>> >> >>>> in the case where the record is 

[jira] [Created] (KAFKA-8466) Remove 'jackson-module-scala' dependency (and replace it with some code)

2019-06-03 Thread JIRA
Dejan Stojadinović created KAFKA-8466:
-

 Summary: Remove 'jackson-module-scala' dependency (and replace it with some code)
 Key: KAFKA-8466
 URL: https://issues.apache.org/jira/browse/KAFKA-8466
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Dejan Stojadinović
Assignee: Dejan Stojadinović


*Prologue:* 
 * [https://github.com/apache/kafka/pull/5454#issuecomment-497323889]
 * [https://github.com/apache/kafka/pull/5726/files#r289078080]

*Rationale:* one dependency less is always a good thing.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Request for Contributor Permissions

2019-06-03 Thread John Park
Hi,

My name is In Park and I would like to contribute to Apache Kafka.
Could you give me contributor permissions?
My Jira ID is: in-park

Regards,
In