[jira] [Updated] (KAFKA-4679) Remove unstable markers from Connect APIs

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4679:
-----------------------------------------
Status: Patch Available  (was: Open)

> Remove unstable markers from Connect APIs
> -----------------------------------------
>
> Key: KAFKA-4679
> URL: https://issues.apache.org/jira/browse/KAFKA-4679
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.2.0
>
>
> Connect has had a stable API for a while now and we are careful about 
> compatibility. It's safe to remove the unstable markers now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4679) Remove unstable markers from Connect APIs

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4679:
-----------------------------------------
Priority: Blocker  (was: Major)

> Remove unstable markers from Connect APIs
> -----------------------------------------
>
> Key: KAFKA-4679
> URL: https://issues.apache.org/jira/browse/KAFKA-4679
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.2.0
>
>
> Connect has had a stable API for a while now and we are careful about 
> compatibility. It's safe to remove the unstable markers now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4450) Add missing 0.10.1.x upgrade tests and ensure ongoing compatibility checks

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4450:
-----------------------------------------
Status: Patch Available  (was: Open)

> Add missing 0.10.1.x upgrade tests and ensure ongoing compatibility checks
> --------------------------------------------------------------------------
>
> Key: KAFKA-4450
> URL: https://issues.apache.org/jira/browse/KAFKA-4450
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.10.1.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.2.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We have upgrade system tests, but we neglected to update them for the most 
> recent released versions (we only have LATEST_0_10_0 but not something from 
> 0_10_1).
> We should probably not only add these versions, but also a) make sure some 
> TRUNK version is always included, since upgrading to trunk should always be 
> possible, to avoid issues for anyone deploying off trunk (we want every commit 
> to trunk to be solid & compatible), and b) make sure there aren't gaps between 
> the versions annotated on the test and the versions that are officially released 
> (which may not be easy to check statically with the decorators, but might be 
> possible by checking the kafkatest version against previous versions and 
> looking for gaps?).
> Perhaps we need to be able to get the most recent release/snapshot version 
> from the python code so we can always validate previous versions? Even if 
> that's possible, is there going to be a reliable way to get all the previous 
> released versions so we can make sure we have all upgrade tests in place?
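The gap check suggested in b) is essentially a set difference between the
officially released versions and the versions annotated on the upgrade test.
The real system tests live in the Python kafkatest package; the sketch below
just illustrates the idea, and both version lists are hypothetical:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class UpgradeTestGapCheck {
        public static void main(String[] args) {
            // Hypothetical inputs: what has been released vs. what the
            // upgrade test is annotated with.
            List<String> released =
                Arrays.asList("0.10.0.0", "0.10.0.1", "0.10.1.0", "0.10.1.1");
            Set<String> annotated =
                new HashSet<>(Arrays.asList("0.10.0.0", "0.10.0.1"));

            List<String> missing = new ArrayList<>();
            for (String version : released)
                if (!annotated.contains(version))
                    missing.add(version);

            // Prints: Upgrade tests missing for releases: [0.10.1.0, 0.10.1.1]
            System.out.println("Upgrade tests missing for releases: " + missing);
        }
    }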



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3209:
-----------------------------------------
Status: Patch Available  (was: Reopened)

> Support single message transforms in Kafka Connect
> --------------------------------------------------
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.
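To make this concrete, here is a minimal sketch of such a transform, written
against the Transformation interface proposed in KIP-66 (the DropByTopic class
and its "topic" config are hypothetical; returning null from apply() discards
the record, which covers the filtering use case described above):

    import java.util.Map;

    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.ConnectRecord;
    import org.apache.kafka.connect.transforms.Transformation;

    // Example transform: filter out any record from a configured topic.
    public class DropByTopic<R extends ConnectRecord<R>> implements Transformation<R> {
        public static final ConfigDef CONFIG_DEF = new ConfigDef()
            .define("topic", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                    "Records from this topic are dropped.");

        private String topic;

        @Override
        public void configure(Map<String, ?> props) {
            topic = (String) props.get("topic");
        }

        @Override
        public R apply(R record) {
            // Returning null tells the runtime to discard the record.
            return topic.equals(record.topic()) ? null : record;
        }

        @Override
        public ConfigDef config() {
            return CONFIG_DEF;
        }

        @Override
        public void close() {}
    }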



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15842359#comment-15842359
 ] 

ASF GitHub Bot commented on KAFKA-3209:
---------------------------------------

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/2458

KAFKA-3209: KIP-66: Flatten and Cast single message transforms



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-3209-even-more-transforms

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2458.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2458


commit 14692133dc80552410b2b0eac76b9fca8a73afe1
Author: Ewen Cheslack-Postava 
Date:   2017-01-27T07:26:02Z

KAFKA-3209: KIP-66: Flatten and Cast single message transforms




> Support single message transforms in Kafka Connect
> --------------------------------------------------
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2458: KAFKA-3209: KIP-66: Flatten and Cast single messag...

2017-01-26 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/2458

KAFKA-3209: KIP-66: Flatten and Cast single message transforms



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-3209-even-more-transforms

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2458.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2458


commit 14692133dc80552410b2b0eac76b9fca8a73afe1
Author: Ewen Cheslack-Postava 
Date:   2017-01-27T07:26:02Z

KAFKA-3209: KIP-66: Flatten and Cast single message transforms




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4706) Unify StreamsKafkaClient instances

2017-01-26 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4706:
----------------------------------

 Summary: Unify StreamsKafkaClient instances
 Key: KAFKA-4706
 URL: https://issues.apache.org/jira/browse/KAFKA-4706
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Matthias J. Sax
Priority: Minor


Kafka Streams currently uses two instances of {{StreamsKafkaClient}} (one in 
{{KafkaStreams}} and one in {{InternalTopicManager}}).

We want to unify both such that only a single instance is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3209:
-----------------------------------------
Priority: Blocker  (was: Major)

> Support single message transforms in Kafka Connect
> --------------------------------------------------
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reopened KAFKA-3209:
------------------------------------------
  Assignee: Ewen Cheslack-Postava  (was: Shikhar Bhushan)

> Support single message transforms in Kafka Connect
> --------------------------------------------------
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Ewen Cheslack-Postava
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #1224

2017-01-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Update JavaDoc for DSL PAPI-API

[jason] MINOR: Close create topics policy during shutdown and more tests

[wangguoz] MINOR: Add Streams system test for broker backwards compatibility

[wangguoz] MINOR: Streams API JavaDoc improvements

------------------------------------------
[...truncated 4076 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED


Re: [DISCUSS]- JIRA ISSUE KAFKA-4566 : Can't Symlink to Kafka bins

2017-01-26 Thread Akhilesh Naidu
Hi Colin,


I had picked up this bug from when it was reported, specifically for the script 
'kafka-console-consumer.sh'.

Going through the bug description, it seemed the reporter wanted to call the 
scripts from another location using symlinks, hence the extra code to 
replicate the readlink behavior.

Going ahead, we thought this could also end up being a generic case for all 
the other scripts in the bin directory, hence the question about having the 
same code duplicated in all the other files there.




Regards

Akhilesh





From: Colin McCabe 
Sent: Thursday, January 26, 2017 12:55:16 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS]- JIRA ISSUE KAFKA-4566 : Can't Symlink to Kafka bins

Thanks for looking at this, Akhilesh.  Can you be a little bit clearer
about why keeping all the scripts or script symlinks in the same
directory is not an option for you?


best,

Colin



On Tue, Jan 24, 2017, at 06:01, Akhilesh Naidu wrote:

> Hi,
>
> The bug deals with the problem one encounters when the script
> 'kafka-console-consumer.sh' is executed through a symlink placed at a
> different location on disk.
>
> The initial suggestion provided in the bug was to change the line
>
>   exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
>
> to use
>
>   "$(dirname "$(readlink -e "$0")")"
>
> But as commented in the bug earlier, the above would be OS dependent, as
> the MacOS version of readlink does not have an -e option.
>
> 1) One approach could be to simulate the working of the readlink
> function in a portable manner. I have a working patch for this.
> The details are available in the comment link
> https://issues.apache.org/jira/browse/KAFKA-4566?focusedCommentId=15831442&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15831442
>
> 2) Seeing that the other scripts in 'kafka/bin/' could have similar
> calls through symlinks, I tried moving the function snippet into a
> separate utilities file in order to reuse it. But if we intend to
> include the utilities file in all the scripts, we need to know the
> exact base location of the utilities file, which is what we wrote the
> function for in the first place :(.
> So the only option seems to be to duplicate the function code in all
> the required scripts.
>
> Any suggestions on how to go about the above?
>
> Regards
>
> Akhilesh


DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.



Re: [DISCUSS] KIP-98: Exactly Once Delivery and Transactional Messaging

2017-01-26 Thread Apurva Mehta
Eugen, moving your email to the main thread so that it doesn't get split.

The `transaction.app.id` is a prerequisite for using transactional APIs.
And only messages wrapped inside transactions will enjoy idempotent
guarantees across sessions, and that too only when they employ a
consume-process-produce pattern.

In other words, producers where the `transaction.app.id` is specified will
not enjoy idempotence across sessions unless their messages are
transactional, i.e. the sends are wrapped between `beginTransaction`,
`sendOffsets`, and `commitTransaction`.

The comment about the heartbeat was just a passing comment about the fact
that an AppId could be expired if a producer doesn't use transactions for a
long time. We don't plan to implement heartbeats in V1, though we might in
the future.

Hope this clarified things.

Regards,
Apurva
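To make the consume-process-produce pattern concrete, here is a minimal
sketch against the producer API proposed in KIP-98 (method names such as
initTransactions and sendOffsetsToTransaction follow the KIP; the config key
uses the draft name transaction.app.id from this thread, and the topics, ids,
and group name are placeholders):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class ConsumeProcessProduce {
        public static void main(String[] args) {
            Properties cp = new Properties();
            cp.put("bootstrap.servers", "localhost:9092");
            cp.put("group.id", "copy-group");
            // Offsets are committed through the producer, inside the transaction.
            cp.put("enable.auto.commit", "false");
            cp.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cp.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Properties pp = new Properties();
            pp.put("bootstrap.servers", "localhost:9092");
            pp.put("transaction.app.id", "copy-app"); // draft name for the transactional id
            pp.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            pp.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
                consumer.subscribe(Collections.singletonList("input"));
                producer.initTransactions();
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    if (records.isEmpty())
                        continue;
                    producer.beginTransaction();
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> r : records) {
                        producer.send(new ProducerRecord<>("output", r.key(), r.value()));
                        offsets.put(new TopicPartition(r.topic(), r.partition()),
                                    new OffsetAndMetadata(r.offset() + 1));
                    }
                    // Input offsets and output records commit or abort together.
                    producer.sendOffsetsToTransaction(offsets, "copy-group");
                    producer.commitTransaction();
                }
            }
        }
    }

Because the consumed offsets are committed through the producer inside the
transaction, the input offsets and the output records become visible (or are
discarded) atomically, which is what gives the pattern its cross-session
guarantee.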


KIP-98 says
>  > transaction.app.id: A unique and persistent way to identify a
> producer. This is used to ensure idempotency and to enable transaction
> recovery or rollback across producer sessions. This is optional: you will
> lose cross-session guarantees if this is blank.
> which might suggest that a producer that does not use the transactional
> features, but does set the transaction.app.id, could get cross-session
> idempotency. But the design document "Exactly Once Delivery and
> Transactional Messaging in Kafka" rules that out:
>  > For the idempotent producer (i.e., producer that do not use
> transactional APIs), currently we do not make any cross-session guarantees
> in any case. In the future, we can extend this guarantee by having the
> producer to periodically send InitPIDRequest to the transaction coordinator
> to keep the AppID from expiring, which preserves the producer's zombie
> defence.
> Until that point in the future, could my non-transactional producer send an
> InitPIDRequest once and then heartbeat via BeginTxnRequest/EndTxnRequest(ABORT)
> in intervals less than transaction.app.id.timeout.ms in order to
> guarantee cross-session idempotency? Or is that not guaranteed because
> "currently we do not make any cross-session guarantees in any case"? I know
> this would be an ugly hack.
> I guess that is also what the recently added "Producer HeartBeat" feature
> proposal would address - although it is described as preventing idle
> transactional producers from having their AppIds expire.
>
> Related question: If KIP-98 does not make cross-session guarantees for
> idempotent producers, is the only improvement over the current idempotency
> situation the prevention of duplicate messages in case of a partition
> leader migration? Because if a broker fails or the publisher fails, KIP-98
> does not seem to change the risk of dupes for non-transactional producers.





>
> Btw: Good job! Both in terms of Kafka in general, and KIP-98 in particular


Cheers

On Wed, Jan 25, 2017 at 6:00 PM, Apurva Mehta  wrote:

>
>
> On Tue, Jan 17, 2017 at 6:17 PM, Apurva Mehta  wrote:
>
>> Hi Jun,
>>
>> Some answers in line.
>>
>>
>> 109. Could you describe when Producer.send() will receive an
>> UnrecognizedMessageException?
>>
>>
>> This exception will be thrown if the producer sends a sequence number
>> which is greater than the sequence number expected by the broker (ie. more
>> than 1 greater than the previously sent sequence number). This can happen
>> in two cases:
>>
>> a) If there is a bug in the producer where sequence numbers are
>> incremented more than once per message. So the producer itself will send
>> messages with gaps in sequence numbers.
>> b) The broker somehow lost a previous message. In a cluster configured
>> for durability (i.e. no unclean leader elections, replication factor of 3,
>> min.isr of 2, acks=all, etc.), this should not happen.
>>
>> So realistically, this exception will only be thrown in clusters
>> configured for high availability where brokers could lose messages.
>>
>> Becket raised the question if we should throw this exception at all in
>> case b: it indicates a problem with a previously sent message and hence the
>> semantics are counter intuitive. We are still discussing this point, and
>> suggestions are most welcome!
>>
>>
> I updated the KIP wiki to clarify when this exception will be raised.
>
> First of all, I renamed this to OutOfOrderSequenceException. Based on
> Jay's suggestion, this is a more precise name that is easier to understand.
>
> Secondly, I updated the proposed API so that the send call will never
> raise this exception directly. Instead this exception will be returned in
> the future or passed with the callback, if any. Further, since this is a
> fatal exception, any _future_ invocations of send() or other data
> generating methods in the producer will raise an IllegalStateException. I
> think this makes the semantics clearer and addresses the feedback on this
> part of the API update.
>
> Thanks,
> Apurva
>
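To illustrate the updated semantics: per the proposal above, the renamed
OutOfOrderSequenceException is delivered through the callback or the returned
future rather than thrown from send() itself, and the producer must then be
treated as failed. A minimal sketch (topic and bootstrap servers are
placeholders):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.OutOfOrderSequenceException;

    public class FatalSequenceErrorHandling {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("events", "key", "value"), (metadata, exception) -> {
                if (exception instanceof OutOfOrderSequenceException) {
                    // Fatal per the proposal: close the producer; any further
                    // send() on it would raise IllegalStateException.
                    producer.close();
                }
            });
        }
    }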


[GitHub] kafka pull request #2437: MINOR: Streams API JavaDoc improvements

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2437


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2403: MINOR: add Streams system test for broker backward...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2403


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2443: MINOR: Close create topics policy during shutdown ...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2443


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2413: MINOR: update JavaDoc for DSL PAPI-API

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2413


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Guaranteeing cross-session idempotency with KIP-98

2017-01-26 Thread Eugen Dueck

KIP-98 says

 > transaction.app.id: A unique and persistent way to identify a 
producer. This is used to ensure idempotency and to enable transaction 
recovery or rollback across producer sessions. This is optional: you 
will lose cross-session guarantees if this is blank.


which might suggest that a producer that does not use the transactional 
features, but does set the transaction.app.id, could get cross-session 
idempotency. But the design document "Exactly Once Delivery and 
Transactional Messaging in Kafka" rules that out:


 > For the idempotent producer (i.e., producer that do not use 
transactional APIs), currently we do not make any cross-session 
guarantees in any case. In the future, we can extend this guarantee by 
having the producer to periodically send InitPIDRequest to the 
transaction coordinator to keep the AppID from expiring, which preserves 
the producer's zombie defence.


Until that point in the future, could my non-transactional producer send 
an InitPIDRequest once and then heartbeat via 
BeginTxnRequest/EndTxnRequest(ABORT) in intervals less than 
transaction.app.id.timeout.ms in order to guarantee cross-session 
idempotency? Or is that not guaranteed because "currently we do not make 
any cross-session guarantees in any case"? I know this would be an 
ugly hack.


I guess that is also what the recently added "Producer HeartBeat" 
feature proposal would address - although it is described as preventing 
idle transactional producers from having their AppIds expire.



Related question: If KIP-98 does not make cross-session guarantees for 
idempotent producers, is the only improvement over the current 
idempotency situation the prevention of duplicate messages in case of a 
partition leader migration? Because if a broker fails or the publisher 
fails, KIP-98 does not seem to change the risk of dupes for 
non-transactional producers.


Btw: Good job! Both in terms of Kafka in general, and KIP-98 in particular

Cheers
Eugen


[jira] [Commented] (KAFKA-4450) Add missing 0.10.1.x upgrade tests and ensure ongoing compatibility checks

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15840900#comment-15840900
 ] 

ASF GitHub Bot commented on KAFKA-4450:
---------------------------------------

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/2457

KAFKA-4450: Add upgrade tests for 0.10.1 releases and rename TRUNK to 
CURRENT_BRANCH to reduce confusion.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-4450-upgrade-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2457.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2457


commit b0d74ef691214010ecfa4c0e27283fdc2373f4b4
Author: Ewen Cheslack-Postava 
Date:   2017-01-26T18:45:06Z

KAFKA-4450: Add upgrade tests for 0.10.1 releases and rename TRUNK to 
CURRENT_BRANCH to reduce confusion.




> Add missing 0.10.1.x upgrade tests and ensure ongoing compatibility checks
> --------------------------------------------------------------------------
>
> Key: KAFKA-4450
> URL: https://issues.apache.org/jira/browse/KAFKA-4450
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.10.1.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.2.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We have upgrade system tests, but we neglected to update them for the most 
> recent released versions (we only have LATEST_0_10_0 but not something from 
> 0_10_1).
> We should probably not only add these versions, but also a) make sure some 
> TRUNK version is always included, since upgrading to trunk should always be 
> possible, to avoid issues for anyone deploying off trunk (we want every commit 
> to trunk to be solid & compatible), and b) make sure there aren't gaps between 
> the versions annotated on the test and the versions that are officially released 
> (which may not be easy to check statically with the decorators, but might be 
> possible by checking the kafkatest version against previous versions and 
> looking for gaps?).
> Perhaps we need to be able to get the most recent release/snapshot version 
> from the python code so we can always validate previous versions? Even if 
> that's possible, is there going to be a reliable way to get all the previous 
> released versions so we can make sure we have all upgrade tests in place?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2457: KAFKA-4450: Add upgrade tests for 0.10.1 releases ...

2017-01-26 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/2457

KAFKA-4450: Add upgrade tests for 0.10.1 releases and rename TRUNK to 
CURRENT_BRANCH to reduce confusion.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-4450-upgrade-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2457.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2457


commit b0d74ef691214010ecfa4c0e27283fdc2373f4b4
Author: Ewen Cheslack-Postava 
Date:   2017-01-26T18:45:06Z

KAFKA-4450: Add upgrade tests for 0.10.1 releases and rename TRUNK to 
CURRENT_BRANCH to reduce confusion.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #1222

2017-01-26 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1885

2017-01-26 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Change logging level for ignored maybeAddMetric from debug to

[ismael] KAFKA-4636; Per listener security settings overrides (KIP-103)

------------------------------------------
[...truncated 8307 lines...]

kafka.log.LogTest > testReadWithTooSmallMaxLength STARTED

kafka.log.LogTest > testReadWithTooSmallMaxLength PASSED

kafka.log.LogTest > testOverCompactedLogRecovery STARTED

kafka.log.LogTest > testOverCompactedLogRecovery PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages STARTED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
STARTED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets STARTED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED


Re: [DISCUSS] KIP-111 : Kafka should preserve the Principal generated by the PrincipalBuilder while processing the request received on socket channel, on the broker.

2017-01-26 Thread Mayuresh Gharat
Hi Dong,

Thanks for the review. Please see the replies inline.


1. I am not sure we need to add the method buildPrincipal(Map<String, ?>
principalConfigs). It seems that the user can simply do
principalBuilder.configure(...).buildPrincipal(...) without using that
method.
-> I am not sure if I understand the question.
buildPrincipal(Map<String, ?> principalConfigs) will be used to build
individual Principals from the passed-in configs. Each Principal can be a
different type, and the PrincipalBuilder is responsible for handling those
configs correctly and building those Principals.

2. Is there any specific reason that we should put the
channelPrincipal in the KafkaPrincipal class instead of the Session class? If
they work equally well to serve the use-case of this KIP, then it seems
better to put this field in the Session class to avoid changing the interface
that needs to be implemented by a custom principal.
-> Doing this might be backwards incompatible, as we need to
preserve the existing behavior of kafka-acls.sh. Also, as we have a field
PrincipalType which can be used in the future if Kafka decides to support
different Principal types (currently it just says "User"), we might lose
that functionality.

Thanks,

Mayuresh
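To illustrate the proposal, here is a sketch of the buildPrincipal method
under discussion (the method is the one proposed in this thread, not an
existing Kafka API; the CustomPrincipal class and the config keys are
hypothetical):

    import java.security.Principal;
    import java.util.Map;

    public class ExamplePrincipalBuilder {

        // Proposed: build a Principal from command-line-supplied configs.
        public Principal buildPrincipal(Map<String, ?> principalConfigs) {
            String type = (String) principalConfigs.get("principal.type"); // e.g. "User", "Group"
            String name = (String) principalConfigs.get("principal.name");
            return new CustomPrincipal(type, name);
        }

        // A custom principal carrying a field beyond the name, which is the
        // motivation described above.
        static final class CustomPrincipal implements Principal {
            private final String type;
            private final String name;

            CustomPrincipal(String type, String name) {
                this.type = type;
                this.name = name;
            }

            @Override
            public String getName() { return name; }

            public String principalType() { return type; }

            @Override
            public String toString() { return type + ":" + name; }
        }
    }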


On Tue, Jan 24, 2017 at 3:35 PM, Dong Lin  wrote:

> Hey Mayuresh,
>
> Thanks for the KIP. I actually like the suggestions by Ismael and Jun. Here
> are my comments:
>
> 1. I am not sure we need to add the method buildPrincipal(Map<String, ?>
> principalConfigs). It seems that the user can simply do
> principalBuilder.configure(...).buildPrincipal(...) without using that
> method.
>
> 2. Is there any specific reason that we should put the
> channelPrincipal in the KafkaPrincipal class instead of the Session class? If
> they work equally well to serve the use-case of this KIP, then it seems
> better to put this field in the Session class to avoid changing the interface
> that needs to be implemented by a custom principal.
>
> Dong
>
>
> On Mon, Jan 23, 2017 at 5:55 PM, Mayuresh Gharat <
> gharatmayures...@gmail.com
> > wrote:
>
> > Hi Rajini,
> >
> > Thanks a lot for the review. Please see the comments inline :
> >
> > It feels like the goal is to expose custom Principal as an
> > opaque object between PrincipalBuilder and Authorizer so that Kafka doesn't
> > really need to know anything about additional stuff added for
> > customization. But kafka-acls.sh is expecting a key-value map from which
> > Principal is constructed. This is a breaking change to the PrincipalBuilder
> > interface - and I am not sure what it achieves.
> > -> kafka-acls is a command-line tool where currently we just specify
> > the "names" of the principals that are allowed or denied.
> > The Principal generated by PrincipalBuilder is still opaque and Kafka as
> > such does not need to know the details.
> > The key-value map that is passed in will be used specifically by the
> > user PrincipalBuilder to create the Principal. The main motivation of the
> > KIP is that the Principal built by the PrincipalBuilder can have other
> > fields apart from the "name", which are ignored currently. Allowing a
> > key-value map to be passed in will enable the PrincipalBuilder to create
> > such a Principal.
> >
> > 1. A custom Principal is (a) created during authentication using a custom
> > PrincipalBuilder, (b) checked during authorization using Principal.equals(),
> > and (c) stored in Zookeeper using Principal.toString(). Is that correct?
> > -> The authorization will be done as per the user-supplied
> > Authorizer. As not everyone might be using Zookeeper for storing ACLs, its
> > storage is again Authorizer implementation dependent.
> >
> > 2. Is the reason for the new parameters in kafka-acls.sh and the breaking
> > change in the PrincipalBuilder interface to enable users to specify a Principal
> > using properties rather than create the String in 1c) themselves?
> > -> Please see the explanation above.
> >
> > 3. Since the purpose of the new PrincipalBuilder method
> > buildPrincipal(Map<String, ?> principalConfigs) is to create a new Principal
> > from command line parameters, perhaps Properties or Map<String, String> would
> > be more appropriate?
> > -> Yes we can, but I actually prefer to keep it similar to the
> > configure(Map<String, ?> configs) API.
> >
> >
> > Hi Ismael,
> >
> > Thanks a lot for the review. Please see the comments inline.
> >
> > 1. PrincipalBuilder implements Configurable and gets a map of properties
> > via the `configure` method. Do we really need a new `buildPrincipal` method
> > given that?
> > --> The configure() API will actually be used to configure the
> > PrincipalBuilder in the same way as the Authorizer. The buildPrincipal()
> > API will be used by the PrincipalBuilder to build individual principals.
> > Each of these principals can be of different custom types like
> > GroupPrincipals, ServicePrincipals and so on, based 

[jira] [Commented] (KAFKA-4636) Per listener security setting overrides (KIP-103)

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15840801#comment-15840801
 ] 

ASF GitHub Bot commented on KAFKA-4636:
---------------------------------------

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2406


> Per listener security setting overrides (KIP-103)
> -------------------------------------------------
>
> Key: KAFKA-4636
> URL: https://issues.apache.org/jira/browse/KAFKA-4636
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: kip
> Fix For: 0.10.3.0
>
>
> This is a follow-up to KAFKA-4565 where most of KIP-103 was implemented. I 
> quote the missing bit from the KIP:
> "Finally, we make it possible to provide different security (SSL and SASL) 
> settings for each listener name by adding a normalised prefix (the listener 
> name is lowercased)  to the config name. For example, if we wanted to set a 
> different keystore for the CLIENT listener, we would set a config with name 
> listener.name.client.ssl.keystore.location. If the config for the listener 
> name is not set, we will fallback to the generic config (i.e. 
> ssl.keystore.location) for compatibility and convenience. For the SASL case, 
> some configs are provided via a JAAS file, which consists of one or more 
> entries. The broker currently looks for an entry named KafkaServer. We will 
> extend this so that the broker first looks for an entry with a lowercased 
> listener name followed by a dot as a prefix to the existing name. For the 
> CLIENT listener example, the broker would first look for client.KafkaServer 
> with a fallback to KafkaServer, if necessary."
> KIP link for details:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
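Expressed as broker configuration, the fallback described in the quoted KIP
text would look roughly like this (keystore paths are placeholders):

    # Generic setting, used by any listener without an override.
    ssl.keystore.location=/var/private/ssl/server.keystore.jks

    # Override for the listener named CLIENT; note the lowercased listener
    # name in the config prefix.
    listener.name.client.ssl.keystore.location=/var/private/ssl/client.keystore.jks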



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4636) Per listener security setting overrides (KIP-103)

2017-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4636:
-------------------------------
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2406
[https://github.com/apache/kafka/pull/2406]

> Per listener security setting overrides (KIP-103)
> -------------------------------------------------
>
> Key: KAFKA-4636
> URL: https://issues.apache.org/jira/browse/KAFKA-4636
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: kip
> Fix For: 0.10.3.0
>
>
> This is a follow-up to KAFKA-4565 where most of KIP-103 was implemented. I 
> quote the missing bit from the KIP:
> "Finally, we make it possible to provide different security (SSL and SASL) 
> settings for each listener name by adding a normalised prefix (the listener 
> name is lowercased)  to the config name. For example, if we wanted to set a 
> different keystore for the CLIENT listener, we would set a config with name 
> listener.name.client.ssl.keystore.location. If the config for the listener 
> name is not set, we will fallback to the generic config (i.e. 
> ssl.keystore.location) for compatibility and convenience. For the SASL case, 
> some configs are provided via a JAAS file, which consists of one or more 
> entries. The broker currently looks for an entry named KafkaServer. We will 
> extend this so that the broker first looks for an entry with a lowercased 
> listener name followed by a dot as a prefix to the existing name. For the 
> CLIENT listener example, the broker would first look for client.KafkaServer 
> with a fallback to KafkaServer, if necessary."
> KIP link for details:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2406: KAFKA-4636; Per listener security settings overrid...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2406


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4705) ConfigDef should support deprecated keys as synonyms

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15840787#comment-15840787
 ] 

ASF GitHub Bot commented on KAFKA-4705:
---------------------------------------

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2456

KAFKA-4705: ConfigDef should support deprecated keys as synonyms



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-4705

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2456.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2456


commit a9ee457a1e0269a5c0f2460896b262b0a81a7b46
Author: Colin P. Mccabe 
Date:   2017-01-27T01:09:50Z

KAFKA-4705: ConfigDef should support deprecated keys as synonyms




> ConfigDef should support deprecated keys as synonyms
> ----------------------------------------------------
>
> Key: KAFKA-4705
> URL: https://issues.apache.org/jira/browse/KAFKA-4705
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> ConfigDef should support deprecated keys as synonyms.  That way, we can 
> change the name of configuration keys over time, and have a window of time 
> when the old names are supported.
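The requested behaviour, sketched outside ConfigDef for illustration (the key
names are hypothetical; the point is that a value supplied under the old name
satisfies the new one, with a deprecation warning):

    import java.util.HashMap;
    import java.util.Map;

    public class DeprecatedKeySynonym {
        // Resolve newKey, falling back to its deprecated synonym.
        static String resolve(Map<String, String> props, String newKey, String deprecatedKey) {
            if (props.containsKey(newKey))
                return props.get(newKey);
            if (props.containsKey(deprecatedKey)) {
                System.err.println("Config '" + deprecatedKey
                        + "' is deprecated; use '" + newKey + "' instead.");
                return props.get(deprecatedKey);
            }
            return null;
        }

        public static void main(String[] args) {
            Map<String, String> props = new HashMap<>();
            props.put("old.timeout.ms", "30000");
            // Prints 30000 and warns that old.timeout.ms is deprecated.
            System.out.println(resolve(props, "request.timeout.ms", "old.timeout.ms"));
        }
    }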



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4705) ConfigDef should support deprecated keys as synonyms

2017-01-26 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-4705:
-----------------------------------
Status: Patch Available  (was: Open)

> ConfigDef should support deprecated keys as synonyms
> ----------------------------------------------------
>
> Key: KAFKA-4705
> URL: https://issues.apache.org/jira/browse/KAFKA-4705
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> ConfigDef should support deprecated keys as synonyms.  That way, we can 
> change the name of configuration keys over time, and have a window of time 
> when the old names are supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4705) ConfigDef should support deprecated keys as synonyms

2017-01-26 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-4705:
----------------------------------

 Summary: ConfigDef should support deprecated keys as synonyms
 Key: KAFKA-4705
 URL: https://issues.apache.org/jira/browse/KAFKA-4705
 Project: Kafka
  Issue Type: Improvement
  Components: config
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


ConfigDef should support deprecated keys as synonyms.  That way, we can change 
the name of configuration keys over time, and have a window of time when the 
old names are supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2456: KAFKA-4705: ConfigDef should support deprecated ke...

2017-01-26 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2456

KAFKA-4705: ConfigDef should support deprecated keys as synonyms



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-4705

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2456.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2456


commit a9ee457a1e0269a5c0f2460896b262b0a81a7b46
Author: Colin P. Mccabe 
Date:   2017-01-27T01:09:50Z

KAFKA-4705: ConfigDef should support deprecated keys as synonyms




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2454: MINOR: Change loggin for ignored maybeAddMetric fr...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2454


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Trying to understand design decision about producer ack and min.insync.replicas

2017-01-26 Thread Luciano Afranllie
I was thinking about the situation where you have fewer brokers in the ISR
list than the number set in min.insync.replicas.

My idea was that if I, as an administrator, for a given topic, want to
favor durability over availability, then if that topic has fewer in-sync
replicas than the value set in min.insync.replicas I may want to stop
producing to the topic. The way min.insync.replicas and ack work, I need to
coordinate with all producers in order to achieve this. There is no way (or
I don't know of one) to globally enforce that producing to a topic stops if
it is under replicated.

I don't see why, for the same topic, some producers might want to get an
error when the number of in-sync replicas is below min.insync.replicas while
other producers don't. I think it could be more useful to be able to set
that ALL producers should get an error when a given topic is under
replicated so they stop producing, than for a single producer to get an
error when ANY topic is under replicated. I don't have a lot of experience
with Kafka so I may be missing some use cases.

But I understand your point: the min.insync.replicas setting should be
understood as "if a producer wants to get an error when topics are under
replicated, then how many replicas are enough for not raising an error?"
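For reference, the producer-side half of that coordination is a single
config. A minimal sketch, assuming a topic created with
--config min.insync.replicas=2 (topic name and bootstrap servers are
placeholders); only producers that opt in with acks=all will see an error
when the ISR shrinks below the minimum:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DurableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // Only with acks=all is min.insync.replicas enforced for this
            // producer; with acks=0 or acks=1 the broker accepts the write
            // even when the topic is under replicated.
            props.put("acks", "all");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Fails with NotEnoughReplicasException when fewer than
                // min.insync.replicas replicas are in sync for the partition.
                producer.send(new ProducerRecord<>("important-topic", "key", "value"));
            }
        }
    }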


On Thu, Jan 26, 2017 at 4:16 PM, Ewen Cheslack-Postava 
wrote:

> The acks setting for the producer doesn't affect the final durability
> guarantees. These are still enforced by the replication and min ISR
> settings. Instead, the ack setting just lets the producer control how
> durable the write is before *that producer* can consider the write
> "complete", i.e. before it gets an ack.
>
> -Ewen
>
> On Tue, Jan 24, 2017 at 12:46 PM, Luciano Afranllie <
> listas.luaf...@gmail.com> wrote:
>
> > Hi everybody
> >
> > I am trying to understand why Kafka lets each individual producer, on a
> > connection per connection basis, choose the tradeoff between availability
> > and durability, honoring the min.insync.replicas value only if the producer
> > uses ack=all.
> >
> > I mean, for a single topic, cluster administrators can't enforce messages
> > to be stored in a minimum number of replicas without coordinating with all
> > producers to that topic so all of them use ack=all.
> >
> > Is there something that I am missing? Is there any other strategy to
> > overcome this situation?
> >
> > Regards
> > Luciano
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #1221

2017-01-26 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4699; Invoke producer callbacks before completing the future

------------------------------------------
[...truncated 19165 lines...]

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldBuildSimpleGlobalTableTopology STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldBuildSimpleGlobalTableTopology PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldNotMaterializeSourceKTableIfStateNameNotSpecified STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldNotMaterializeSourceKTableIfStateNameNotSpecified PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldAddGlobalTablesToEachGroup STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldAddGlobalTablesToEachGroup PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForHoppingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowAdvance 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowAdvance PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForTumblingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForTumblingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED


[jira] [Commented] (KAFKA-4704) Group coordinator cache loading fails if groupId is used first for consumer groups and then for simple consumer

2017-01-26 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840663#comment-15840663
 ] 

Onur Karaman commented on KAFKA-4704:
-

It might be worth mentioning that I hit a similar scenario when implementing 
the new consumer migration. Rolling back from the migration-aware old consumer 
(or just new consumer) to the old consumer with kafka-based offset storage (or 
{{dual.commit.enabled}}) causes the old consumer offset commits to fail 
silently. This is because by that point, the group has been added to the 
{{GroupCoordinator}} and its generation id has been incremented to be >= 0.  The 
old consumer, on the other hand, is naively sending {{OffsetCommitRequests}} 
with an empty member id and generation id of -1, so the {{GroupCoordinator}} 
will reject the request with {{UNKNOWN_MEMBER_ID}}.

I have a workaround constraint to address the problem, but I'll leave that for 
when I send out the actual proposal.
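
For illustration, the "simple consumer" usage the JIRA title refers to, i.e. a 
groupId used only for offset storage with manual partition assignment and no 
group membership, looks roughly like the following sketch with the Java 
consumer; topic, group, server, and offset values are all hypothetical:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetsOnlySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "my-group"); // group used only to store offsets
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // assign() instead of subscribe(): no JoinGroup is sent, so offset
        // commits carry an empty member id and a generation id of -1.
        TopicPartition tp = new TopicPartition("my-topic", 0);
        consumer.assign(Collections.singletonList(tp));
        consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(42L)));
        consumer.close();
    }
}
{code}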

> Group coordinator cache loading fails if groupId is used first for consumer 
> groups and then for simple consumer
> ---
>
> Key: KAFKA-4704
> URL: https://issues.apache.org/jira/browse/KAFKA-4704
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.2.0
>
>
> When all the members in a consumer group have died and all of its offsets 
> have expired, we write a tombstone to __consumer_offsets so that its group 
> metadata is cleaned up. It is possible that after this happens, the same 
> groupId is then used only for offset storage (i.e. by "simple" consumers). 
> Our current cache loading logic, which is triggered when a coordinator first 
> takes over control of a partition, does not account for this scenario and 
> would currently fail.
> This is probably an unlikely scenario to hit in practice, but it reveals the 
> lack of test coverage around the cache loading logic. We should improve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-0.10.2-jdk7 #41

2017-01-26 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4704) Group coordinator cache loading fails if groupId is used first for consumer groups and then for simple consumer

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840637#comment-15840637
 ] 

ASF GitHub Bot commented on KAFKA-4704:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2455

KAFKA-4704: Coordinator cache loading fails if groupId is reused for offset 
storage after group is removed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4704

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2455.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2455


commit 166e0d95dba89ba7ed9ff975b8353968316248bd
Author: Jason Gustafson 
Date:   2017-01-26T22:51:08Z

KAFKA-4704: Coordinator cache loading fails if groupId is reused for offset 
storage after group is removed




> Group coordinator cache loading fails if groupId is used first for consumer 
> groups and then for simple consumer
> ---
>
> Key: KAFKA-4704
> URL: https://issues.apache.org/jira/browse/KAFKA-4704
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.2.0
>
>
> When all the members in a consumer group have died and all of its offsets 
> have expired, we write a tombstone to __consumer_offsets so that its group 
> metadata is cleaned up. It is possible that after this happens, the same 
> groupId is then used only for offset storage (i.e. by "simple" consumers). 
> Our current cache loading logic, which is triggered when a coordinator first 
> takes over control of a partition, does not account for this scenario and 
> would currently fail.
> This is probably an unlikely scenario to hit in practice, but it reveals the 
> lack of test coverage around the cache loading logic. We should improve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2455: KAFKA-4704: Coordinator cache loading fails if gro...

2017-01-26 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2455

KAFKA-4704: Coordinator cache loading fails if groupId is reused for offset 
storage after group is removed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4704

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2455.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2455


commit 166e0d95dba89ba7ed9ff975b8353968316248bd
Author: Jason Gustafson 
Date:   2017-01-26T22:51:08Z

KAFKA-4704: Coordinator cache loading fails if groupId is reused for offset 
storage after group is removed




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1883

2017-01-26 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #2453: MINOR: Escape '<' and '>' symbols in quickstart st...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2453


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4704) Group coordinator cache loading fails if groupId is used first for consumer groups and then for simple consumer

2017-01-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4704:
--

 Summary: Group coordinator cache loading fails if groupId is used 
first for consumer groups and then for simple consumer
 Key: KAFKA-4704
 URL: https://issues.apache.org/jira/browse/KAFKA-4704
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.1, 0.10.1.0, 0.10.0.1, 0.10.0.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.10.2.0


When all the members in a consumer group have died and all of its offsets have 
expired, we write a tombstone to __consumer_offsets so that its group metadata 
is cleaned up. It is possible that after this happens, the same groupId is then 
used only for offset storage (i.e. by "simple" consumers). Our current cache 
loading logic, which is triggered when a coordinator first takes over control 
of a partition, does not account for this scenario and would currently fail.

This is probably an unlikely scenario to hit in practice, but it reveals the 
lack of test coverage around the cache loading logic. We should improve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4703) Test with two SASL_SSL listeners with different JAAS contexts

2017-01-26 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4703:
--

 Summary: Test with two SASL_SSL listeners with different JAAS 
contexts
 Key: KAFKA-4703
 URL: https://issues.apache.org/jira/browse/KAFKA-4703
 Project: Kafka
  Issue Type: Test
Reporter: Ismael Juma


[~rsivaram] suggested the following in 
https://github.com/apache/kafka/pull/2406:

{quote}
I think this feature allows two SASL_SSL listeners, one for external and one 
for internal and the two can use different mechanisms and different JAAS 
contexts. That makes the multi-mechanism configuration neater. I think it will 
be useful to have an integration test for this, perhaps change 
SaslMultiMechanismConsumerTest.
{quote}

And my reply:

{quote}
I think it's a bit tricky to support multiple listeners in 
KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
MultipleListenersWithSameSecurityProtocolTest.
{quote}
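
For reference, the two-listener setup such a test would exercise looks roughly 
like the following broker config sketch enabled by that PR (KIP-103); the 
listener names and ports here are illustrative:

{code}
# Two SASL_SSL listeners distinguished by listener name (names illustrative)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:SASL_SSL,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL
{code}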



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4700) StreamsKafkaClient drops security configs

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840563#comment-15840563
 ] 

ASF GitHub Bot commented on KAFKA-4700:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2441


> StreamsKafkaClient drops security configs
> -
>
> Key: KAFKA-4700
> URL: https://issues.apache.org/jira/browse/KAFKA-4700
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.2.0, 0.10.3.0
>
>
> `StreamsKafkaClient` takes a `StreamsConfig` and calls `values` on it to 
> create a `ChannelBuilder`. `values` creates a `Map` with parsed values for 
> _defined_ config names. `StreamsConfig` doesn't define security settings 
> itself and hence the security configs are dropped.
> For `KafkaProducer` and `KafkaConsumer` used by Streams, there is some code 
> that gets the original configs (using `originals` instead of `values`) and 
> passes them to the `KafkaConsumer` and `KafkaProducer` constructors (both of 
> which define the security configs).
> The suggested solution is to create a config definition for 
> `StreamsKafkaClient` that includes a copy of the `StreamsConfig` definition 
> combined with the security configs definitions.
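
A stand-in sketch of the `values` vs `originals` distinction the description 
relies on; `DemoConfig` here is hypothetical, not Streams code:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class ValuesVsOriginals {
    // Hypothetical config that, like StreamsConfig, defines no security keys.
    static class DemoConfig extends AbstractConfig {
        static final ConfigDef DEF = new ConfigDef()
            .define("application.id", ConfigDef.Type.STRING,
                    ConfigDef.Importance.HIGH, "demo-only defined key");

        DemoConfig(Map<?, ?> originals) {
            super(DEF, originals);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("application.id", "demo");
        props.put("ssl.keystore.location", "/tmp/ks.jks"); // undefined in DEF
        DemoConfig config = new DemoConfig(props);

        // values() keeps only keys defined in DEF, so the ssl key is dropped:
        System.out.println(config.values().containsKey("ssl.keystore.location"));    // false
        // originals() keeps everything the caller passed in:
        System.out.println(config.originals().containsKey("ssl.keystore.location")); // true
    }
}
{code}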



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka Multiple Consumer Group for Same Topic

2017-01-26 Thread Joris Meijer
Hi Senthil,

You can just try it yourself, using kafka-consumer-perf-test.sh bundled
with Apache Kafka. You can start 2 in parallel with different groups and
see what happens if you tune some parameters.

Joris

On Wed, Jan 25, 2017, 08:38 Senthil Kumar  wrote:

> Thanks Sharninder!
>
> Adding the dev group to know if they have done some benchmarking tests on
> Single Consumer Group Vs Multiple Consumer Groups on the Same Topic.
>
> Cheers,
> Senthil
>
>
> On Jan 24, 2017 10:48 PM, "Sharninder Khera"  wrote:
>
> I don't have benchmarks but multiple consumer groups are possible. For
> Kafka the performance should be similar or close to having multiple
> consumers using a single group.
>
>
> _
> From: Senthil Kumar 
> Sent: Tuesday, January 24, 2017 10:38 PM
> Subject: Kafka Multiple Consumer Group for Same Topic
> To:  
> Cc:  
>
>
> Hi Team ,  Sorry if the same question asked already in this group !
>
> Say we have a topic => ad_events .. I want to read events from the ad_events
> topic and send them to two different systems... This can be achieved by
> creating two Consumer Groups..
>
> Example :  Consumer Group SYS1 with 10 threads
>   Consumer Group SYS2 with 10 threads
>
> Would like to know whether having two different Consumer Groups will impact
> the performance of Kafka reads ??  Also want to see the *Benchmarking Result*
> ( Numbers ) of Single Topic Read with *One Consumer Group* Vs Single Topic
> with *Two/Three Consumer Groups*..
>
>
> Cheers,
> Senthil
>
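
For what it's worth, the fan-out pattern being asked about is just two 
consumers with different group.id values subscribed to the same topic: every 
message is delivered to each group, while partitions are split among the 
members within a group. A minimal sketch; the server address is illustrative:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TwoGroupsSketch {
    public static void main(String[] args) {
        Properties base = new Properties();
        base.put("bootstrap.servers", "localhost:9092"); // illustrative
        base.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        base.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        Properties sys1 = (Properties) base.clone();
        sys1.put("group.id", "SYS1"); // first downstream system
        Properties sys2 = (Properties) base.clone();
        sys2.put("group.id", "SYS2"); // second downstream system

        // Each group independently receives every message of the topic;
        // within a group, partitions are split among that group's members.
        KafkaConsumer<String, String> c1 = new KafkaConsumer<>(sys1);
        KafkaConsumer<String, String> c2 = new KafkaConsumer<>(sys2);
        c1.subscribe(Collections.singletonList("ad_events"));
        c2.subscribe(Collections.singletonList("ad_events"));
        // In real use, poll() each consumer from its own thread.
    }
}
{code}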


[GitHub] kafka pull request #2441: KAFKA-4700: Don't drop security configs in `Stream...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2441


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4700) StreamsKafkaClient drops security configs

2017-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4700:
-
   Resolution: Fixed
Fix Version/s: 0.10.3.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2441
[https://github.com/apache/kafka/pull/2441]

> StreamsKafkaClient drops security configs
> -
>
> Key: KAFKA-4700
> URL: https://issues.apache.org/jira/browse/KAFKA-4700
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.3.0, 0.10.2.0
>
>
> `StreamsKafkaClient` takes a `StreamsConfig` and calls `values` on it to 
> create a `ChannelBuilder`. `values` creates a `Map` with parsed values for 
> _defined_ config names. `StreamsConfig` doesn't define security settings 
> itself and hence the security configs are dropped.
> For `KafkaProducer` and `KafkaConsumer` used by Streams, there is some code 
> that gets the original configs (using `originals` instead of `values`) and 
> passes them to the `KafkaConsumer` and `KafkaProducer` constructors (both of 
> which define the security configs).
> The suggested solution is to create a config definition for 
> `StreamsKafkaClient` thas includes a copy of the `StreamsConfig` definition 
> combined with the security configs definitions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2433: Update kafka-run-class.bat

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2433


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2454: MINOR: Change logging for ignored maybeAddMetric fr...

2017-01-26 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2454

MINOR: Change logging for ignored maybeAddMetric from debug to trace



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
KMinor-trace-logging-add-metrics-twice

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2454.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2454


commit 265f357cf0bad4d415a87ffce20bb2de15951499
Author: Guozhang Wang 
Date:   2017-01-26T21:59:01Z

change from debug to trace




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2017-01-26 Thread Dave Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840494#comment-15840494
 ] 

Dave Thomas commented on KAFKA-2729:


Same with us, on 0.10.1.1 (following upgrade from 0.10.1.0 where we saw the 
same issue).

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue. Possibly it's related to: 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2436: HOTFIX: Consumer offsets not properly loaded on co...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2436


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2453: MINOR: Escape '<' and '>' symbols in quickstart st...

2017-01-26 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2453

MINOR: Escape '<' and '>' symbols in quickstart streams code snippet

This was missing from [an earlier 
PR](https://github.com/apache/kafka/pull/2247) that escaped these symbols in 
another section of the doc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
doc/escape_lt_gt_in_streams_code

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2453.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2453


commit c554972144fab1e927ce85b36a33f96ebc214fd6
Author: Vahid Hashemian 
Date:   2016-12-13T21:39:19Z

MINOR: Escape '<' and '>' symbols in quickstart streams code snippet




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4665) Inconsistent handling of non-existing topics in offset fetch handling

2017-01-26 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840346#comment-15840346
 ] 

Vahid Hashemian commented on KAFKA-4665:


{quote}
Note also that currently the consumer raises {{KafkaException}} when it 
encounters an UNKNOWN_TOPIC_OR_PARTITION error in the offset fetch response, 
which is inconsistent with how we usually handle this error. This probably 
doesn't cause any problems currently only because of the inconsistency 
mentioned in the first paragraph above.
{quote}

[~hachikuji] Do you mean we should avoid raising exceptions for partition level 
errors? 
([here|https://github.com/apache/kafka/blob/254e3b77d656a610f19efd1124802e073dfda4b8/core/src/main/scala/kafka/admin/AdminClient.scala#L130])

> Inconsistent handling of non-existing topics in offset fetch handling
> -
>
> Key: KAFKA-4665
> URL: https://issues.apache.org/jira/browse/KAFKA-4665
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.10.3.0
>
>
> For version 0 of the offset fetch API, the broker returns 
> UNKNOWN_TOPIC_OR_PARTITION for any topics/partitions which do not exist at 
> the time of fetching. In later versions, we skip this check. We do, however, 
> continue to return UNKNOWN_TOPIC_OR_PARTITION for authorization errors (i.e. 
> if the principal does not have Describe access to the corresponding topic). 
> We should probably make this behavior consistent across versions.
> Note also that currently the consumer raises {{KafkaException}} when it 
> encounters an UNKNOWN_TOPIC_OR_PARTITION error in the offset fetch response, 
> which is inconsistent with how we usually handle this error. This probably 
> doesn't cause any problems currently only because of the inconsistency 
> mentioned in the first paragraph above.
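
For reference, one place this surfaces in the Java consumer is {{committed()}}, 
which issues an offset fetch under the covers. A sketch, assuming an already 
configured {{KafkaConsumer<String, String> consumer}} and a hypothetical topic:

{code}
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

// Assumes a configured KafkaConsumer<String, String> named `consumer`.
TopicPartition tp = new TopicPartition("nonexistent-topic", 0); // hypothetical
try {
    // Triggers an OffsetFetchRequest for this partition.
    OffsetAndMetadata committed = consumer.committed(tp);
    System.out.println(committed);
} catch (KafkaException e) {
    // Per the description above, an UNKNOWN_TOPIC_OR_PARTITION error in the
    // offset fetch response is currently raised here as a KafkaException.
    e.printStackTrace();
}
{code}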



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4665) Inconsistent handling of non-existing topics in offset fetch handling

2017-01-26 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4665 started by Vahid Hashemian.
--
> Inconsistent handling of non-existing topics in offset fetch handling
> -
>
> Key: KAFKA-4665
> URL: https://issues.apache.org/jira/browse/KAFKA-4665
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.10.3.0
>
>
> For version 0 of the offset fetch API, the broker returns 
> UNKNOWN_TOPIC_OR_PARTITION for any topics/partitions which do not exist at 
> the time of fetching. In later versions, we skip this check. We do, however, 
> continue to return UNKNOWN_TOPIC_OR_PARTITION for authorization errors (i.e. 
> if the principal does not have Describe access to the corresponding topic). 
> We should probably make this behavior consistent across versions.
> Note also that currently the consumer raises {{KafkaException}} when it 
> encounters an UNKNOWN_TOPIC_OR_PARTITION error in the offset fetch response, 
> which is inconsistent with how we usually handle this error. This probably 
> doesn't cause any problems currently only because of the inconsistency 
> mentioned in the first paragraph above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.2-jdk7 #40

2017-01-26 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4699; Invoke producer callbacks before completing the future

--
[...truncated 4065 lines...]

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED


[jira] [Commented] (KAFKA-4517) Remove kafka-consumer-offset-checker.sh script since already deprecated in Kafka 9

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840259#comment-15840259
 ] 

Ewen Cheslack-Postava commented on KAFKA-4517:
--

Oh, I missed the PR, but yeah, either retitle or adjust which one we leave open 
as a blocker for 0.11.0.0. And yes, we'd want to update it to remove 
everything in one go. (You can just add a commit that does it; it'll get 
squashed during the merge process.)

> Remove kafka-consumer-offset-checker.sh script since already deprecated in 
> Kafka 9
> --
>
> Key: KAFKA-4517
> URL: https://issues.apache.org/jira/browse/KAFKA-4517
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0
>Reporter: Jeff Widman
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>
> Kafka 9 deprecated kafka-consumer-offset-checker.sh 
> (kafka.tools.ConsumerOffsetChecker) in favor of kafka-consumer-groups.sh 
> (kafka.admin.ConsumerGroupCommand). 
> Since this was deprecated in 9, and the full functionality of the old script 
> appears to be available in the new script, can we remove the old shell script 
> in 10? 
> From an Ops perspective, it's confusing when I'm trying to check consumer 
> offsets that I open the bin directory, and see a script that seems to do 
> exactly what I want, only to later discover that I'm not supposed to use it. 
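
For anyone landing here from the Ops side, the replacement invocation looks 
roughly like the following; the group name is illustrative:

{code}
# Deprecated since 0.9:
bin/kafka-consumer-offset-checker.sh --zookeeper localhost:2181 --group my-group
# Replacement:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group my-group
{code}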



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4667) Connect should create internal topics

2017-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4667:
-
Summary: Connect should create internal topics  (was: Connect (and Stream?) 
should create internal topics with availability or consistency properly set)

> Connect should create internal topics
> -
>
> Key: KAFKA-4667
> URL: https://issues.apache.org/jira/browse/KAFKA-4667
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Emanuele Cesena
>
> I'm reporting this as an issue but in fact it requires more investigation 
> (which unfortunately I'm not able to perform at this time).
> Repro steps:
> - configure Kafka for consistency, for example:
> default.replication.factor=3
> min.insync.replicas=2
> unclean.leader.election.enable=false
> - run Connect for the first time, which should create its internal topics
> I believe these topics are created with the broker's default, in particular:
> min.insync.replicas=2
> unclean.leader.election.enable=false
> but Connect doesn't produce with acks=all, which in turn may cause the 
> cluster to go into a bad state (see, e.g., 
> https://issues.apache.org/jira/browse/KAFKA-4666).
> The solution would be to force availability mode, i.e. force:
> unclean.leader.election.enable=true
> when creating the Connect topics, or vice versa: detect availability vs 
> consistency mode and turn on acks=all if needed.
> I assume the same happens with other Kafka-based services such as Streams.
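
As a stopgap until Connect creates its topics itself, the worker's internal 
producer can be pushed toward the consistency side with a "producer."-prefixed 
override in the worker config. A sketch, with illustrative values:

{code}
# connect-distributed.properties (sketch)
bootstrap.servers=localhost:9092
# Keys prefixed with "producer." are passed through to the worker's producer:
producer.acks=all
{code}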



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1882

2017-01-26 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4699; Invoke producer callbacks before completing the future

--
[...truncated 4064 lines...]

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException STARTED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists STARTED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists STARTED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath STARTED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException STARTED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException STARTED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException STARTED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists STARTED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] STARTED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] STARTED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] STARTED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] STARTED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] STARTED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] STARTED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] STARTED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] STARTED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED


Re: Trying to understand design decision about producer ack and min.insync.replicas

2017-01-26 Thread Ewen Cheslack-Postava
The acks setting for the producer doesn't affect the final durability
guarantees. These are still enforced by the replication and min ISR
settings. Instead, the ack setting just lets the producer control how
durable the write is before *that producer* can consider the write
"complete", i.e. before it gets an ack.

-Ewen

On Tue, Jan 24, 2017 at 12:46 PM, Luciano Afranllie <
listas.luaf...@gmail.com> wrote:

> Hi everybody
>
> I am trying to understand why Kafka let each individual producer, on a
> connection per connection basis, choose the tradeoff between availability
> and durability, honoring min.insync.replicas value only if producer uses
> ack=all.
>
> I mean, for a single topic, cluster administrators can't enforce messages
> to be stores in a minimum number of replicas without coordinating with all
> producers to that topic so all of them use ack=all.
>
> Is there something that I am missing? Is there any other strategy to
> overcome this situation?
>
> Regards
> Luciano
>
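
To make the division of labor concrete: the broker/topic side pins 
min.insync.replicas, while each producer opts into waiting for it via acks. A 
minimal sketch; the topic name and server address are hypothetical:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksAllSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all: the send succeeds only once the full ISR has the write,
        // and the broker rejects it (NotEnoughReplicasException) when the ISR
        // has shrunk below the topic's min.insync.replicas. With acks=0 or
        // acks=1 the same write is accepted regardless, which is exactly the
        // tradeoff discussed above.
        props.put("acks", "all");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}
{code}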


Re: [DISCUSS] KIP-112: Handle disk failure for JBOD

2017-01-26 Thread Dong Lin
Hey Colin,

Thanks much for the comment. Please see me comment inline.

On Thu, Jan 26, 2017 at 10:15 AM, Colin McCabe  wrote:

> On Wed, Jan 25, 2017, at 13:50, Dong Lin wrote:
> > Hey Colin,
> >
> > Good point! Yeah we have actually considered and tested this solution,
> > which we call one-broker-per-disk. It would work and should require no
> > major change in Kafka as compared to this JBOD KIP. So it would be a good
> > short term solution.
> >
> > But it has a few drawbacks which makes it less desirable in the long
> > term.
> > Assume we have 10 disks on a machine. Here are the problems:
>
> Hi Dong,
>
> Thanks for the thoughtful reply.
>
> >
> > 1) Our stress test result shows that one-broker-per-disk has 15% lower
> > throughput
> >
> > 2) The controller would need to send 10X as many LeaderAndIsrRequest,
> > MetadataUpdateRequest and StopReplicaRequest RPCs. This increases the
> > burden on the controller, which can be the performance bottleneck.
>
> Maybe I'm misunderstanding something, but there would not be 10x as many
> StopReplicaRequest RPCs, would there?  The other requests would increase
> 10x, but from a pretty low base, right?  We are not reassigning
> partitions all the time, I hope (or else we have bigger problems...)
>

I think the controller will group StopReplicaRequests per broker and send
only one StopReplicaRequest to a broker during controlled shutdown. Anyway,
we don't have to worry about this if we agree that other requests will
increase by 10X. One MetadataRequest is sent to each broker in the cluster
every time there is a leadership change. I am not sure this is a real
problem, but in theory this makes the overhead complexity O(number of
brokers) and may be a concern in the future. Ideally we should avoid it.


>
> >
> > 3) Less efficient use of physical resources on the machine. The number of
> > sockets on each machine will increase by 10X. The number of connections
> > between any two machines will increase by 100X.
> >
> > 4) A less efficient way to manage memory and quotas.
> >
> > 5) Rebalancing between disks/brokers on the same machine will be less
> > efficient and less flexible. A broker has to read data from another broker
> > on the same machine via a socket. It is also harder to do automatic load
> > balancing between disks on the same machine in the future.
> >
> > I will put this and the explanation in the rejected alternatives section.
> > I have a few questions:
> >
> > - Can you explain why this solution can help avoid the scalability
> > bottleneck? I actually think it will exacerbate the scalability problem
> > due to 2) above.
> > - Why can we push more RPCs with this solution?
>
> To really answer this question we'd have to take a deep dive into the
> locking of the broker and figure out how effectively it can parallelize
> truly independent requests.  Almost every multithreaded process is going
> to have shared state, like shared queues or shared sockets, that is
> going to make scaling less than linear when you add disks or processors.
>  (And clearly, another option is to improve that scalability, rather
> than going multi-process!)
>

Yeah, I also think it is better to improve scalability inside the Kafka code if
possible. I am not sure we currently have any scalability issue inside
Kafka that cannot be addressed without going multi-process.


>
> > - It is true that a garbage collection in one broker would not affect
> > others. But that is after every broker only uses 1/10 of the memory. Can
> > we be sure that this will actually help performance?
>
> The big question is, how much memory do Kafka brokers use now, and how
> much will they use in the future?  Our experience in HDFS was that once
> you start getting more than 100-200GB Java heap sizes, full GCs start
> taking minutes to finish when using the standard JVMs.  That alone is a
> good reason to go multi-process or consider storing more things off the
> Java heap.
>

I see. Now I agree one-broker-per-disk should be more efficient in terms of
GC since each broker probably needs less than 1/10 of the memory available
on a typical machine nowadays. I will remove this from the reasons for
rejection.


>
> Disk failure is the "easy" case.  The "hard" case, which is
> unfortunately also the much more common case, is disk misbehavior.
> Towards the end of their lives, disks tend to start slowing down
> unpredictably.  Requests that would have completed immediately before
> start taking 20, 100, 500 milliseconds.  Some files may be readable and
> other files may not be.  System calls hang, sometimes forever, and the
> Java process can't abort them, because the hang is in the kernel.  It is
> not fun when threads are stuck in "D state"
> http://stackoverflow.com/questions/20423521/process-perminan
> tly-stuck-on-d-state
> .  Even kill -9 cannot abort the thread then.  Fortunately, this is
> rare.
>

I agree it is a harder problem and it is rare. We probably don't have to
worry about it in this KIP 

[jira] [Updated] (KAFKA-3429) Remove Serdes needed for repartitioning in KTable stateful operations

2017-01-26 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-3429:
---
Fix Version/s: (was: 0.10.2.0)

> Remove Serdes needed for repartitioning in KTable stateful operations
> -
>
> Key: KAFKA-3429
> URL: https://issues.apache.org/jira/browse/KAFKA-3429
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie++
>
> Currently in KTable aggregate operations where a repartition is possibly 
> needed since the aggregation key may not be the same as the original primary 
> key, we require the users to provide serdes (default to configured ones) for 
> read / write to the internally created re-partition topic. However, these are 
> not necessary since for all KTable instances either generated from the topics 
> directly:
> {code}table = builder.table(...){code}
> or from aggregation operations:
> {code}table = stream.aggregate(...){code}
> There are already serdes provided for materializing the data, and hence the 
> same serde can be re-used when the resulted KTable is involved in future 
> aggregation operations. For example:
> {code}
> table1 = stream.aggregateByKey(serde);
> table2 = table1.aggregate(aggregator, selector, originalSerde, 
> aggregateSerde);
> {code}
> We would not need to require users to specify the "originalSerde" in 
> table1.aggregate since it could always reuse the "serde" from 
> stream.aggregateByKey, which is used to materialize the table1 object.
> In order to get rid of it, implementation-wise we need to carry the serde 
> information along with the KTableImpl instance in order to re-use it in a 
> future operation that requires repartitioning.
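
Implementation-wise, the idea is roughly to have the table object remember its 
materialization serdes. A hypothetical stand-in, not the actual KTableImpl:

{code}
import org.apache.kafka.common.serialization.Serde;

// Hypothetical sketch: capture the serdes used to materialize the table so a
// later operation that repartitions can reuse them for the internal topic.
class DemoTable<K, V> {
    final Serde<K> keySerde;   // reused to write/read the repartition topic
    final Serde<V> valueSerde;

    DemoTable(Serde<K> keySerde, Serde<V> valueSerde) {
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }
}
{code}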



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4649) Improve test coverage GlobalStateManagerImpl

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4649:
--
Status: Patch Available  (was: Open)

> Improve test coverage GlobalStateManagerImpl
> 
>
> Key: KAFKA-4649
> URL: https://issues.apache.org/jira/browse/KAFKA-4649
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4649) Improve test coverage GlobalStateManagerImpl

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840168#comment-15840168
 ] 

ASF GitHub Bot commented on KAFKA-4649:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2452

KAFKA-4649: Improve test coverage GlobalStateManagerImpl

Add coverage for exception paths in `initialize()`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4649

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2452.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2452


commit d21209556a1e4fec7bf6f9340239e03cc64e2437
Author: Damian Guy 
Date:   2017-01-26T18:19:47Z

improve test coverage




> Improve test coverage GlobalStateManagerImpl
> 
>
> Key: KAFKA-4649
> URL: https://issues.apache.org/jira/browse/KAFKA-4649
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2452: KAFKA-4649: Improve test coverage GlobalStateManag...

2017-01-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2452

KAFKA-4649: Improve test coverage GlobalStateManagerImpl

Add coverage for exception paths in `initialize()`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4649

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2452.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2452


commit d21209556a1e4fec7bf6f9340239e03cc64e2437
Author: Damian Guy 
Date:   2017-01-26T18:19:47Z

improve test coverage




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-4649) Improve test coverage GlobalStateManagerImpl

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-4649:
-

Assignee: Damian Guy

> Improve test coverage GlobalStateManagerImpl
> 
>
> Key: KAFKA-4649
> URL: https://issues.apache.org/jira/browse/KAFKA-4649
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-112: Handle disk failure for JBOD

2017-01-26 Thread Colin McCabe
On Wed, Jan 25, 2017, at 13:50, Dong Lin wrote:
> Hey Colin,
> 
> Good point! Yeah we have actually considered and tested this solution,
> which we call one-broker-per-disk. It would work and should require no
> major change in Kafka as compared to this JBOD KIP. So it would be a good
> short term solution.
> 
> But it has a few drawbacks which makes it less desirable in the long
> term.
> Assume we have 10 disks on a machine. Here are the problems:

Hi Dong,

Thanks for the thoughtful reply.

> 
> 1) Our stress test result shows that one-broker-per-disk has 15% lower
> throughput
> 
> 2) The controller would need to send 10X as many LeaderAndIsrRequest,
> MetadataUpdateRequest and StopReplicaRequest RPCs. This increases the
> burden on the controller, which can be the performance bottleneck.

Maybe I'm misunderstanding something, but there would not be 10x as many
StopReplicaRequest RPCs, would there?  The other requests would increase
10x, but from a pretty low base, right?  We are not reassigning
partitions all the time, I hope (or else we have bigger problems...)

> 
> 3) Less efficient use of physical resources on the machine. The number of
> sockets on each machine will increase by 10X. The number of connections
> between any two machines will increase by 100X.
> 
> 4) A less efficient way to manage memory and quotas.
> 
> 5) Rebalancing between disks/brokers on the same machine will be less
> efficient and less flexible. A broker has to read data from another broker
> on the same machine via a socket. It is also harder to do automatic load
> balancing between disks on the same machine in the future.
> 
> I will put this and the explanation in the rejected alternatives section.
> I have a few questions:
> 
> - Can you explain why this solution can help avoid the scalability
> bottleneck? I actually think it will exacerbate the scalability problem
> due to 2) above.
> - Why can we push more RPCs with this solution?

To really answer this question we'd have to take a deep dive into the
locking of the broker and figure out how effectively it can parallelize
truly independent requests.  Almost every multithreaded process is going
to have shared state, like shared queues or shared sockets, that is
going to make scaling less than linear when you add disks or processors.
 (And clearly, another option is to improve that scalability, rather
than going multi-process!)

> - It is true that a garbage collection in one broker would not affect
> others. But that is after every broker only uses 1/10 of the memory. Can
> we be sure that this will actually help performance?

The big question is, how much memory do Kafka brokers use now, and how
much will they use in the future?  Our experience in HDFS was that once
you start getting more than 100-200GB Java heap sizes, full GCs start
taking minutes to finish when using the standard JVMs.  That alone is a
good reason to go multi-process or consider storing more things off the
Java heap.

Disk failure is the "easy" case.  The "hard" case, which is
unfortunately also the much more common case, is disk misbehavior. 
Towards the end of their lives, disks tend to start slowing down
unpredictably.  Requests that would have completed immediately before
start taking 20, 100, 500 milliseconds.  Some files may be readable and
other files may not be.  System calls hang, sometimes forever, and the
Java process can't abort them, because the hang is in the kernel.  It is
not fun when threads are stuck in "D state"
http://stackoverflow.com/questions/20423521/process-perminantly-stuck-on-d-state
.  Even kill -9 cannot abort the thread then.  Fortunately, this is
rare.

Another approach we should consider is for Kafka to implement its own
storage layer that would stripe across multiple disks.  This wouldn't
have to be done at the block level, but could be done at the file level.
 We could use consistent hashing to determine which disks a file should
end up on, for example.
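
To illustrate the file-level idea, here is a toy consistent-hash ring mapping 
file names to disks. It is purely hypothetical, and a real implementation 
would use a stronger hash than String.hashCode():

{code}
import java.util.Arrays;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash ring: each disk gets several virtual nodes so that
// adding or removing a disk only remaps a small fraction of files.
class DiskRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    DiskRing(List<String> disks, int virtualNodesPerDisk) {
        for (String disk : disks)
            for (int v = 0; v < virtualNodesPerDisk; v++)
                ring.put((disk + "#" + v).hashCode(), disk);
    }

    // First ring position at or after the file's hash, wrapping at the end.
    String diskFor(String fileName) {
        SortedMap<Integer, String> tail = ring.tailMap(fileName.hashCode());
        return ring.get(tail.isEmpty() ? ring.firstKey() : tail.firstKey());
    }
}

// e.g. new DiskRing(Arrays.asList("/mnt/d0", "/mnt/d1", "/mnt/d2"), 16)
//          .diskFor("00000000000000368769.log");
{code}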

best,
Colin

> 
> Thanks,
> Dong
> 
> On Wed, Jan 25, 2017 at 11:34 AM, Colin McCabe 
> wrote:
> 
> > Hi Dong,
> >
> > Thanks for the writeup!  It's very interesting.
> >
> > I apologize in advance if this has been discussed somewhere else.  But I
> > am curious if you have considered the solution of running multiple
> > brokers per node.  Clearly there is a memory overhead with this solution
> > because of the fixed cost of starting multiple JVMs.  However, running
> > multiple JVMs would help avoid scalability bottlenecks.  You could
> > probably push more RPCs per second, for example.  A garbage collection
> > in one broker would not affect the others.  It would be interesting to
> > see this considered in the "alternate designs" design, even if you end
> > up deciding it's not the way to go.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Jan 12, 2017, at 10:46, Dong Lin wrote:
> > > Hi all,
> > >
> > > We created KIP-112: Handle disk failure for JBOD. Please find the KIP
> > > wiki
> > > in the link 

[jira] [Commented] (KAFKA-4640) Improve Streams unit test coverage

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840143#comment-15840143
 ] 

ASF GitHub Bot commented on KAFKA-4640:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2451

KAFKA-4640: Improve test coverage StreamTask

Provide test coverage for exception paths in: `schedule()`, 
`closeTopology()`, and `punctuate()`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2451.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2451


commit 1f902084d0eeaf27a7fad6a1be26c42c2042599c
Author: Damian Guy 
Date:   2017-01-26T18:03:51Z

improve test coverage




> Improve Streams unit test coverage
> --
>
> Key: KAFKA-4640
> URL: https://issues.apache.org/jira/browse/KAFKA-4640
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.3.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
> Attachments: streams-coverage.zip
>
>
> There are some important methods in streams that are lacking good unit-test 
> coverage. Whilst we shouldn't strive to get 100% coverage, we should do our 
> best to ensure sure that all important code paths are covered by unit-tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4648) Improve test coverage StreamTask

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4648:
--
Status: Patch Available  (was: Open)

> Improve test coverage StreamTask
> 
>
> Key: KAFKA-4648
> URL: https://issues.apache.org/jira/browse/KAFKA-4648
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths in {{schedule}}, {{closeTopology}}, {{punctuate}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2451: KAFKA-4640: Improve test coverage StreamTask

2017-01-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2451

KAFKA-4640: Improve test coverage StreamTask

Provide test coverage for exception paths in: `schedule()`, 
`closeTopology()`, and `punctuate()`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2451.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2451


commit 1f902084d0eeaf27a7fad6a1be26c42c2042599c
Author: Damian Guy 
Date:   2017-01-26T18:03:51Z

improve test coverage




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4699) Transient Failure PlaintextConsumerTest.testInterceptros

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840141#comment-15840141
 ] 

ASF GitHub Bot commented on KAFKA-4699:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2440


> Transient Failure PlaintextConsumerTest.testInterceptros
> 
>
> Key: KAFKA-4699
> URL: https://issues.apache.org/jira/browse/KAFKA-4699
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Ismael Juma
> Fix For: 0.10.2.0
>
>
> Started seeing a bunch of these. Possibly related to KAFKA-4633?
> {code}
> kafka.api.PlaintextConsumerTest > testInterceptors FAILED
> java.lang.AssertionError: expected:<10> but was:<1>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> kafka.api.PlaintextConsumerTest.testInterceptors(PlaintextConsumerTest.scala:858)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2440: KAFKA-4699: Invoke producer callbacks before compl...

2017-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2440


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4699) Transient Failure PlaintextConsumerTest.testInterceptros

2017-01-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4699.

Resolution: Fixed

Issue resolved by pull request 2440
[https://github.com/apache/kafka/pull/2440]

> Transient Failure PlaintextConsumerTest.testInterceptros
> 
>
> Key: KAFKA-4699
> URL: https://issues.apache.org/jira/browse/KAFKA-4699
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Ismael Juma
> Fix For: 0.10.2.0
>
>
> Started seeing a bunch of these. Possibly related to KAFKA-4633?
> {code}
> kafka.api.PlaintextConsumerTest > testInterceptors FAILED
> java.lang.AssertionError: expected:<10> but was:<1>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> kafka.api.PlaintextConsumerTest.testInterceptors(PlaintextConsumerTest.scala:858)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4648) Improve test coverage StreamTask

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-4648:
-

Assignee: Damian Guy

> Improve test coverage StreamTask
> 
>
> Key: KAFKA-4648
> URL: https://issues.apache.org/jira/browse/KAFKA-4648
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths in {{schedule}}, {{closeTopology}}, {{punctuate}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-115: Enforce offsets.topic.replication.factor

2017-01-26 Thread Colin McCabe
+1 (non-binding)



On Wed, Jan 25, 2017, at 16:34, Onur Karaman wrote:
> I'd like to start the vote for KIP-115: Enforce
> offsets.topic.replication.factor
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-115%3A+Enforce+offsets.topic.replication.factor
> 
> - Onur


Re: [DISCUSS] KIP-115: Enforce offsets.topic.replication.factor

2017-01-26 Thread Colin McCabe
Hi all,

+1 (non-binding) for KIP-115.

On Thu, Jan 26, 2017, at 04:26, Stevo Slavić wrote:
> If I understood correctly, this KIP is trying to solve the problem of
> offsets.topic.replication.factor not being enforced, particularly in the
> context of "when you have clients or tooling running as the cluster is
> getting set up". Assuming this problem has been observed in production,
> not just under testing conditions, would it make sense to introduce an
> additional property: the minimum number of alive brokers before the
> offsets topic is allowed to be created?

It's an interesting idea, but... is there a user use-case for a property
like this?  I'm having a hard time thinking of one, but maybe I missed
something.

cheers,
Colin

> 
> Currently offsets.topic.replication.factor serves that purpose, so with
> offsets.topic.replication.factor set to 3 it's enough to have just 3
> brokers up for the offsets topic to be created. All replicas of all (by
> default 50) partitions of this topic would then be spread over just those
> 3 brokers, while the eventual cluster might be much larger and would
> benefit from a wider spread of consumer offsets topic partition
> leadership.
> 
> One can achieve a wider spread later, manually, but the narrow spread
> would first have to be detected, and then the provided CLI/scripts used
> to change the replica assignment. IMO it would be better if the desired
> spread could be configured up front, even if only indirectly through a
> minimum number of alive brokers. If not overridden in server.properties,
> this new property could default to offsets.topic.replication.factor.
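
For illustration, the proposal amounts to roughly the following in
server.properties (min.alive.brokers is a hypothetical name, not an
existing Kafka property):

{code}
# Existing setting: replication factor for the internal offsets topic.
offsets.topic.replication.factor=3

# Hypothetical new setting: defer creation of the offsets topic until at
# least this many brokers are alive, so its partitions spread more widely.
# Would default to offsets.topic.replication.factor when unset.
min.alive.brokers=6
{code}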
> 
> I've been bitten by the problem of offsets.topic.replication.factor not
> being enforced, but only in testing: in integration tests it was almost
> unpredictable when the offsets topic was ready and the test cluster
> initialized, so we would get lots of false failures and unstable tests,
> until we eventually found ways to fully initialize the test cluster and
> reached predictable, deterministic test behavior. If others have also
> observed this enforcement problem only in their tests, then I don't like
> the KIP's proposed change of setting offsets.topic.replication.factor to
> 1 by default. I understand the backward compatibility goals, but I can
> imagine late-discovered production issues as a consequence of this
> change, and I wouldn't want to trade a higher probability of production
> issues for testing convenience.
> 
> The current Kafka documentation has a nice note about
> offsets.topic.replication.factor and the related behavior. A note about
> the new default would have to be a bold red warning in the docs, and
> every broker should log a proper warning if
> offsets.topic.replication.factor is left at the proposed new default of 1.
> 
> Kind regards,
> Stevo Slavic.
> 
> On Thu, Jan 26, 2017 at 8:43 AM, James Cheng 
> wrote:
> 
> >
> > > On Jan 25, 2017, at 9:26 PM, Joel Koshy  wrote:
> > >
> > > already voted, but one thing worth considering (since this KIP speaks of
> > > *enforcement*) is desired behavior if the topic already exists and the
> > > config != existing RF.
> > >
> >
> > Yeah, I'm curious about this too.
> >
> > -James
> >
> > > On Wed, Jan 25, 2017 at 4:30 PM, Dong Lin  wrote:
> > >
> > >> +1
> > >>
> > >> On Wed, Jan 25, 2017 at 4:22 PM, Ismael Juma  wrote:
> > >>
> > >>> An important question is if this needs to wait for a major release or
> > >> not.
> > >>>
> > >>> Ismael
> > >>>
> > >>> On Thu, Jan 26, 2017 at 12:19 AM, Ismael Juma 
> > wrote:
> > >>>
> >  +1 from me too.
> > 
> >  Ismael
> > 
> >  On Thu, Jan 26, 2017 at 12:07 AM, Ewen Cheslack-Postava <
> > >>> e...@confluent.io
> > > wrote:
> > 
> > > +1
> > >
> > > Since this is an unusual one, I think it's worth pointing out that
> > the
> > >>> KIP
> > > notes it is really a bug fix, but since it has compatibility
> > >>> implications
> > > the KIP was worth it. It was a sort of intentional bug, but confusing
> > >>> and
> > > dangerous.
> > >
> > > Seems important to fix this ASAP since people are hitting this in
> > >>> practice
> > > and would have to go out of their way to set up monitoring to catch
> > >> the
> > > issue.
> > >
> > > -Ewen
> > >
> > > On Wed, Jan 25, 2017 at 4:02 PM, Jason Gustafson  > >
> > > wrote:
> > >
> > >> +1 from me. The current behavior seems both surprising and
> > >> dangerous.
> > >>
> > >> -Jason
> > >>
> > >> On Wed, Jan 25, 2017 at 3:58 PM, Onur Karaman <
> > >> onurkaraman.apa...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hey everyone.
> > >>>
> > >>> I made a bug-fix KIP-115 to enforce offsets.topic.replication.
> > >>> factor:
> > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> 

[jira] [Commented] (KAFKA-4647) Improve test coverage of GlobalStreamThread

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840055#comment-15840055
 ] 

ASF GitHub Bot commented on KAFKA-4647:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2450

KAFKA-4647: Improve test coverage of GlobalStreamThread

Add a test to ensure a `StreamsException` is thrown when an exception other 
than `StreamsException` is caught
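
For illustration, the wrap-and-rethrow pattern under test looks roughly
like this sketch (self-contained stand-ins, not the actual
GlobalStreamThread code):

{code}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import org.junit.Test;

public class WrapExceptionTestSketch {

    // Stand-in for org.apache.kafka.streams.errors.StreamsException.
    static class StreamsException extends RuntimeException {
        StreamsException(Throwable cause) { super(cause); }
    }

    // Anything that is not already a StreamsException gets wrapped in one.
    static void initialize(Runnable body) {
        try {
            body.run();
        } catch (StreamsException e) {
            throw e;
        } catch (RuntimeException e) {
            throw new StreamsException(e);
        }
    }

    @Test
    public void shouldWrapUnexpectedExceptions() {
        try {
            initialize(() -> { throw new IllegalStateException("boom"); });
            fail("expected StreamsException");
        } catch (StreamsException e) {
            assertTrue(e.getCause() instanceof IllegalStateException);
        }
    }
}
{code}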

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4647

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2450.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2450






> Improve test coverage of GlobalStreamThread
> ---
>
> Key: KAFKA-4647
> URL: https://issues.apache.org/jira/browse/KAFKA-4647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception path in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4647) Improve test coverage of GlobalStreamThread

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4647:
--
Status: Patch Available  (was: Open)

> Improve test coverage of GlobalStreamThread
> ---
>
> Key: KAFKA-4647
> URL: https://issues.apache.org/jira/browse/KAFKA-4647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception path in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2450: KAFKA-4647: Improve test coverage of GlobalStreamT...

2017-01-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2450

KAFKA-4647: Improve test coverage of GlobalStreamThread

Add a test to ensure a `StreamsException` is thrown when an exception other 
than `StreamsException` is caught

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4647

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2450.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2450






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840024#comment-15840024
 ] 

Ismael Juma commented on KAFKA-4676:


[~hachikuji], is this something that we need to fix for 0.10.2.0?

> Kafka consumers gets stuck for some partitions
> --
>
> Key: KAFKA-4676
> URL: https://issues.apache.org/jira/browse/KAFKA-4676
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Vishal Shukla
>Priority: Critical
>  Labels: consumer, reliability
> Attachments: restart-node2-consumer-node-1.log, 
> restart-node2-consumer-node-2.log, stuck-case2.log, 
> stuck-consumer-node-1.log, stuck-consumer-node-2.log, 
> stuck-topic-thread-dump.log
>
>
> We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
> where Kafka consumers suddenly get stuck for some partitions.
> Thread dump attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4647) Improve test coverage of GlobalStreamThread

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-4647:
-

Assignee: Damian Guy

> Improve test coverage of GlobalStreamThread
> ---
>
> Key: KAFKA-4647
> URL: https://issues.apache.org/jira/browse/KAFKA-4647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception path in {{initialize}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4702) Parametrize streams benchmarks to run at scale

2017-01-26 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4702 started by Eno Thereska.
---
> Parametrize streams benchmarks to run at scale
> --
>
> Key: KAFKA-4702
> URL: https://issues.apache.org/jira/browse/KAFKA-4702
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.3.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> The streams benchmarks (in SimpleBenchmark.java, triggered through 
> kafka/tests/kafkatest/benchmarks/streams/streams_simple_benchmark_test.py) 
> run as a single instance against a simple 1-broker Kafka cluster. 
> We need to parametrize the tests so they can run at scale, e.g., with 10-100 
> KafkaStreams instances and a similar number of brokers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4702) Parametrize streams benchmarks to run at scale

2017-01-26 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-4702:
---

 Summary: Parametrize streams benchmarks to run at scale
 Key: KAFKA-4702
 URL: https://issues.apache.org/jira/browse/KAFKA-4702
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.3.0
Reporter: Eno Thereska
Assignee: Eno Thereska


The streams benchmarks (in SimpleBenchmark.java, triggered through 
kafka/tests/kafkatest/benchmarks/streams/streams_simple_benchmark_test.py) run 
as a single instance against a simple 1-broker Kafka cluster. 

We need to parametrize the tests so they can run at scale, e.g., with 10-100 
KafkaStreams instances and a similar number of brokers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4557) ConcurrentModificationException in KafkaProducer event loop

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839994#comment-15839994
 ] 

ASF GitHub Bot commented on KAFKA-4557:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2449

KAFKA-4557: Handle Producer.send correctly in expiry callbacks

When iterating the deque to expire record batches, delay the actual 
completion of each batch until the iteration is complete, since callbacks 
invoked during expiry may send more records and modify the deque, resulting 
in a ConcurrentModificationException in the iterator.
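
The collect-then-complete pattern described above, sketched with
simplified stand-in types (not the actual RecordAccumulator code):
expired batches are removed while iterating, but their callbacks run only
after the iteration finishes, so a callback that sends more records can
safely append to the deque.

{code}
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

public class ExpirySketch {

    static class Batch {
        final long createdMs;
        Runnable callback = () -> { };  // may call send(), i.e. deque.add(...)
        Batch(long createdMs) { this.createdMs = createdMs; }
    }

    static List<Batch> abortExpired(Deque<Batch> deque, long nowMs, long timeoutMs) {
        List<Batch> expired = new ArrayList<>();
        Iterator<Batch> it = deque.iterator();
        while (it.hasNext()) {              // no callbacks fire in this loop
            Batch b = it.next();
            if (nowMs - b.createdMs > timeoutMs) {
                it.remove();
                expired.add(b);
            }
        }
        for (Batch b : expired)             // safe: iteration has finished,
            b.callback.run();               // callbacks may mutate the deque
        return expired;
    }
}
{code}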

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4557

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2449.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2449


commit dae191b0dc14bed39b8f77589c74a2394ed4ea48
Author: Rajini Sivaram 
Date:   2017-01-26T16:23:49Z

KAFKA-4557: Handle Producer.send correctly in expiry callbacks




> ConcurrentModificationException in KafkaProducer event loop
> ---
>
> Key: KAFKA-4557
> URL: https://issues.apache.org/jira/browse/KAFKA-4557
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
>Reporter: Sergey Alaev
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: reliability
> Fix For: 0.10.2.0
>
>
> Under heavy load, the Kafka producer can stop publishing events. Logs below.
> [2016-12-19T15:01:28.779Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [NetworkClient] [] [] [] [DEBUG]: Disconnecting from node 2 due to 
> request timeout.
> [2016-12-19T15:01:28.793Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [KafkaProducerClient] [] [] [1B2M2Y8Asg] [WARN]: Error sending message 
> to Kafka
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> [2016-12-19T15:01:28.838Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [KafkaProducerClient] [] [] [1B2M2Y8Asg] [WARN]: Error sending message 
> to Kafka
>   org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received. (#2 from 2016-12-19T15:01:28.793Z)
> 
> [2016-12-19T15:01:28.956Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [KafkaProducerClient] [] [] [1B2M2Y8Asg] [WARN]: Error sending message 
> to Kafka
>   org.apache.kafka.common.errors.TimeoutException: Expiring 46 record(s) for 
> events-deadletter-0 due to 30032 ms has passed since batch creation plus 
> linger time (#285 from 2016-12-19
> T15:01:28.793Z)
> [2016-12-19T15:01:28.956Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [SgsService] [] [] [1B2M2Y8Asg] [WARN]: Error writing signal to Kafka 
> deadletter queue
>   org.apache.kafka.common.errors.TimeoutException: Expiring 46 record(s) for 
> events-deadletter-0 due to 30032 ms has passed since batch creation plus 
> linger time (#286 from 2016-12-19
> T15:01:28.793Z)
> [2016-12-19T15:01:28.960Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [Sender] [] [] [1B2M2Y8Asg] [ERROR]: Uncaught error in kafka producer 
> I/O thread:
> java.util.ConcurrentModificationException: null
> at java.util.ArrayDeque$DeqIterator.next(ArrayDeque.java:643) 
> ~[na:1.8.0_45]
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortExpiredBatches(RecordAccumulator.java:242)
>  ~[kafka-clients-0.10.1.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:212) 
> ~[kafka-clients-0.10.1.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135) 
> ~[kafka-clients-0.10.1.0.jar:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> [2016-12-19T15:01:28.981Z] [sgs] [kafka-producer-network-thread | producer-3] 
> [NetworkClient] [] [] [1B2M2Y8Asg] [WARN]: Error while fetching 
> metadata with correlation id 28711 : {events-deadletter=LEADER_NOT_AVAILABLE}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2449: KAFKA-4557: Handle Producer.send correctly in expi...

2017-01-26 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2449

KAFKA-4557: Handle Producer.send correctly in expiry callbacks

When iterating the deque to expire record batches, delay the actual 
completion of each batch until the iteration is complete, since callbacks 
invoked during expiry may send more records and modify the deque, resulting 
in a ConcurrentModificationException in the iterator.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4557

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2449.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2449


commit dae191b0dc14bed39b8f77589c74a2394ed4ea48
Author: Rajini Sivaram 
Date:   2017-01-26T16:23:49Z

KAFKA-4557: Handle Producer.send correctly in expiry callbacks




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4644) Improve test coverage of StreamsPartitionAssignor

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839991#comment-15839991
 ] 

ASF GitHub Bot commented on KAFKA-4644:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2448

KAFKA-4644: Improve test coverage of StreamsPartitionAssignor

Some exception paths not previously covered. Extracted 
`ensureCopartitioning` into a static class.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2448.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2448


commit de61b8fc299d95425a8eb003a2b532e2d73cf91d
Author: Damian Guy 
Date:   2017-01-26T16:26:03Z

improve test coverage




> Improve test coverage of StreamsPartitionAssignor
> -
>
> Key: KAFKA-4644
> URL: https://issues.apache.org/jira/browse/KAFKA-4644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.3.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths not covered



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4694) Streams smoke tests fails when there is only 1 broker

2017-01-26 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-4694.
-
Resolution: Fixed

> Streams smoke tests fails when there is only 1 broker
> -
>
> Key: KAFKA-4694
> URL: https://issues.apache.org/jira/browse/KAFKA-4694
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ewen Cheslack-Postava
>Assignee: Eno Thereska
>
> The streams smoke test fails when we have a single broker since by default it 
> requires a replication factor of 2. So in StreamsKafkaClient:createTopics we 
> can get an INVALID_REPLICATION_FACTOR code.
> As part of this commit, it appears we do not check the number of brokers 
> available and thus fail if there aren't enough brokers (check for 
> getBrokers() in here: 
> https://github.com/apache/kafka/commit/4b71c0bdc1acf244e3c96fa809a1a0e48471d586#diff-4320e27e72244cae71428533cf3582ef)
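
A guard along these lines (names made up, for illustration only) would
avoid the failure by capping the replication factor at the live broker
count before calling createTopics:

{code}
public final class ReplicationGuard {
    // Cap the configured replication factor at the number of live brokers
    // instead of letting topic creation fail with INVALID_REPLICATION_FACTOR.
    public static int effectiveReplicationFactor(int configured, int liveBrokers) {
        if (liveBrokers <= 0)
            throw new IllegalStateException("no brokers available");
        return Math.min(configured, liveBrokers);
    }
}
{code}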



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2448: KAFKA-4644: Improve test coverage of StreamsPartit...

2017-01-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2448

KAFKA-4644: Improve test coverage of StreamsPartitionAssignor

Some exception paths not previously covered. Extracted 
`ensureCopartitioning` into a static class.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2448.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2448


commit de61b8fc299d95425a8eb003a2b532e2d73cf91d
Author: Damian Guy 
Date:   2017-01-26T16:26:03Z

improve test coverage




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4644) Improve test coverage of StreamsPartitionAssignor

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4644:
--
Status: Patch Available  (was: Open)

> Improve test coverage of StreamsPartitionAssignor
> -
>
> Key: KAFKA-4644
> URL: https://issues.apache.org/jira/browse/KAFKA-4644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.3.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths not covered



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839945#comment-15839945
 ] 

Ismael Juma commented on KAFKA-4569:


[~umesh9...@gmail.com], have you tried running the test in a loop to see if it 
fails?

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.3.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[jira] [Updated] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4569:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.3.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> 

[jira] [Updated] (KAFKA-2334) Prevent HW from going back during leader failover

2017-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2334:
---
Labels: reliability  (was: )

> Prevent HW from going back during leader failover 
> --
>
> Key: KAFKA-2334
> URL: https://issues.apache.org/jira/browse/KAFKA-2334
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Guozhang Wang
>Assignee: Neha Narkhede
>  Labels: reliability
> Fix For: 0.10.3.0
>
>
> Consider the following scenario:
> 0. Kafka uses a replication factor of 2, with broker B1 as the leader and 
> B2 as the follower. 
> 1. A producer keeps sending to Kafka with ack=-1.
> 2. A consumer repeatedly issues ListOffset requests to Kafka.
> And the following sequence:
> 0. B1's current log-end-offset (LEO) is 0 and its HW offset is 0; the same 
> holds for B2.
> 1. B1 receives a ProduceRequest of 100 messages, appends them to its local 
> log (LEO becomes 100), and holds the request in purgatory.
> 2. B1 receives a FetchRequest starting at offset 0 from follower B2, and 
> returns the 100 messages.
> 3. B2 appends the received messages to its local log (LEO becomes 100).
> 4. B1 receives another FetchRequest starting at offset 100 from B2, learns 
> that B2's LEO has caught up to 100, hence updates its own HW, satisfies the 
> ProduceRequest in purgatory, and sends the FetchResponse with HW 100 back 
> to B2 ASYNCHRONOUSLY.
> 5. B1 successfully sends the ProduceResponse to the producer and then fails, 
> so the FetchResponse never reaches B2, whose HW remains 0.
> From the consumer's point of view, it could first see a latest offset of 
> 100 (from B1), then a latest offset of 0 (from B2), and then watch the 
> latest offset gradually catch up to 100.
> This is because we use the HW to guard both ListOffset and 
> Fetch-from-ordinary-consumer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4637) Update system test(s) to use multiple listeners for the same security protocol

2017-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4637:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Update system test(s) to use multiple listeners for the same security protocol
> --
>
> Key: KAFKA-4637
> URL: https://issues.apache.org/jira/browse/KAFKA-4637
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.3.0
>
>
> Even though this is tested via the JUnit tests introduced by KAFKA-4565, it 
> would be good to have at least one system test exercising this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2334) Prevent HW from going back during leader failover

2017-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2334:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Prevent HW from going back during leader failover 
> --
>
> Key: KAFKA-2334
> URL: https://issues.apache.org/jira/browse/KAFKA-2334
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Guozhang Wang
>Assignee: Neha Narkhede
>  Labels: reliability
> Fix For: 0.10.3.0
>
>
> Consider the following scenario:
> 0. Kafka uses a replication factor of 2, with broker B1 as the leader and 
> B2 as the follower. 
> 1. A producer keeps sending to Kafka with ack=-1.
> 2. A consumer repeatedly issues ListOffset requests to Kafka.
> And the following sequence:
> 0. B1's current log-end-offset (LEO) is 0 and its HW offset is 0; the same 
> holds for B2.
> 1. B1 receives a ProduceRequest of 100 messages, appends them to its local 
> log (LEO becomes 100), and holds the request in purgatory.
> 2. B1 receives a FetchRequest starting at offset 0 from follower B2, and 
> returns the 100 messages.
> 3. B2 appends the received messages to its local log (LEO becomes 100).
> 4. B1 receives another FetchRequest starting at offset 100 from B2, learns 
> that B2's LEO has caught up to 100, hence updates its own HW, satisfies the 
> ProduceRequest in purgatory, and sends the FetchResponse with HW 100 back 
> to B2 ASYNCHRONOUSLY.
> 5. B1 successfully sends the ProduceResponse to the producer and then fails, 
> so the FetchResponse never reaches B2, whose HW remains 0.
> From the consumer's point of view, it could first see a latest offset of 
> 100 (from B1), then a latest offset of 0 (from B2), and then watch the 
> latest offset gradually catch up to 100.
> This is because we use the HW to guard both ListOffset and 
> Fetch-from-ordinary-consumer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3779) Add the LRU cache for KTable.to() operator

2017-01-26 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3779:

Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Add the LRU cache for KTable.to() operator
> --
>
> Key: KAFKA-3779
> URL: https://issues.apache.org/jira/browse/KAFKA-3779
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
> Fix For: 0.10.3.0
>
>
> The KTable.to operator currently does not use a cache. We can add a cache to 
> this operator to deduplicate and reduce data traffic as well. This is to be 
> done after KAFKA-3777.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3779) Add the LRU cache for KTable.to() operator

2017-01-26 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839942#comment-15839942
 ] 

Eno Thereska commented on KAFKA-3779:
-

[~ijuma] Done, thanks.

> Add the LRU cache for KTable.to() operator
> --
>
> Key: KAFKA-3779
> URL: https://issues.apache.org/jira/browse/KAFKA-3779
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
> Fix For: 0.10.3.0
>
>
> The KTable.to operator currently does not use a cache. We can add a cache to 
> this operator to deduplicate and reduce data traffic as well. This is to be 
> done after KAFKA-3777.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3429) Remove Serdes needed for repartitioning in KTable stateful operations

2017-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839941#comment-15839941
 ] 

Ismael Juma commented on KAFKA-3429:


Should we move this to a future release version (or none) given that the code 
freeze for 0.10.2 is imminent?

> Remove Serdes needed for repartitioning in KTable stateful operations
> -
>
> Key: KAFKA-3429
> URL: https://issues.apache.org/jira/browse/KAFKA-3429
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie++
> Fix For: 0.10.2.0
>
>
> Currently, in KTable aggregate operations where a repartition may be 
> needed because the aggregation key may differ from the original primary 
> key, we require users to provide serdes (defaulting to the configured 
> ones) for reading from / writing to the internally created re-partition 
> topic. However, these are not necessary, since every KTable instance is 
> either generated from a topic directly:
> {code}table = builder.table(...){code}
> or from an aggregation operation:
> {code}table = stream.aggregate(...){code}
> and a serde is already provided for materializing the data, so the same 
> serde can be re-used when the resulting KTable is involved in a future 
> aggregation operation. For example:
> {code}
> table1 = stream.aggregateByKey(serde);
> table2 = table1.aggregate(aggregator, selector, originalSerde, 
> aggregateSerde);
> {code}
> We would not need to require users to specify the "originalSerde" in 
> table1.aggregate, since it could always reuse the "serde" from 
> stream.aggregateByKey, which is used to materialize the table1 object.
> To get rid of it, implementation-wise we need to carry the serde 
> information along with the KTableImpl instance so that it can be re-used 
> in a future operation that requires repartitioning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3779) Add the LRU cache for KTable.to() operator

2017-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839939#comment-15839939
 ] 

Ismael Juma commented on KAFKA-3779:


Should we move this to a future release version (or none) given that the code 
freeze for 0.10.2 is imminent?

> Add the LRU cache for KTable.to() operator
> --
>
> Key: KAFKA-3779
> URL: https://issues.apache.org/jira/browse/KAFKA-3779
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
> Fix For: 0.10.2.0
>
>
> The KTable.to operator currently does not use a cache. We can add a cache to 
> this operator to deduplicate and reduce data traffic as well. This is to be 
> done after KAFKA-3777.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1368) Upgrade log4j

2017-01-26 Thread Mickael Maison (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison reassigned KAFKA-1368:
-

Assignee: Mickael Maison

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Mickael Maison
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> This makes EnhancedPatternLayout usable. It allows setting delimiters 
> around the full log message, stack trace included, which makes collecting 
> log messages easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>
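
For example, with log4j >= 1.2.16 a delimiter-wrapping layout can be
configured roughly as follows (appender name and pattern details are
illustrative):

{code}
# log4j.properties: wrap each event, stack trace included, in <...> delimiters
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.EnhancedPatternLayout
# %throwable keeps the stack trace inside the closing delimiter
log4j.appender.stdout.layout.ConversionPattern=<[%d{ISO8601}] %-5p [%t] %m%throwable>%n
{code}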



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4644) Improve test coverage of StreamsPartitionAssignor

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-4644:
-

Assignee: Damian Guy

> Improve test coverage of StreamsPartitionAssignor
> -
>
> Key: KAFKA-4644
> URL: https://issues.apache.org/jira/browse/KAFKA-4644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.3.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Exception paths not covered



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4646) Improve test coverage AbstractProcessorContext

2017-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839831#comment-15839831
 ] 

ASF GitHub Bot commented on KAFKA-4646:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2447

KAFKA-4646: Improve test coverage AbstractProcessorContext

Exception paths in `register()`, `topic()`, `partition()`, `offset()`, and 
`timestamp()`, were not covered by any existing tests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4646

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2447.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2447


commit fec637dcc40e04c78bf0c52c5571f91a8c4f8f32
Author: Damian Guy 
Date:   2017-01-26T15:08:49Z

improve test coverage




> Improve test coverage AbstractProcessorContext
> --
>
> Key: KAFKA-4646
> URL: https://issues.apache.org/jira/browse/KAFKA-4646
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4646) Improve test coverage AbstractProcessorContext

2017-01-26 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4646:
--
Status: Patch Available  (was: Reopened)

> Improve test coverage AbstractProcessorContext
> --
>
> Key: KAFKA-4646
> URL: https://issues.apache.org/jira/browse/KAFKA-4646
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2447: KAFKA-4646: Improve test coverage AbstractProcesso...

2017-01-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2447

KAFKA-4646: Improve test coverage AbstractProcessorContext

Exception paths in `register()`, `topic()`, `partition()`, `offset()`, and 
`timestamp()`, were not covered by any existing tests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka KAFKA-4646

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2447.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2447


commit fec637dcc40e04c78bf0c52c5571f91a8c4f8f32
Author: Damian Guy 
Date:   2017-01-26T15:08:49Z

improve test coverage




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2445: KAFKA-4578: Upgrade notes for 0.10.2.0

2017-01-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2445

KAFKA-4578: Upgrade notes for 0.10.2.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-4578-upgrade-notes-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2445.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2445


commit e2e563e07e9f570e2aea395df326075576b865b5
Author: Ismael Juma 
Date:   2017-01-26T14:49:10Z

KAFKA-4578: Upgrade notes for 0.10.2.0




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

