[jira] [Commented] (KAFKA-3144) report members with no assigned partitions in ConsumerGroupCommand

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472806#comment-15472806
 ] 

ASF GitHub Bot commented on KAFKA-3144:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1336

KAFKA-3144: Report members with no assigned partitions in 
ConsumerGroupCommand

This PR makes a couple of enhancements to the `--describe` option of 
`ConsumerGroupCommand`:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when `--new-consumer` is 
used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1336


commit 5e3550a238fd735081eb88ec564a726c8009f017
Author: Vahid Hashemian 
Date:   2016-09-07T23:06:00Z

This PR makes a few enhancements to the --describe option of 
ConsumerGroupCommand:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when --new-consumer is used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).




> report members with no assigned partitions in ConsumerGroupCommand
> --
>
> Key: KAFKA-3144
> URL: https://issues.apache.org/jira/browse/KAFKA-3144
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> A couple of suggestions on improving ConsumerGroupCommand. 
> 1. It would be useful to list members with no assigned partitions when doing 
> describe in ConsumerGroupCommand.
> 2. Currently, we show the client.id of each member when doing describe in 
> ConsumerGroupCommand. Since client.id is supposed to be the logical 
> application id, all members in the same group are supposed to set the same 
> client.id. So, it would be clearer if we show the client id as well as the 
> member id.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3144) report members with no assigned partitions in ConsumerGroupCommand

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472804#comment-15472804
 ] 

ASF GitHub Bot commented on KAFKA-3144:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1336


> report members with no assigned partitions in ConsumerGroupCommand
> --
>
> Key: KAFKA-3144
> URL: https://issues.apache.org/jira/browse/KAFKA-3144
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> A couple of suggestions on improving ConsumerGroupCommand. 
> 1. It would be useful to list members with no assigned partitions when doing 
> describe in ConsumerGroupCommand.
> 2. Currently, we show the client.id of each member when doing describe in 
> ConsumerGroupCommand. Since client.id is supposed to be the logical 
> application id, all members in the same group are supposed to set the same 
> client.id. So, it would be clearer if we show the client id as well as the 
> member id.





[GitHub] kafka pull request #1336: KAFKA-3144: Report members with no assigned partit...

2016-09-07 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1336

KAFKA-3144: Report members with no assigned partitions in 
ConsumerGroupCommand

This PR makes a couple of enhancements to the `--describe` option of 
`ConsumerGroupCommand`:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when `--new-consumer` is 
used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1336


commit 5e3550a238fd735081eb88ec564a726c8009f017
Author: Vahid Hashemian 
Date:   2016-09-07T23:06:00Z

This PR makes a few enhancements to the --describe option of 
ConsumerGroupCommand:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when --new-consumer is used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1336: KAFKA-3144: Report members with no assigned partit...

2016-09-07 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1336




Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Gwen Shapira
+1 (binding)

On Tue, Sep 6, 2016 at 7:46 PM, Ismael Juma  wrote:

> Hi all,
>
> I would like to (re)initiate[1] the voting process for KIP-78 Cluster Id:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
>
> As explained in the KIP and discussion thread, we see this as a good first
> step that can serve as a foundation for future improvements.
>
> Thanks,
> Ismael
>
> [1] Even though I created a new vote thread, Gmail placed the messages in
> the discuss thread, making it not as visible as required. It's important to
> mention that two +1s were cast by Gwen and Sriram:
>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> 40mail.gmail.com%3E
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



[jira] [Commented] (KAFKA-4033) KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472599#comment-15472599
 ] 

ASF GitHub Bot commented on KAFKA-4033:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1726

KAFKA-4033: KIP-70: Revise Partition Assignment Semantics on New Consumer's 
Subscription Change

This PR changes topic subscription semantics so a change in subscription 
does not immediately cause a rebalance.
Instead, the next poll or the next scheduled metadata refresh will update 
the assigned partitions.
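
(For illustration, a minimal sketch of the revised semantics, assuming the standard Java KafkaConsumer API of the 0.10.x line; broker address, group id, and topic names are placeholders, not taken from the PR.)

{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Kip70SubscriptionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "example-group");              // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        consumer.subscribe(Arrays.asList("topic-a"));
        consumer.poll(100);   // joins the group; assignment now covers topic-a

        // Per the semantics described above, changing the subscription does not
        // revoke the current assignment immediately...
        consumer.subscribe(Arrays.asList("topic-b"));
        System.out.println(consumer.assignment()); // still the topic-a partitions

        // ...the rebalance that installs the topic-b assignment happens on the
        // next poll (or the next scheduled metadata refresh).
        consumer.poll(100);
        System.out.println(consumer.assignment()); // now the topic-b partitions

        consumer.close();
    }
}
{code}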

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4033

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1726.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1726


commit c84ca373a1f9257483273b99119b70cb274c257d
Author: Vahid Hashemian 
Date:   2016-08-22T18:45:52Z

KAFKA-4033: KIP-70: Revise Partition Assignment Semantics on New Consumer's 
Subscription Change

This PR changes topic subscription semantics so a change in subscription 
does not immediately cause a rebalance.
Instead, the next poll or the next scheduled metadata refresh will update 
the assigned partitions.

commit 6533ece037b31ac76e94b736f89485d145c5db15
Author: Vahid Hashemian 
Date:   2016-08-22T23:29:48Z

Unit test for subscription change

commit 024ff0ad523cac9178e3006d62e1afc37a997ec3
Author: Vahid Hashemian 
Date:   2016-08-24T06:22:05Z

Clean up KafkaConsumerTest.java and add reusable methods

commit ce7e220d98ffd7b336fc0a6c76066150ad1a0ad4
Author: Vahid Hashemian 
Date:   2016-08-26T00:23:22Z

Fix unsubscribe semantics and add necessary unit tests




> KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription 
> Change
> ---
>
> Key: KAFKA-4033
> URL: https://issues.apache.org/jira/browse/KAFKA-4033
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>
> Modify the new consumer's implementation of the topic subscribe and unsubscribe 
> interfaces so that they do not cause an immediate assignment update (this is 
> how the regex subscribe interface is implemented). Instead, the assignment 
> remains valid until it has been revoked in the next rebalance.





[GitHub] kafka pull request #1726: KAFKA-4033: KIP-70: Revise Partition Assignment Se...

2016-09-07 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1726

KAFKA-4033: KIP-70: Revise Partition Assignment Semantics on New Consumer's 
Subscription Change

This PR changes topic subscription semantics so a change in subscription 
does not immediately cause a rebalance.
Instead, the next poll or the next scheduled metadata refresh will update 
the assigned partitions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4033

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1726.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1726


commit c84ca373a1f9257483273b99119b70cb274c257d
Author: Vahid Hashemian 
Date:   2016-08-22T18:45:52Z

KAFKA-4033: KIP-70: Revise Partition Assignment Semantics on New Consumer's 
Subscription Change

This PR changes topic subscription semantics so a change in subscription 
does not immediately cause a rebalance.
Instead, the next poll or the next scheduled metadata refresh will update 
the assigned partitions.

commit 6533ece037b31ac76e94b736f89485d145c5db15
Author: Vahid Hashemian 
Date:   2016-08-22T23:29:48Z

Unit test for subscription change

commit 024ff0ad523cac9178e3006d62e1afc37a997ec3
Author: Vahid Hashemian 
Date:   2016-08-24T06:22:05Z

Clean up KafkaConsumerTest.java and add reusable methods

commit ce7e220d98ffd7b336fc0a6c76066150ad1a0ad4
Author: Vahid Hashemian 
Date:   2016-08-26T00:23:22Z

Fix unsubscribe semantics and add necessary unit tests






[GitHub] kafka pull request #1726: KAFKA-4033: KIP-70: Revise Partition Assignment Se...

2016-09-07 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1726




[jira] [Commented] (KAFKA-4033) KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472598#comment-15472598
 ] 

ASF GitHub Bot commented on KAFKA-4033:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1726


> KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription 
> Change
> ---
>
> Key: KAFKA-4033
> URL: https://issues.apache.org/jira/browse/KAFKA-4033
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>
> Modify the new consumer's implementation of the topic subscribe and unsubscribe 
> interfaces so that they do not cause an immediate assignment update (this is 
> how the regex subscribe interface is implemented). Instead, the assignment 
> remains valid until it has been revoked in the next rebalance.





Build failed in Jenkins: kafka-trunk-jdk7 #1525

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4123: Queryable State returning null for key before all stores 
in

[wangguoz] KAFKA-3595: window stores use compact,delete config for changelogs

--
[...truncated 1303 lines...]
kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists STARTED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware STARTED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion STARTED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
STARTED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists STARTED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists STARTED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection STARTED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData STARTED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition STARTED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas STARTED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig STARTED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment STARTED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange STARTED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment STARTED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testShutdownBroker STARTED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision STARTED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK STARTED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover STARTED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower STARTED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic STARTED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic STARTED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion STARTED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic STARTED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #868

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Document that Connect topics should use compaction

[wangguoz] KAFKA-4123: Queryable State returning null for key before all stores 
in

[wangguoz] KAFKA-3595: window stores use compact,delete config for changelogs

--
[...truncated 4961 lines...]
kafka.api.SaslSslConsumerTest > testSimpleConsumption PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic STARTED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > 

[jira] [Work started] (KAFKA-4131) Multiple Regex KStream-Consumers cause Null pointer exception in addRawRecords in RecordQueue class

2016-09-07 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4131 started by Bill Bejeck.
--
> Multiple Regex KStream-Consumers cause Null pointer exception in 
> addRawRecords in RecordQueue class
> ---
>
> Key: KAFKA-4131
> URL: https://issues.apache.org/jira/browse/KAFKA-4131
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
> Environment: Servers: Confluent Distribution 3.0.0 (i.e. kafka 0.10.0 
> release)
> Client: Kafka-streams and Kafka-client... commit: 6fb33afff976e467bfa8e0b29eb82770a2a3aaec
>Reporter: David J. Garcia
>Assignee: Bill Bejeck
>
> When you start two consumer processes with a regex topic (with 2 or more
> partitions for the matching topics), the second (i.e. nonleader) consumer
> will fail with a null pointer exception.
> Exception in thread "StreamThread-4" java.lang.NullPointerException
>  at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:78)
>  at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
>  at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:139)
>  at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
>  at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:208)
> The issue may be in the TopologyBuilder line 832:
> String[] topics = (sourceNodeFactory.pattern != null) ?
> sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
> sourceNodeFactory.getTopics();
> Because the 2nd consumer joins as a follower, “getUpdates” returns an
> empty collection and the regular expression doesn’t get applied to any
> topics.
> Steps to Reproduce:
> 1. Create at least two topics with at least 2 partitions each, and start 
> sending messages to them.
> 2. Start a single-threaded regex KStream consumer (i.e. it becomes the leader).
> 3. Start a new instance of this consumer (i.e. it should receive some of the 
> partitions).
> The second consumer will die with the above exception.
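
(For illustration only, a minimal sketch of the reproduction scenario described above, assuming the 0.10.x Kafka Streams API with regex subscription; the topic pattern, output topic, and config values are placeholders, not taken from the report.)

{code}
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class RegexStreamsRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "regex-repro");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        KStreamBuilder builder = new KStreamBuilder();
        // Subscribe by regex; the matching topics are resolved from subscription
        // metadata updates, which is where the reported NPE originates for the
        // second (follower) instance.
        KStream<byte[], byte[]> input = builder.stream(Pattern.compile("input-.*"));
        input.to("output-topic");

        // Run this program twice against topics with 2+ partitions each; per the
        // report, the second instance fails in RecordQueue.addRawRecords.
        new KafkaStreams(builder, props).start();
    }
}
{code}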





Build failed in Jenkins: kafka-trunk-jdk7 #1524

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Document that Connect topics should use compaction

--
[...truncated 6845 lines...]

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED


[jira] [Commented] (KAFKA-1006) Consumer loses messages of a new topic with auto.offset.reset = largest

2016-09-07 Thread Patrick Te Tau (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472427#comment-15472427
 ] 

Patrick Te Tau commented on KAFKA-1006:
---

Hi [~guozhang], we are also having trouble with our integration tests. 
I can manually set the offset for new topics but this will break my 
subscription to pre-existing topics. Because I have no way of telling whether the 
topic is a new one or an old one, I have no way to switch my strategy.
I have considered storing a list of topics on my client but this solution fails 
when I run multiple clients. 
Any suggestions?


> Consumer loses messages of a new topic with auto.offset.reset = largest
> ---
>
> Key: KAFKA-1006
> URL: https://issues.apache.org/jira/browse/KAFKA-1006
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>  Labels: usability
>
> The consumer currently uses auto.offset.reset = largest by default. If a new 
> topic is created, the consumer's topic watcher is fired. The consumer will first 
> finish partition reassignment as part of the rebalance and then start consuming 
> from the tail of each partition. Until the partition reassignment is over, 
> the server may have appended new messages to the new topic, and the consumer won't 
> consume these messages. Thus, multiple batches of messages may be lost when a 
> topic is newly created. 
> The fix is to start consuming from the earliest offset for newly created 
> topics.
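
(A minimal sketch of the kind of client-side workaround being discussed in the comment above, assuming the new Java consumer API; the rebalance-listener approach, topic name, and configs are illustrative assumptions, not a fix shipped for this issue.)

{code}
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class NewTopicOffsetWorkaround {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "example-group");             // placeholder
        props.put("auto.offset.reset", "latest");           // the problematic default behavior
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("some-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // For partitions with no committed offset yet (e.g. a freshly
                // created topic), rewind to the beginning instead of the tail so
                // the first batches are not skipped.
                for (TopicPartition tp : partitions) {
                    if (consumer.committed(tp) == null) {
                        consumer.seekToBeginning(Collections.singletonList(tp));
                    }
                }
            }
        });
        consumer.poll(100);
        consumer.close();
    }
}
{code}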





[jira] [Commented] (KAFKA-3144) report members with no assigned partitions in ConsumerGroupCommand

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472341#comment-15472341
 ] 

ASF GitHub Bot commented on KAFKA-3144:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1336

KAFKA-3144: Report members with no assigned partitions in 
ConsumerGroupCommand

This PR makes a couple of enhancements to the `--describe` option of 
`ConsumerGroupCommand`:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when `--new-consumer` is 
used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1336


commit 5e3550a238fd735081eb88ec564a726c8009f017
Author: Vahid Hashemian 
Date:   2016-09-07T23:06:00Z

This PR makes a few enhancements to the --describe option of 
ConsumerGroupCommand:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when --new-consumer is used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).




> report members with no assigned partitions in ConsumerGroupCommand
> --
>
> Key: KAFKA-3144
> URL: https://issues.apache.org/jira/browse/KAFKA-3144
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> A couple of suggestions on improving ConsumerGroupCommand. 
> 1. It would be useful to list members with no assigned partitions when doing 
> describe in ConsumerGroupCommand.
> 2. Currently, we show the client.id of each member when doing describe in 
> ConsumerGroupCommand. Since client.id is supposed to be the logical 
> application id, all members in the same group are supposed to set the same 
> client.id. So, it would be clearer if we show the client id as well as the 
> member id.





[GitHub] kafka pull request #1336: KAFKA-3144: Report members with no assigned partit...

2016-09-07 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1336




[GitHub] kafka pull request #1336: KAFKA-3144: Report members with no assigned partit...

2016-09-07 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1336

KAFKA-3144: Report members with no assigned partitions in 
ConsumerGroupCommand

This PR makes a couple of enhancements to the `--describe` option of 
`ConsumerGroupCommand`:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when `--new-consumer` is 
used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1336


commit 5e3550a238fd735081eb88ec564a726c8009f017
Author: Vahid Hashemian 
Date:   2016-09-07T23:06:00Z

This PR makes a few enhancements to the --describe option of 
ConsumerGroupCommand:
1. Listing members with no assigned partitions.
2. Showing the member id along with the owner of each partition (owner is 
supposed to be the logical application id and all members in the same group are 
supposed to set the same owner).
3. Reporting broker id of the group coordinator when --new-consumer is used.
4. Printing a warning indicating whether ZooKeeper based or new consumer 
API based information is being reported.

It also adds unit tests to verify the added functionality.

Note: The third request on the corresponding JIRA (listing active offsets 
for empty groups of new consumers) is not implemented as part of this PR, and 
has been moved to its own JIRA (KAFKA-3853).






[jira] [Commented] (KAFKA-3144) report members with no assigned partitions in ConsumerGroupCommand

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472339#comment-15472339
 ] 

ASF GitHub Bot commented on KAFKA-3144:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1336


> report members with no assigned partitions in ConsumerGroupCommand
> --
>
> Key: KAFKA-3144
> URL: https://issues.apache.org/jira/browse/KAFKA-3144
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> A couple of suggestions on improving ConsumerGroupCommand. 
> 1. It would be useful to list members with no assigned partitions when doing 
> describe in ConsumerGroupCommand.
> 2. Currently, we show the client.id of each member when doing describe in 
> ConsumerGroupCommand. Since client.id is supposed to be the logical 
> application id, all members in the same group are supposed to set the same 
> client.id. So, it would be clearer if we show the client id as well as the 
> member id.





Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Guozhang Wang
+1.

On Wed, Sep 7, 2016 at 11:18 AM, Jason Gustafson  wrote:

> +1 and thanks for the excellent write-up!
>
> On Wed, Sep 7, 2016 at 10:41 AM, Neha Narkhede  wrote:
>
> > +1 (binding)
> >
> > On Wed, Sep 7, 2016 at 9:49 AM Grant Henke  wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Wed, Sep 7, 2016 at 6:55 AM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com
> > > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Wed, Sep 7, 2016 at 4:09 AM, Sriram Subramanian  >
> > > > wrote:
> > > >
> > > > > +1 binding
> > > > >
> > > > > > On Sep 6, 2016, at 7:46 PM, Ismael Juma 
> wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to (re)initiate[1] the voting process for KIP-78
> > Cluster
> > > > Id:
> > > > > >
> > > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> > > > > >
> > > > > > As explained in the KIP and discussion thread, we see this as a
> > good
> > > > > first
> > > > > > step that can serve as a foundation for future improvements.
> > > > > >
> > > > > > Thanks,
> > > > > > Ismael
> > > > > >
> > > > > > [1] Even though I created a new vote thread, Gmail placed the
> > > messages
> > > > in
> > > > > > the discuss thread, making it not as visible as required. It's
> > > > important
> > > > > to
> > > > > > mention that two +1s were cast by Gwen and Sriram:
> > > > > >
> > > > > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> > > > > mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%
> 2BZT2t34e0WRUsCT1ErhtKOg1w%
> > > > > 40mail.gmail.com%3E
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > >
> > >
> > >
> > > --
> > > Grant Henke
> > > Software Engineer | Cloudera
> > > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> > >
> > --
> > Thanks,
> > Neha
> >
>



-- 
-- Guozhang


[jira] [Commented] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472329#comment-15472329
 ] 

ASF GitHub Bot commented on KAFKA-3595:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1792


> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Damian Guy
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently, state store replication always goes through a compacted Kafka topic. 
> For some state stores, e.g. JoinWindow, there are no duplicates in the store, 
> so there is not much benefit to using a compacted topic.
> The problem with using a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id; it increments all the time 
> and is not bounded, so I am worried the disk usage on the broker for that topic 
> will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or to allow specifying a different compaction option for state store 
> replication.
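
(For illustration, the topic-level settings involved: the resolution adopted the compact,delete cleanup policy noted in the commit message above. The Java snippet below is only a sketch of such a changelog override; the retention value is an illustrative assumption.)

{code}
import java.util.Properties;

public class ChangelogCleanupSketch {
    public static void main(String[] args) {
        // Combine log compaction with time-based deletion so changelog records
        // for unbounded key spaces (e.g. ever-growing ad_id keys) do not stay
        // on the broker forever. The retention value is illustrative.
        Properties changelogConfig = new Properties();
        changelogConfig.put("cleanup.policy", "compact,delete");
        changelogConfig.put("retention.ms", String.valueOf(7 * 24 * 60 * 60 * 1000L));
        System.out.println(changelogConfig);
    }
}
{code}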





[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-09-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3595:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1792
[https://github.com/apache/kafka/pull/1792]

> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Damian Guy
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently, state store replication always goes through a compacted Kafka topic. 
> For some state stores, e.g. JoinWindow, there are no duplicates in the store, 
> so there is not much benefit to using a compacted topic.
> The problem with using a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id; it increments all the time 
> and is not bounded, so I am worried the disk usage on the broker for that topic 
> will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or to allow specifying a different compaction option for state store 
> replication.





[GitHub] kafka pull request #1792: KAFKA-3595: window stores use compact,delete confi...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1792




[jira] [Resolved] (KAFKA-4123) Queryable State returning null for key before all stores in instance have been initialized

2016-09-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4123.
--
Resolution: Fixed

Issue resolved by pull request 1824
[https://github.com/apache/kafka/pull/1824]

> Queryable State returning null for key before all stores in instance have 
> been initialized
> --
>
> Key: KAFKA-4123
> URL: https://issues.apache.org/jira/browse/KAFKA-4123
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> A couple of problems:
> 1. A RocksDBStore instance is currently marked as open before the store has 
> been initialized from its changelog. This can result in reading old/invalid 
> data when querying.
> 2. In the case of multiple partitions, while the tasks are being initialized it 
> is always possible that one or more StateStores will be initialized before the 
> complete set of stores in the Streams instance is initialized. Currently, 
> when this happens a query can return null because it only looks in the 
> already-initialized stores. However, the key may exist in one of the 
> non-initialized stores. We need to wait for all stores in the instance to 
> be initialized before allowing queries to progress.
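
(For context, a minimal sketch of how a caller hits this window from the Interactive Queries side, assuming the 0.10.1 KafkaStreams#store API; the store name and retry loop are illustrative assumptions.)

{code}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableStateSketch {
    // Look up a key, retrying while the instance's stores are still being
    // (re)initialized. Before the fix described above, a query issued in that
    // window could silently return null instead of waiting or failing.
    static Long lookup(KafkaStreams streams, String key) throws InterruptedException {
        while (true) {
            try {
                ReadOnlyKeyValueStore<String, Long> store =
                        streams.store("counts-store", QueryableStoreTypes.<String, Long>keyValueStore());
                return store.get(key);
            } catch (InvalidStateStoreException notReadyYet) {
                Thread.sleep(100); // store not open/initialized yet; retry
            }
        }
    }
}
{code}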





[GitHub] kafka pull request #1824: KAFKA-4123: Queryable State returning null for key...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1824




[jira] [Commented] (KAFKA-4123) Queryable State returning null for key before all stores in instance have been initialized

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472282#comment-15472282
 ] 

ASF GitHub Bot commented on KAFKA-4123:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1824


> Queryable State returning null for key before all stores in instance have 
> been initialized
> --
>
> Key: KAFKA-4123
> URL: https://issues.apache.org/jira/browse/KAFKA-4123
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> A couple of problems:
> 1. A RocksDBStore instance is currently marked as open before the store has 
> been initialized from its changelog. This can result in reading old/invalid 
> data when querying.
> 2. In the case of multiple partitions, while the tasks are being initialized it 
> is always possible that one or more StateStores will be initialized before the 
> complete set of stores in the Streams instance is initialized. Currently, 
> when this happens a query can return null because it only looks in the 
> already-initialized stores. However, the key may exist in one of the 
> non-initialized stores. We need to wait for all stores in the instance to 
> be initialized before allowing queries to progress.





[jira] [Created] (KAFKA-4143) Transient failure in kafka.server.SaslSslReplicaFetchTest.testReplicaFetcherThread

2016-09-07 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4143:


 Summary: Transient failure in 
kafka.server.SaslSslReplicaFetchTest.testReplicaFetcherThread
 Key: KAFKA-4143
 URL: https://issues.apache.org/jira/browse/KAFKA-4143
 Project: Kafka
  Issue Type: Sub-task
  Components: unit tests
Reporter: Guozhang Wang


Example: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5471/testReport/junit/kafka.server/SaslSslReplicaFetchTest/testReplicaFetcherThread/

{code}
java.lang.AssertionError: Partition [foo,0] metadata not propagated after 15000 
ms
at org.junit.Assert.fail(Assert.java:88)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:794)
at 
kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:228)
at 
kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:227)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Range.foreach(Range.scala:141)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:227)
at 
kafka.server.BaseReplicaFetchTest$$anonfun$testReplicaFetcherThread$2.apply(BaseReplicaFetchTest.scala:62)
at 
kafka.server.BaseReplicaFetchTest$$anonfun$testReplicaFetcherThread$2.apply(BaseReplicaFetchTest.scala:61)
at scala.collection.immutable.List.foreach(List.scala:318)
at 
kafka.server.BaseReplicaFetchTest.testReplicaFetcherThread(BaseReplicaFetchTest.scala:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 

[jira] [Resolved] (KAFKA-4043) User-defined handler for topology restart

2016-09-07 Thread Greg Fodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Fodor resolved KAFKA-4043.
---
Resolution: Not A Problem

> User-defined handler for topology restart
> -
>
> Key: KAFKA-4043
> URL: https://issues.apache.org/jira/browse/KAFKA-4043
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
>
> Since Kafka Streams is just a library, there's a lot of cool stuff we've been 
> able to do that would be trickier if it were part of a larger 
> cluster-oriented job execution system that had assumptions about the 
> semantics of a job. One of the jobs we have uses Kafka Streams to do top 
> level data flow, and then one of our processors actually will kick off 
> background threads to do work based upon the data flow state. Happy to fill 
> in more details of our use-case, but fundamentally the model is that we have 
> a Kafka Streams data flow that is reading state from upstream, and that state 
> dictates that work needs to be done, which results in a dedicated work thread 
> being spawned by our job.
> This works great, but we're running into an issue when there is partition 
> reassignment, since we have no way to detect this and cleanly shut down these 
> threads. In our case, we'd like to shut down the background worker threads if 
> there is a partition rebalance or if the job raises an exception and attempts 
> to restart. In practice what is happening is we are getting duplicate threads 
> for the same work on a partition rebalance.
> Implementation-wise, this seems like some type of event handler that can be 
> attached to the topology at build time that will be called when the data 
> flow needs to rebalance or rebuild its task threads in general (ideally 
> passing as much information about the reason along.) I could imagine this 
> being factored similarly to the KafkaStreams#setUncaughtExceptionHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
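
One way to get the described shutdown behavior without a new topology-level handler is to tie the background worker's lifecycle to the processor's own lifecycle, since Streams closes and re-creates tasks when partitions are reassigned. A minimal sketch of that pattern follows (the class name, thread handling, and method bodies are illustrative, not taken from the JIRA):

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Illustrative only: a processor that owns a background worker thread and stops it
// in close(), which Streams invokes when the task is closed (e.g. on a partition
// rebalance, restart, or shutdown), so re-created tasks do not leave duplicates behind.
public class BackgroundWorkProcessor implements Processor<String, String> {

    private volatile boolean running;
    private Thread worker;

    @Override
    public void init(ProcessorContext context) {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                // do the background work driven by upstream state; sleep to avoid spinning
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }, "background-worker-" + context.taskId());
        worker.start();
    }

    @Override
    public void process(String key, String value) {
        // update whatever shared state the worker thread consumes
    }

    @Override
    public void punctuate(long timestamp) {
        // no-op in this sketch
    }

    @Override
    public void close() {
        // called when the task is closed: stop the worker before the task is rebuilt
        running = false;
        if (worker != null) {
            worker.interrupt();
        }
    }
}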


[jira] [Commented] (KAFKA-4043) User-defined handler for topology restart

2016-09-07 Thread Greg Fodor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472271#comment-15472271
 ] 

Greg Fodor commented on KAFKA-4043:
---

Ah, that should work for us. Thanks!

> User-defined handler for topology restart
> -
>
> Key: KAFKA-4043
> URL: https://issues.apache.org/jira/browse/KAFKA-4043
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
>
> Since Kafka Streams is just a library, there's a lot of cool stuff we've been 
> able to do that would be trickier if it were part of a larger 
> cluster-oriented job execution system that had assumptions about the 
> semantics of a job. One of the jobs we have uses Kafka Streams to do top 
> level data flow, and then one of our processors actually will kick off 
> background threads to do work based upon the data flow state. Happy to fill 
> in more details of our use-case, but fundamentally the model is that we have 
> a Kafka Streams data flow that is reading state from upstream, and that state 
> dictates that work needs to be done, which results in a dedicated work thread 
> being spawned by our job.
> This works great, but we're running into an issue when there is partition 
> reassignment, since we have no way to detect this and cleanly shut down these 
> threads. In our case, we'd like to shut down the background worker threads if 
> there is a partition rebalance or if the job raises an exception and attempts 
> to restart. In practice what is happening is we are getting duplicate threads 
> for the same work on a partition rebalance.
> Implementation-wise, this seems like some type of event handler that can be 
> attached to the topology at build time that will be called when the data 
> flow needs to rebalance or rebuild its task threads in general (ideally 
> passing as much information about the reason along.) I could imagine this 
> being factored similarly to the KafkaStreams#setUncaughtExceptionHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #867

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fixes javadoc of Windows, fixes typo in parameter name of

--
[...truncated 6875 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 

[GitHub] kafka pull request #1832: MINOR: Document that Connect topics should use com...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1832


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4118) StreamsSmokeTest.test_streams started failing since 18 August build

2016-09-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4118:
-
Assignee: Eno Thereska  (was: Guozhang Wang)

> StreamsSmokeTest.test_streams started failing since 18 August build
> ---
>
> Key: KAFKA-4118
> URL: https://issues.apache.org/jira/browse/KAFKA-4118
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ismael Juma
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Link to the first failure on 18 August: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-08-18--001.1471540190--apache--trunk--40b1dd3/report.html
> The commit corresponding to the 18 August build was 
> https://github.com/apache/kafka/commit/40b1dd3f495a59ab, which is KIP-62 (and 
> before KIP-33)
> KAFKA-3807 tracks another test that started failing at the same time and 
> there's a possibility that the PR for that JIRA fixes this one too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4118) StreamsSmokeTest.test_streams started failing since 18 August build

2016-09-07 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472260#comment-15472260
 ] 

Guozhang Wang commented on KAFKA-4118:
--

[~enothereska] Could you take a look at this issue? Thanks!

> StreamsSmokeTest.test_streams started failing since 18 August build
> ---
>
> Key: KAFKA-4118
> URL: https://issues.apache.org/jira/browse/KAFKA-4118
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ismael Juma
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Link to the first failure on 18 August: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-08-18--001.1471540190--apache--trunk--40b1dd3/report.html
> The commit corresponding to the 18 August build was 
> https://github.com/apache/kafka/commit/40b1dd3f495a59ab, which is KIP-62 (and 
> before KIP-33)
> KAFKA-3807 tracks another test that started failing at the same time and 
> there's a possibility that the PR for that JIRA fixes this one too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1523

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fixes javadoc of Windows, fixes typo in parameter name of

--
[...truncated 3416 lines...]

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 

[jira] [Created] (KAFKA-4142) Log files in /data dir date modified keeps being updated?

2016-09-07 Thread Clint Hillerman (JIRA)
Clint Hillerman created KAFKA-4142:
--

 Summary: Log files in /data dir date modified keeps being updated?
 Key: KAFKA-4142
 URL: https://issues.apache.org/jira/browse/KAFKA-4142
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.0
 Environment: CentOS release 6.8 (Final)

uname -a
Linux 2.6.32-642.1.1.el6.x86_64 #1 SMP Tue May 31 21:57:07 UTC 2016 x86_64 
x86_64 x86_64 GNU/Linux

Reporter: Clint Hillerman
Priority: Minor


The date modified of the kafka logs (the main ones specified by logs.dirs in 
the config) keeps getting updated and set to the exact same time.

For example:

Say I had two log files and their index files (date modified - file name):

20160901:10:00:01 - 0001.log
20160901:10:00:01 - 0001.index
20160902:10:00:01 - 0002.log
20160902:10:00:01 - 0002.index

Later I notice the logs are getting way too old for the retention time. I then 
go look at the log dir and I see this:

20160903:10:00:01 - 0001.log
20160903:10:00:01 - 0001.index
20160903:10:00:01 - 0002.log
20160903:10:00:01 - 0002.index
20160903:10:00:01 - 0003.log
20160903:10:00:01 - 0003.index
20160904:10:00:01 - 0004.log
20160904:10:00:01 - 0004.index

The first two log files had their date modified moved forward for some reason. 
They were updated from 0901 and 0902 to 0903.

It seems to happen periodically. The new logs that kafka writes out have the 
correct timestamp.

This causes the logs to not be deleted. Right now I just touch the log files to 
an older date and they are deleted right away. 

Any help would be appreciated. Also, I'll explain the problem better if this 
doesn't make sense.

Thanks,




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
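
As a stopgap, the "touch the files to an older date" workaround mentioned above can be scripted; a minimal sketch follows (the log directory path, file pattern, and 8-day cutoff are illustrative assumptions, not values from this report):

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

// Illustrative workaround only: push the last-modified time of segment files in a
// topic-partition directory back far enough that mtime-based retention deletes them.
public class TouchOldSegments {
    public static void main(String[] args) throws IOException {
        Path logDir = Paths.get(args.length > 0 ? args[0] : "/data/kafka-logs/my-topic-0"); // assumption
        FileTime target = FileTime.fromMillis(
                System.currentTimeMillis() - TimeUnit.DAYS.toMillis(8)); // assumption
        try (DirectoryStream<Path> segments = Files.newDirectoryStream(logDir, "*.{log,index}")) {
            for (Path segment : segments) {
                Files.setLastModifiedTime(segment, target);
                System.out.println("reset mtime: " + segment);
            }
        }
    }
}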


[GitHub] kafka pull request #1823: Fixes javadoc of Windows, fixes typo in parameter ...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1823


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3184) Add Checkpoint for In-memory State Store

2016-09-07 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-3184:
-

Assignee: Jeyhun Karimov

> Add Checkpoint for In-memory State Store
> 
>
> Key: KAFKA-3184
> URL: https://issues.apache.org/jira/browse/KAFKA-3184
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Jeyhun Karimov
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams does not make a checkpoint of the persistent state 
> store upon committing, which would be expensive since it is "stopping the 
> world" and writes to disk: for example, RocksDB would naively require you to 
> copy the file directory to make a copy. 
> However, for in-memory stores checkpointing may be doable in an asynchronous 
> manner and hence can be done quickly. And the benefit of having an intermediate 
> checkpoint is to avoid restoring from scratch if standby tasks are not 
> present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
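
To make the asynchronous-checkpoint idea concrete, here is a generic sketch (plain Java, not Kafka Streams internals) of snapshotting an in-memory map off the processing thread so the commit path is not blocked; all names and the file path are illustrative:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Generic illustration: take a point-in-time copy of an in-memory store and write
// it on a background thread, so the caller does not pay the disk-write latency.
public class AsyncCheckpointExample {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
    private final ExecutorService checkpointer = Executors.newSingleThreadExecutor();

    public void put(String key, String value) {
        store.put(key, value);
    }

    public void checkpoint(Path file) {
        // copy on the caller's thread, then persist asynchronously
        Map<String, String> snapshot = new HashMap<>(store);
        checkpointer.submit(() -> {
            try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
                out.writeObject(snapshot);
            } catch (IOException e) {
                // a real store would surface this failure to the commit logic
                e.printStackTrace();
            }
        });
    }

    public static void main(String[] args) {
        AsyncCheckpointExample example = new AsyncCheckpointExample();
        example.put("k", "v");
        example.checkpoint(Paths.get("checkpoint.bin")); // illustrative file name
        example.checkpointer.shutdown();
    }
}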


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-07 Thread Becket Qin
That sounds reasonable to me. I'll update the KIP wiki page.

On Wed, Sep 7, 2016 at 1:34 PM, Jason Gustafson  wrote:

> Hey Becket,
>
> I don't have a super strong preference, but I think this
>
> earliestOffset(singleton(partition));
>
> captures the intent more clearly than this:
>
> offsetsForTimes(singletonMap(partition, -1));
>
> I can understand the desire to keep the API footprint small, but I think
> the use case is common enough to justify separate APIs. A couple additional
> points:
>
> 1. If we had separate methods, it might make sense to treat negative
> timestamps as illegal in offsetsForTimes. That seems safer from the user
> perspective since legitimate timestamps should always be positive.
> 2. The expected behavior of offsetsForTimes is to return the earliest
> offset which is greater than or equal to the passed offset, so having
> Long.MAX_VALUE return the latest value doesn't seem very intuitive to me. I
> would actually expect it to return null.
>
> Given that, I think I prefer having the custom methods. What do you think?
>
> Thanks,
> Jason
>
> On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin  wrote:
>
> > Hi Jason,
> >
> > Thanks for the feedback. That is a good point. For the -1 and -2
> semantics,
> > I was just thinking we will preserve the semantics in the wire protocol.
> > For the user facing API, I agree that is not intuitive. We can do one of
> > the following:
> > 1. Add two separate methods: earliestOffsets() and latestOffsets().
> > 2. just have offsetsForTimes() and return the earliest if the timestamp
> is
> > negative and the latest if the timestamp is Long.MAX_VALUE.
> >
> > The good thing about doing (1) is that we kind of have symmetric function
> > signatures like seekToBeginning() and seekToEnd(). However, even if we do
> > (1), we may still need to do (2) to handle the negative timestamp and the
> > Long.MAX_VALUE timestamp in offsetsForTimes(). Then they essentially
> become
> > redundant to earliestOffsets() and latestOffsets().
> >
> > Personally I prefer option (2) because of the conciseness and it seems
> > intuitive enough. But I am open to option (1) as well.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson 
> > wrote:
> >
> > > Hey Becket,
> > >
> > > Thanks for the KIP. As I understand, the intention is to preserve the
> > > current behavior with a timestamp of -1 indicating latest timestamp and
> > -2
> > > indicating earliest timestamp. So users can query these offsets using
> the
> > > offsetsForTimes API if they know the magic values. I'm wondering if it
> > > would make the usage a little nicer to have a separate API instead for
> > > these special cases? Sort of in the way that we expose a generic seek()
> > and
> > > a seekToBeginning(), maybe we could have an earliestOffset() in
> addition
> > to
> > > offsetsForTimes()?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin 
> > wrote:
> > >
> > > > Thanks everyone for all the feedback.
> > > >
> > > > If there is no further concerns or comments I will start a voting
> > thread
> > > on
> > > > this KIP tomorrow.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin 
> > wrote:
> > > >
> > > > > Hi Magnus,
> > > > >
> > > > > Thanks for the comments. I agree that querying messages within a
> time
> > > > > range is a valid use case (actually this is an example use case in
> my
> > > > > previous email). The current proposal can achieve this by having
> two
> > > > > ListOffsetRequest, right? I think the current API already supports
> > the
> > > > use
> > > > > cases that require the offsets for multiple timestamps. The
> question
> > is
> > > > > that whether it is worth adding more complexity to the protocol to
> > make
> > > > it
> > > > > easier for multiple timestamp query. Personally I think given that
> > > query
> > > > > multiple timestamps is likely an infrequent operation, there is no
> > need
> > > > to
> > > > > optimize for it and complicates the protocol.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > > On Mon, Sep 5, 2016 at 11:21 PM, Magnus Edenhill <
> mag...@edenhill.se
> > >
> > > > > wrote:
> > > > >
> > > > >> Good write-up Qin, the API looks promising.
> > > > >>
> > > > >> I have one comment:
> > > > >>
> > > > >> 2016-09-03 5:20 GMT+02:00 Becket Qin :
> > > > >>
> > > > >> > The currently offsetsForTimes() API obviously does not support
> > > > querying
> > > > >> > multiple timestamps for the same partition. It doesn't seems a
> > > feature
> > > > >> for
> > > > >> > ListOffsetRequest v0 either (sounds more like a bug). My
> intuition
> > > is
> > > > >> that
> > > > >> > it's a rare use case. Given it does not exist before and we
> don't
> > > see
> > > > 
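
For readers following this thread, a minimal sketch of the consumer-side usage under discussion, written against the API shape KIP-79 eventually settled on (offsetsForTimes plus separate beginningOffsets/endOffsets rather than magic timestamps); the broker address, group id, topic, and timestamp are illustrative assumptions:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

// Illustrative use of the post-KIP-79 consumer APIs: look up the first offset with a
// timestamp >= a given time, plus the beginning/end offsets, without -1/-2 magic values.
public class OffsetsForTimesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-lookup-example");    // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("my-topic", 0); // illustrative topic

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            long oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000L;

            // offset of the first message with timestamp >= oneHourAgo (null entry if none)
            Map<TopicPartition, OffsetAndTimestamp> byTime =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));

            // dedicated APIs for the earliest/latest offsets, instead of magic timestamps
            Map<TopicPartition, Long> earliest = consumer.beginningOffsets(Collections.singleton(tp));
            Map<TopicPartition, Long> latest = consumer.endOffsets(Collections.singleton(tp));

            System.out.println("by time: " + byTime + ", earliest: " + earliest + ", latest: " + latest);
        }
    }
}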

[jira] [Updated] (KAFKA-4141) 2x increase in cpu usage on new Producer API

2016-09-07 Thread Andrew Jorgensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jorgensen updated KAFKA-4141:

Description: 
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.1.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are spent in 
org.apache.kafka.common.record.Compressor.putRecord and the other half are in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

h2. old producer:
!http://imgur.com/1xS34Dl.jpg!

h2. new producer:
!http://imgur.com/0w0G5b1.jpg!

  was:
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.1.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

h2. old producer:
!http://imgur.com/1xS34Dl.jpg!

h2. new producer:
!http://imgur.com/0w0G5b1.jpg!


> 2x increase in cpu usage on new Producer API
> 
>
> Key: KAFKA-4141
> URL: https://issues.apache.org/jira/browse/KAFKA-4141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew Jorgensen
>
> We are seeing about a 2x increase in CPU usage for the new kafka producer 
> compared to the 0.8.1.1 producer. We are currently using gzip compression.
> We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 
> and noticed that the cpu usage for the new producer had increased pretty 
> significantly compared to the old producer. This has caused us to need more 
> resources to do the same amount of work we were doing before. I did some 
> quick profiling and it looks like during sending half of the cpu cycles are 
> spent in org.apache.kafka.common.record.Compressor.putRecord and the other 
> half are in org.apache.kafka.common.record.Record.computeChecksum (both are 
> around 5.8% of the cpu cycles for that method). I know its not apples to 
> apples but the old producer did not seem to have this overhead or at least it 
> was greatly reduced.
> Is this a known performance degradation compared to the old producer? 
> h2. old producer:
> !http://imgur.com/1xS34Dl.jpg!
> h2. new producer:
> !http://imgur.com/0w0G5b1.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
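
For context, a minimal sketch of the kind of producer setup being profiled here: a 0.10.x Java producer with compression.type=gzip, as in the report (the broker address, topic, payload size, and record count are illustrative assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

// Illustrative gzip-compressed producer; on the send path each batch is compressed
// and each record is checksummed, which is the work showing up in the profiles above.
public class GzipProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = new byte[1024]; // illustrative payload size
            for (int i = 0; i < 10_000; i++) {
                producer.send(new ProducerRecord<>("profiling-test", payload)); // illustrative topic
            }
        }
    }
}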


[jira] [Updated] (KAFKA-4141) 2x increase in cpu usage on new Producer API

2016-09-07 Thread Andrew Jorgensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jorgensen updated KAFKA-4141:

Description: 
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.1.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

h2. old producer:
!http://imgur.com/1xS34Dl.jpg!

h2. new producer:
!http://imgur.com/0w0G5b1.jpg!

  was:
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.0.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

h2. old producer:
!http://imgur.com/1xS34Dl.jpg!

h2. new producer:
!http://imgur.com/0w0G5b1.jpg!


> 2x increase in cpu usage on new Producer API
> 
>
> Key: KAFKA-4141
> URL: https://issues.apache.org/jira/browse/KAFKA-4141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew Jorgensen
>
> We are seeing about a 2x increase in CPU usage for the new kafka producer 
> compared to the 0.8.1.1 producer. We are currently using gzip compression.
> We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 
> and noticed that the cpu usage for the new producer had increased pretty 
> significantly compared to the old producer. This has caused us to need more 
> resources to do the same amount of work we were doing before. I did some 
> quick profiling and it looks like during sending half of the cpu cycles are 
> sped in org.apache.kafka.common.record.Compressor.putRecord and the other 
> half is in org.apache.kafka.common.record.Record.computeChecksum (both are 
> around 5.8% of the cpu cycles for that method). I know its not apples to 
> apples but the old producer did not seem to have this overhead or at least it 
> was greatly reduced.
> Is this a known performance degradation compared to the old producer? 
> h2. old producer:
> !http://imgur.com/1xS34Dl.jpg!
> h2. new producer:
> !http://imgur.com/0w0G5b1.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4141) 2x increase in cpu usage on new Producer API

2016-09-07 Thread Andrew Jorgensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jorgensen updated KAFKA-4141:

Description: 
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.0.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

h2. old producer:
!http://imgur.com/1xS34Dl.jpg!

h2. new producer:
!http://imgur.com/0w0G5b1.jpg!

  was:
We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.0.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

Here is the trace from the old producer:
!http://imgur.com/1xS34Dl.jpg!

New producer:
!http://imgur.com/0w0G5b1.jpg!


> 2x increase in cpu usage on new Producer API
> 
>
> Key: KAFKA-4141
> URL: https://issues.apache.org/jira/browse/KAFKA-4141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew Jorgensen
>
> We are seeing about a 2x increase in CPU usage for the new kafka producer 
> compared to the 0.8.0.1 producer. We are currently using gzip compression.
> We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 
> and noticed that the cpu usage for the new producer had increased pretty 
> significantly compared to the old producer. This has caused us to need more 
> resources to do the same amount of work we were doing before. I did some 
> quick profiling and it looks like during sending half of the cpu cycles are 
> sped in org.apache.kafka.common.record.Compressor.putRecord and the other 
> half is in org.apache.kafka.common.record.Record.computeChecksum (both are 
> around 5.8% of the cpu cycles for that method). I know its not apples to 
> apples but the old producer did not seem to have this overhead or at least it 
> was greatly reduced.
> Is this a known performance degradation compared to the old producer? 
> h2. old producer:
> !http://imgur.com/1xS34Dl.jpg!
> h2. new producer:
> !http://imgur.com/0w0G5b1.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4141) 2x increase in cp usage on new Producer API

2016-09-07 Thread Andrew Jorgensen (JIRA)
Andrew Jorgensen created KAFKA-4141:
---

 Summary: 2x increase in cp usage on new Producer API
 Key: KAFKA-4141
 URL: https://issues.apache.org/jira/browse/KAFKA-4141
 Project: Kafka
  Issue Type: Bug
Reporter: Andrew Jorgensen


We are seeing about a 2x increase in CPU usage for the new kafka producer 
compared to the 0.8.0.1 producer. We are currently using gzip compression.

We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 and 
noticed that the cpu usage for the new producer had increased pretty 
significantly compared to the old producer. This has caused us to need more 
resources to do the same amount of work we were doing before. I did some quick 
profiling and it looks like during sending half of the cpu cycles are sped in 
org.apache.kafka.common.record.Compressor.putRecord and the other half is in 
org.apache.kafka.common.record.Record.computeChecksum (both are around 5.8% of 
the cpu cycles for that method). I know its not apples to apples but the old 
producer did not seem to have this overhead or at least it was greatly reduced.

Is this a known performance degradation compared to the old producer? 

Here is the trace from the old producer:
!http://imgur.com/1xS34Dl.jpg!

New producer:
!http://imgur.com/0w0G5b1.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4141) 2x increase in cpu usage on new Producer API

2016-09-07 Thread Andrew Jorgensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jorgensen updated KAFKA-4141:

Summary: 2x increase in cpu usage on new Producer API  (was: 2x increase in 
cp usage on new Producer API)

> 2x increase in cpu usage on new Producer API
> 
>
> Key: KAFKA-4141
> URL: https://issues.apache.org/jira/browse/KAFKA-4141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew Jorgensen
>
> We are seeing about a 2x increase in CPU usage for the new kafka producer 
> compared to the 0.8.0.1 producer. We are currently using gzip compression.
> We recently upgraded our kafka server and producer to 0.10.0.1 from 0.8.1.1 
> and noticed that the cpu usage for the new producer had increased pretty 
> significantly compared to the old producer. This has caused us to need more 
> resources to do the same amount of work we were doing before. I did some 
> quick profiling and it looks like during sending half of the cpu cycles are 
> sped in org.apache.kafka.common.record.Compressor.putRecord and the other 
> half is in org.apache.kafka.common.record.Record.computeChecksum (both are 
> around 5.8% of the cpu cycles for that method). I know its not apples to 
> apples but the old producer did not seem to have this overhead or at least it 
> was greatly reduced.
> Is this a known performance degradation compared to the old producer? 
> Here is the trace from the old producer:
> !http://imgur.com/1xS34Dl.jpg!
> New producer:
> !http://imgur.com/0w0G5b1.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-07 Thread Jason Gustafson
Hey Becket,

I don't have a super strong preference, but I think this

earliestOffset(singleton(partition));

captures the intent more clearly than this:

offsetsForTimes(singletonMap(partition, -1));

I can understand the desire to keep the API footprint small, but I think
the use case is common enough to justify separate APIs. A couple additional
points:

1. If we had separate methods, it might make sense to treat negative
timestamps as illegal in offsetsForTimes. That seems safer from the user
perspective since legitimate timestamps should always be positive.
2. The expected behavior of offsetsForTimes is to return the earliest
offset which is greater than or equal to the passed offset, so having
Long.MAX_VALUE return the latest value doesn't seem very intuitive to me. I
would actually expect it to return null.

Given that, I think I prefer having the custom methods. What do you think?

Thanks,
Jason

On Wed, Sep 7, 2016 at 1:00 PM, Becket Qin  wrote:

> Hi Jason,
>
> Thanks for the feedback. That is a good point. For the -1 and -2 semantics,
> I was just thinking we will preserve the semantics in the wire protocol.
> For the user facing API, I agree that is not intuitive. We can do one of
> the following:
> 1. Add two separate methods: earliestOffsets() and latestOffsets().
> 2. just have offsetsForTimes() and return the earliest if the timestamp is
> negative and the latest if the timestamp is Long.MAX_VALUE.
>
> The good thing about doing (1) is that we kind of have symmetric function
> signatures like seekToBeginning() and seekToEnd(). However, even if we do
> (1), we may still need to do (2) to handle the negative timestamp and the
> Long.MAX_VALUE timestamp in offsetsForTimes(). Then they essentially become
> redundant to earliestOffsets() and latestOffsets().
>
> Personally I prefer option (2) because of the conciseness and it seems
> intuitive enough. But I am open to option (1) as well.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson 
> wrote:
>
> > Hey Becket,
> >
> > Thanks for the KIP. As I understand, the intention is to preserve the
> > current behavior with a timestamp of -1 indicating latest timestamp and
> -2
> > indicating earliest timestamp. So users can query these offsets using the
> > offsetsForTimes API if they know the magic values. I'm wondering if it
> > would make the usage a little nicer to have a separate API instead for
> > these special cases? Sort of in the way that we expose a generic seek()
> and
> > a seekToBeginning(), maybe we could have an earliestOffset() in addition
> to
> > offsetsForTimes()?
> >
> > Thanks,
> > Jason
> >
> > On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin 
> wrote:
> >
> > > Thanks everyone for all the feedback.
> > >
> > > If there is no further concerns or comments I will start a voting
> thread
> > on
> > > this KIP tomorrow.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin 
> wrote:
> > >
> > > > Hi Magnus,
> > > >
> > > > Thanks for the comments. I agree that querying messages within a time
> > > > range is a valid use case (actually this is an example use case in my
> > > > previous email). The current proposal can achieve this by having two
> > > > ListOffsetRequest, right? I think the current API already supports
> the
> > > use
> > > > cases that require the offsets for multiple timestamps. The question
> is
> > > > that whether it is worth adding more complexity to the protocol to
> make
> > > it
> > > > easier for multiple timestamp query. Personally I think given that
> > query
> > > > multiple timestamps is likely an infrequent operation, there is no
> need
> > > to
> > > > optimize for it and complicates the protocol.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Mon, Sep 5, 2016 at 11:21 PM, Magnus Edenhill  >
> > > > wrote:
> > > >
> > > >> Good write-up Qin, the API looks promising.
> > > >>
> > > >> I have one comment:
> > > >>
> > > >> 2016-09-03 5:20 GMT+02:00 Becket Qin :
> > > >>
> > > >> > The currently offsetsForTimes() API obviously does not support
> > > querying
> > > >> > multiple timestamps for the same partition. It doesn't seems a
> > feature
> > > >> for
> > > >> > ListOffsetRequest v0 either (sounds more like a bug). My intuition
> > is
> > > >> that
> > > >> > it's a rare use case. Given it does not exist before and we don't
> > see
> > > a
> > > >> > strong need from the community either, maybe it is better to keep
> it
> > > >> simple
> > > >> > for ListOffsetRequest v1. We can add it later if it turns out to
> be
> > a
> > > >> > useful feature (that may need a interface change, but I honestly
> do
> > > not
> > > >> > think people would frequently query many different timestamps for
> > the
> > > >> same
> > > >> > partition)
> > > >> >
> > > >>
> > 

[jira] [Commented] (KAFKA-4074) Deleting a topic can make it unavailable even if delete.topic.enable is false

2016-09-07 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471686#comment-15471686
 ] 

Joel Koshy commented on KAFKA-4074:
---

Yes - totally missed KAFKA-3175

> Deleting a topic can make it unavailable even if delete.topic.enable is false
> -
>
> Key: KAFKA-4074
> URL: https://issues.apache.org/jira/browse/KAFKA-4074
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> The {{delete.topic.enable}} configuration does not completely block the 
> effects of delete topic since the controller may (indirectly) query the list 
> of topics under the delete-topic znode.
> To reproduce:
> * Delete topic X
> * Force a controller move (either by bouncing or removing the controller 
> znode)
> * The new controller will send out UpdateMetadataRequests with leader=-2 for 
> the partitions of X
> * Producers eventually stop producing to that topic
> The reason for this is that when ControllerChannelManager adds 
> UpdateMetadataRequests for brokers, we directly use the partitionsToBeDeleted 
> field of the DeleteTopicManager (which is set to the partitions of the topics 
> under the delete-topic znode on controller startup).
> In order to get out of the situation you have to remove X from the znode and 
> then force another controller move.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-07 Thread Becket Qin
Hi Jason,

Thanks for the feedback. That is a good point. For the -1 and -2 semantics,
I was just thinking we will preserve the semantics in the wire protocol.
For the user facing API, I agree that is not intuitive. We can do one of
the following:
1. Add two separate methods: earliestOffsets() and latestOffsets().
2. just have offsetsForTimes() and return the earliest if the timestamp is
negative and the latest if the timestamp is Long.MAX_VALUE.

The good thing about doing (1) is that we kind of have symmetric function
signatures like seekToBeginning() and seekToEnd(). However, even if we do
(1), we may still need to do (2) to handle the negative timestamp and the
Long.MAX_VALUE timestamp in offsetsForTimes(). Then they essentially become
redundant to earliestOffsets() and latestOffsets().

Personally I prefer option (2) because of the conciseness and it seems
intuitive enough. But I am open to option (1) as well.

Thanks,

Jiangjie (Becket) Qin

On Wed, Sep 7, 2016 at 11:06 AM, Jason Gustafson  wrote:

> Hey Becket,
>
> Thanks for the KIP. As I understand, the intention is to preserve the
> current behavior with a timestamp of -1 indicating latest timestamp and -2
> indicating earliest timestamp. So users can query these offsets using the
> offsetsForTimes API if they know the magic values. I'm wondering if it
> would make the usage a little nicer to have a separate API instead for
> these special cases? Sort of in the way that we expose a generic seek() and
> a seekToBeginning(), maybe we could have an earliestOffset() in addition to
> offsetsForTimes()?
>
> Thanks,
> Jason
>
> On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin  wrote:
>
> > Thanks everyone for all the feedback.
> >
> > If there is no further concerns or comments I will start a voting thread
> on
> > this KIP tomorrow.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin  wrote:
> >
> > > Hi Magnus,
> > >
> > > Thanks for the comments. I agree that querying messages within a time
> > > range is a valid use case (actually this is an example use case in my
> > > previous email). The current proposal can achieve this by having two
> > > ListOffsetRequest, right? I think the current API already supports the
> > use
> > > cases that require the offsets for multiple timestamps. The question is
> > > that whether it is worth adding more complexity to the protocol to make
> > it
> > > easier for multiple timestamp query. Personally I think given that
> query
> > > multiple timestamps is likely an infrequent operation, there is no need
> > to
> > > optimize for it and complicates the protocol.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Mon, Sep 5, 2016 at 11:21 PM, Magnus Edenhill 
> > > wrote:
> > >
> > >> Good write-up Qin, the API looks promising.
> > >>
> > >> I have one comment:
> > >>
> > >> 2016-09-03 5:20 GMT+02:00 Becket Qin :
> > >>
> > >> > The currently offsetsForTimes() API obviously does not support
> > querying
> > >> > multiple timestamps for the same partition. It doesn't seems a
> feature
> > >> for
> > >> > ListOffsetRequest v0 either (sounds more like a bug). My intuition
> is
> > >> that
> > >> > it's a rare use case. Given it does not exist before and we don't
> see
> > a
> > >> > strong need from the community either, maybe it is better to keep it
> > >> simple
> > >> > for ListOffsetRequest v1. We can add it later if it turns out to be
> a
> > >> > useful feature (that may need a interface change, but I honestly do
> > not
> > >> > think people would frequently query many different timestamps for
> the
> > >> same
> > >> > partition)
> > >> >
> > >>
> > >> I argue that the current behaviour of OffsetRequest with regards to
> > >> duplicate partitions is a bug
> > >> and think it would be a mistake to move the same semantics over to
> thew
> > >> new
> > >> ListOffset API.
> > >> One use case is that an application may want to know the offset range
> > >> between two timestamps,
> > >> e.g., for reprocessing, batching, searching, etc.
> > >>
> > >>
> > >> Thanks,
> > >> Magnus
> > >>
> > >>
> > >>
> > >> >
> > >> > Have a good long weekend!
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Jiangjie (Becket) Qin
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > On Fri, Sep 2, 2016 at 6:10 PM, Ismael Juma 
> > wrote:
> > >> >
> > >> > > Thanks for the proposal Becket. Looks good overall, a few
> comments:
> > >> > >
> > >> > > ListOffsetResponse => [TopicName [PartitionOffsets]]
> > >> > > >   PartitionOffsets => Partition ErrorCode Timestamp [Offset]
> > >> > > >   Partition => int32
> > >> > > >   ErrorCode => int16
> > >> > > >   Timestamp => int64
> > >> > > >   Offset => int
> > >> > >
> > >> > >
> > >> > > It should be int64 for `Offset` right?
> > >> > >
> > >> > > Implementation wise, we will migrate to o.a.k.common.requests.
> > >> > > 

[jira] [Commented] (KAFKA-4140) Update system tests to allow running tests in parallel

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471552#comment-15471552
 ] 

ASF GitHub Bot commented on KAFKA-4140:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/1834

[WIP] KAFKA-4140: make system tests parallel friendly

Updates to take advantage of soon-to-be-released ducktape features.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka systest-parallel-friendly

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1834.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1834


commit 7f8d49d32d35e0980c84c895b6384d9d4f9e15e8
Author: Geoff Anderson 
Date:   2016-08-05T01:46:51Z

Added state check in stop() method in zookeeper.py

commit 4ac3aa410e9a1fd692e063d9bd4d238cf63299cb
Author: Geoff Anderson 
Date:   2016-08-05T01:47:24Z

Added wait_until for line_count check in console consumer sanity check test

commit 9f727d6afe6e5a77c6d272b2a1cc876c64b80101
Author: Geoff Anderson 
Date:   2016-08-08T06:10:02Z

Tweak system tests/services for running in parallel

commit c049cdaed50b31d28da4fc7bdc1d255657c8654d
Author: Geoff Anderson 
Date:   2016-08-08T17:55:52Z

Save keystore/truststore in local scratch to make parallel safe

commit a7b1530597b04508ebe78aeb7fcc89fea75ce597
Author: Geoff Anderson 
Date:   2016-08-08T23:08:16Z

More informative error message

commit 7471460ebf7359bd63e9ce90d63289163d6758bb
Author: Geoff Anderson 
Date:   2016-08-08T23:12:26Z

Use stdout instead of stderr

commit ff1221862f11fff8daf82cb88c5b7ff5ab6f9ae7
Author: Geoff Anderson 
Date:   2016-08-09T19:54:01Z

Some adjustments to expected cluster usage

commit 051825dfd47d92a0f6856faa53a80db8e3680a76
Author: Geoff Anderson 
Date:   2016-08-11T17:03:25Z

Cluster size adjustments and other fixes

commit 7f447ae50dd1518a0f6d48daa13688c22dde8cad
Author: Geoff Anderson 
Date:   2016-08-11T21:52:26Z

More fixes to tests

commit 150e8438ca4b1ec2ad2d198ba954c09a04fdc11b
Author: Geoff Anderson 
Date:   2016-08-30T17:58:16Z

Minor cleanup

commit 3eb185307f31d02034dcf3d4f8e684b8324baf0c
Author: Geoff Anderson 
Date:   2016-08-30T23:27:45Z

Use per-test local scratch directory in minikdc

commit b1cca95b78004275fe8bf0602b7c2566abd87105
Author: Geoff Anderson 
Date:   2016-08-31T23:20:23Z

Removed a few uses of scp_to in system tests

commit b29e1fa179be201cf201a11e92fc1a4c21fa18d1
Author: Geoff Anderson 
Date:   2016-09-04T07:46:18Z

Tabs to spaces, scp_to -> copy_to after rebase

commit 9ebb4c35b29796e1d3d0ab92b9ae4e8bad6bf2c3
Author: Geoff Anderson 
Date:   2016-09-06T06:07:13Z

Updated scratch_dir -> local_scratch_dir per changes in ducktape




> Update system tests to allow running tests in parallel
> --
>
> Key: KAFKA-4140
> URL: https://issues.apache.org/jira/browse/KAFKA-4140
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> The framework used to run system tests will soon have the capability to run 
> tests in parallel. In our validations, we've found significant speedup with 
> modest increase in the size of the worker cluster, as well as much better 
> usage of the cluster resources.
> A few updates to the kafka system test services and tests are needed to take 
> full advantage of this:
> 1) cluster usage annotation - this provides a hint to the framework about 
> what cluster resources to set aside for a given test, and lets the driver 
> efficiently use the worker cluster.
> 2) eliminate a few canonical paths on the test driver. This is fine when 
> tests are run serially, but in parallel, different tests end up colliding on 
> these paths. The primary culprits here are security_config.py, and minikdc.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1834: [WIP] KAFKA-4140: make system tests parallel frien...

2016-09-07 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/1834

[WIP] KAFKA-4140: make system tests parallel friendly

Updates to take advantage of soon-to-be-released ducktape features.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka systest-parallel-friendly

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1834.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1834


commit 7f8d49d32d35e0980c84c895b6384d9d4f9e15e8
Author: Geoff Anderson 
Date:   2016-08-05T01:46:51Z

Added state check in stop() method in zookeeper.py

commit 4ac3aa410e9a1fd692e063d9bd4d238cf63299cb
Author: Geoff Anderson 
Date:   2016-08-05T01:47:24Z

Added wait_until for line_count check in console consumer sanity check test

commit 9f727d6afe6e5a77c6d272b2a1cc876c64b80101
Author: Geoff Anderson 
Date:   2016-08-08T06:10:02Z

Tweak system tests/services for running in parallel

commit c049cdaed50b31d28da4fc7bdc1d255657c8654d
Author: Geoff Anderson 
Date:   2016-08-08T17:55:52Z

Save keystore/truststore in local scratch to make parallel safe

commit a7b1530597b04508ebe78aeb7fcc89fea75ce597
Author: Geoff Anderson 
Date:   2016-08-08T23:08:16Z

More informative error message

commit 7471460ebf7359bd63e9ce90d63289163d6758bb
Author: Geoff Anderson 
Date:   2016-08-08T23:12:26Z

Use stdout instead of stderr

commit ff1221862f11fff8daf82cb88c5b7ff5ab6f9ae7
Author: Geoff Anderson 
Date:   2016-08-09T19:54:01Z

Some adjustments to expected cluster usage

commit 051825dfd47d92a0f6856faa53a80db8e3680a76
Author: Geoff Anderson 
Date:   2016-08-11T17:03:25Z

Cluster size adjustments and other fixes

commit 7f447ae50dd1518a0f6d48daa13688c22dde8cad
Author: Geoff Anderson 
Date:   2016-08-11T21:52:26Z

More fixes to tests

commit 150e8438ca4b1ec2ad2d198ba954c09a04fdc11b
Author: Geoff Anderson 
Date:   2016-08-30T17:58:16Z

Minor cleanup

commit 3eb185307f31d02034dcf3d4f8e684b8324baf0c
Author: Geoff Anderson 
Date:   2016-08-30T23:27:45Z

Use per-test local scratch directory in minikdc

commit b1cca95b78004275fe8bf0602b7c2566abd87105
Author: Geoff Anderson 
Date:   2016-08-31T23:20:23Z

Removed a few uses of scp_to in system tests

commit b29e1fa179be201cf201a11e92fc1a4c21fa18d1
Author: Geoff Anderson 
Date:   2016-09-04T07:46:18Z

Tabs to spaces, scp_to -> copy_to after rebase

commit 9ebb4c35b29796e1d3d0ab92b9ae4e8bad6bf2c3
Author: Geoff Anderson 
Date:   2016-09-06T06:07:13Z

Updated scratch_dir -> local_scratch_dir per changes in ducktape




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4140) Update system tests to allow running tests in parallel

2016-09-07 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-4140:
-

 Summary: Update system tests to allow running tests in parallel
 Key: KAFKA-4140
 URL: https://issues.apache.org/jira/browse/KAFKA-4140
 Project: Kafka
  Issue Type: Improvement
Reporter: Geoff Anderson
Assignee: Geoff Anderson


The framework used to run system tests will soon have the capability to run 
tests in parallel. In our validations, we've found a significant speedup with a 
modest increase in the size of the worker cluster, as well as much better use 
of the cluster resources.

A few updates to the kafka system test services and tests are needed to take 
full advantage of this:

1) cluster usage annotation - this provides a hint to the framework about what 
cluster resources to set aside for a given test, and lets the driver 
efficiently use the worker cluster.
2) eliminate a few canonical paths on the test driver. This is fine when tests 
are run serially, but in parallel, different tests end up colliding on these 
paths. The primary culprits here are security_config.py, and minikdc.py




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Jason Gustafson
+1 and thanks for the excellent write-up!

On Wed, Sep 7, 2016 at 10:41 AM, Neha Narkhede  wrote:

> +1 (binding)
>
> On Wed, Sep 7, 2016 at 9:49 AM Grant Henke  wrote:
>
> > +1 (non-binding)
> >
> > On Wed, Sep 7, 2016 at 6:55 AM, Rajini Sivaram <
> > rajinisiva...@googlemail.com
> > > wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Wed, Sep 7, 2016 at 4:09 AM, Sriram Subramanian 
> > > wrote:
> > >
> > > > +1 binding
> > > >
> > > > > On Sep 6, 2016, at 7:46 PM, Ismael Juma  wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to (re)initiate[1] the voting process for KIP-78
> Cluster
> > > Id:
> > > > >
> > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> > > > >
> > > > > As explained in the KIP and discussion thread, we see this as a
> good
> > > > first
> > > > > step that can serve as a foundation for future improvements.
> > > > >
> > > > > Thanks,
> > > > > Ismael
> > > > >
> > > > > [1] Even though I created a new vote thread, Gmail placed the
> > messages
> > > in
> > > > > the discuss thread, making it not as visible as required. It's
> > > important
> > > > to
> > > > > mention that two +1s were cast by Gwen and Sriram:
> > > > >
> > > > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> > > > mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> > > > 40mail.gmail.com%3E
> > > >
> > >
> > >
> > >
> > > --
> > > Regards,
> > >
> > > Rajini
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
> --
> Thanks,
> Neha
>


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-07 Thread Jason Gustafson
Hey Becket,

Thanks for the KIP. As I understand, the intention is to preserve the
current behavior with a timestamp of -1 indicating latest timestamp and -2
indicating earliest timestamp. So users can query these offsets using the
offsetsForTimes API if they know the magic values. I'm wondering if it
would make the usage a little nicer to have a separate API instead for
these special cases? Sort of in the way that we expose a generic seek() and
a seekToBeginning(), maybe we could have an earliestOffset() in addition to
offsetsForTimes()?
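
For concreteness, the two styles would look roughly like this (just a sketch:
offsetsForTimes() follows the shape proposed in the KIP, and
earliestOffset()/latestOffset() are hypothetical names for the dedicated
variants):

    // Style 1: magic timestamps through the generic lookup proposed in the KIP.
    Map<TopicPartition, Long> query = Collections.singletonMap(tp, -2L); // -2 = earliest
    long earliest = consumer.offsetsForTimes(query).get(tp).offset;

    // Style 2: hypothetical dedicated methods, analogous to seek()/seekToBeginning().
    long earliestAlt = consumer.earliestOffset(tp);
    long latestAlt = consumer.latestOffset(tp);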

Thanks,
Jason

On Wed, Sep 7, 2016 at 10:04 AM, Becket Qin  wrote:

> Thanks everyone for all the feedback.
>
> If there are no further concerns or comments, I will start a voting thread on
> this KIP tomorrow.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin  wrote:
>
> > Hi Magnus,
> >
> > Thanks for the comments. I agree that querying messages within a time
> > range is a valid use case (actually this is an example use case in my
> > previous email). The current proposal can achieve this by issuing two
> > ListOffsetRequests, right? I think the current API already supports the use
> > cases that require offsets for multiple timestamps. The question is whether
> > it is worth adding more complexity to the protocol to make multiple-timestamp
> > queries easier. Personally I think that, given that querying multiple
> > timestamps is likely an infrequent operation, there is no need to optimize
> > for it and complicate the protocol.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Mon, Sep 5, 2016 at 11:21 PM, Magnus Edenhill 
> > wrote:
> >
> >> Good write-up Qin, the API looks promising.
> >>
> >> I have one comment:
> >>
> >> 2016-09-03 5:20 GMT+02:00 Becket Qin :
> >>
> >> > The current offsetsForTimes() API obviously does not support querying
> >> > multiple timestamps for the same partition. It doesn't seem to be a feature
> >> > of ListOffsetRequest v0 either (sounds more like a bug). My intuition is
> >> > that it's a rare use case. Given that it did not exist before and we don't
> >> > see a strong need from the community either, maybe it is better to keep it
> >> > simple for ListOffsetRequest v1. We can add it later if it turns out to be
> >> > a useful feature (that may need an interface change, but I honestly do not
> >> > think people would frequently query many different timestamps for the same
> >> > partition)
> >> >
> >>
> >> I argue that the current behaviour of OffsetRequest with regards to
> >> duplicate partitions is a bug
> >> and think it would be a mistake to move the same semantics over to the
> >> new
> >> ListOffset API.
> >> One use case is that an application may want to know the offset range
> >> between two timestamps,
> >> e.g., for reprocessing, batching, searching, etc.
> >>
> >>
> >> Thanks,
> >> Magnus
> >>
> >>
> >>
> >> >
> >> > Have a good long weekend!
> >> >
> >> > Thanks,
> >> >
> >> > Jiangjie (Becket) Qin
> >> >
> >> >
> >> >
> >> >
> >> > On Fri, Sep 2, 2016 at 6:10 PM, Ismael Juma 
> wrote:
> >> >
> >> > > Thanks for the proposal Becket. Looks good overall, a few comments:
> >> > >
> >> > > ListOffsetResponse => [TopicName [PartitionOffsets]]
> >> > > >   PartitionOffsets => Partition ErrorCode Timestamp [Offset]
> >> > > >   Partition => int32
> >> > > >   ErrorCode => int16
> >> > > >   Timestamp => int64
> >> > > >   Offset => int
> >> > >
> >> > >
> >> > > It should be int64 for `Offset` right?
> >> > >
> >> > > Implementation wise, we will migrate to o.a.k.common.requests.
> >> > > ListOffsetRequest
> >> > > > class on the broker side.
> >> > >
> >> > >
> >> > > Could you clarify what you mean here? We already
> >> > > use o.a.k.common.requests.ListOffsetRequest in KafkaApis.
> >> > >
> >> > > long offset = consumer.offsetForTime(Collections.singletonMap(
> >> > > topicPartition,
> >> > > > targetTime)).offset;
> >> > >
> >> > >
> >> > > The result of `offsetForTime` is a Map, so we can't just call
> >> `offset` on
> >> > > it. You probably meant something like:
> >> > >
> >> > > long offset = consumer.offsetForTime(Collections.singletonMap(
> >> > > topicPartition,
> >> > > targetTime)).get(topicPartition).offset;
> >> > >
> >> > > Test searchByTimestamp with CreateTime and LogAppendTime
> >> > > >
> >> > >
> >> > > Do you mean `Test offsetForTime`?
> >> > >
> >> > > And:
> >> > >
> >> > > 1. In KAFKA-1588, the following issue was described "When performing
> >> an
> >> > > OffsetRequest, if you request the same topic and partition
> combination
> >> > in a
> >> > > single request more than once (for example, if you want to get both
> >> the
> >> > > head and tail offsets for a partition in the same request), you will
> >> get
> >> > a
> >> > > response for both, but they will be the same offset". Will the new
> >> > request
> >> > > version 

Build failed in Jenkins: kafka-trunk-jdk7 #1522

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3129; Console producer issue when request-required-acks=0

--
[...truncated 12204 lines...]
org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs 

Re: [DISCUSS] Time-based releases for Apache Kafka

2016-09-07 Thread Neha Narkhede
Thanks for volunteering, Jason!

On Wed, Sep 7, 2016 at 1:59 AM Ismael Juma  wrote:

> Thanks for volunteering Jason. Sounds good to me,
>
> Ismael
>
> On Wed, Sep 7, 2016 at 4:22 AM, Jason Gustafson 
> wrote:
>
> > Hey All,
> >
> > It sounds like the general consensus is in favor of time-based releases.
> We
> > can continue the discussion about LTS, but I wanted to go ahead and get
> > things moving forward by volunteering to manage the next release, which
> is
> > currently slated for October. If that sounds OK, I'll draft a release
> plan
> > and send it out to the community for feedback and a vote.
> >
> > Thanks,
> > Jason
> >
> > On Thu, Aug 25, 2016 at 2:03 PM, Ofir Manor 
> wrote:
> >
> > > I happily agree that Kafka is solid and the community is great :)
> > > But I think there is a gap in perception here.
> > > For me, LTS means that someone is actively taking care of a release -
> > > actively backporting critical fixes (security, stability, data loss,
> > > corruption, hangs etc) from trunk to that LTS version periodically for
> an
> > > extended period of time, for example 18-36 months... So people can
> really
> > > rely on the same Kafka version for a long time.
> > > Is someone doing it today for 0.9.0? When is 0.9.0.2 expected? When is
> > > 0.8.2.3 expected? Will they cover all known critical issues for whoever
> > > relies on them in production?
> > > In other words, what is the scope of support that the community want to
> > > commit for older versions? (upgrade compatibility? investigating bug
> > > reports? proactively backporting fixes?)
> > > BTW, another legit option is that the Apache Kafka project won't commit
> > to
> > > LTS releases. It could let commercial vendors compete on supporting
> very
> > > old versions. I find that actually quite reasonable as well.
> > >
> > > Ofir Manor
> > >
> > > Co-Founder & CTO | Equalum
> > >
> > > Mobile: +972-54-7801286 | Email: ofir.ma...@equalum.io
> > >
> > > On Thu, Aug 25, 2016 at 8:19 PM, Andrew Schofield <
> > > andrew_schofield_j...@outlook.com> wrote:
> > >
> > > > I agree that the Kafka community has managed to maintain a very high
> > > > quality level, so I'm not concerned
> > > > about the quality of non-LTS releases. If the principle is that every
> > > > release is supported for 2 years, that
> > > > would be good. I suppose that if the burden of having that many
> > > in-support
> > > > releases proves too heavy,
> > > > as you say we could reconsider.
> > > >
> > > > Andrew Schofield
> > > >
> > > > 
> > > > > From: g...@confluent.io
> > > > > Date: Thu, 25 Aug 2016 09:57:30 -0700
> > > > > Subject: Re: [DISCUSS] Time-based releases for Apache Kafka
> > > > > To: dev@kafka.apache.org
> > > > >
> > > > > I prefer Ismael's suggestion for supporting 2-years (6 releases)
> > > > > rather than have designated LTS releases.
> > > > >
> > > > > The LTS model seems to work well when some releases are high
> quality
> > > > > (LTS) and the rest are a bit more questionable. It is great for
> > > > > companies like Redhat, where they have to invest less to support
> few
> > > > > releases and let the community deal with everything else.
> > > > >
> > > > > Until now the Kafka community has managed to maintain very high
> > > > > quality level. Not just for releases, our trunk is often of better
> > > > > quality than other project's releases - we don't think of stability
> > as
> > > > > something you tuck into a release (and just some releases) but
> rather
> > > > > as an on-going concern. There are costs to doing things that way,
> but
> > > > > in general, I think it has served us well - allowing even
> > conservative
> > > > > companies to run on the latest released version.
> > > > >
> > > > > I hope we can agree to at least try maintaining last 6 releases as
> > LTS
> > > > > (i.e. every single release is supported for 2 years) rather than
> > > > > designate some releases as better than others. Of course, if this
> > > > > totally fails, we can reconsider.
> > > > >
> > > > > Gwen
> > > > >
> > > > > On Thu, Aug 25, 2016 at 9:51 AM, Andrew Schofield
> > > > >  wrote:
> > > > >> The proposal sounds pretty good, but the main thing currently
> > missing
> > > > is a proper long-term support release.
> > > > >>
> > > > >> Having 3 releases a year sounds OK, but if they're all equivalent
> > and
> > > > bugfix releases are produced for the most
> > > > >> recent 2 or 3 releases, anyone wanting to run on an "in support"
> > > > release of Kafka has to upgrade every 8-12 months.
> > > > >> If you don't actually want anything specific from the newer
> > releases,
> > > > it's just unnecessary churn.
> > > > >>
> > > > >> Wouldn't it be better to designate one release every 12-18 months
> > as a
> > > > long-term support release with bugfix releases
> > > > >> produced for those for a longer period 

Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Neha Narkhede
+1 (binding)

On Wed, Sep 7, 2016 at 9:49 AM Grant Henke  wrote:

> +1 (non-binding)
>
> On Wed, Sep 7, 2016 at 6:55 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com
> > wrote:
>
> > +1 (non-binding)
> >
> > On Wed, Sep 7, 2016 at 4:09 AM, Sriram Subramanian 
> > wrote:
> >
> > > +1 binding
> > >
> > > > On Sep 6, 2016, at 7:46 PM, Ismael Juma  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I would like to (re)initiate[1] the voting process for KIP-78 Cluster
> > Id:
> > > >
> > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> > > >
> > > > As explained in the KIP and discussion thread, we see this as a good
> > > first
> > > > step that can serve as a foundation for future improvements.
> > > >
> > > > Thanks,
> > > > Ismael
> > > >
> > > > [1] Even though I created a new vote thread, Gmail placed the
> messages
> > in
> > > > the discuss thread, making it not as visible as required. It's
> > important
> > > to
> > > > mention that two +1s were cast by Gwen and Sriram:
> > > >
> > > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> > > mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> > > 40mail.gmail.com%3E
> > >
> >
> >
> >
> > --
> > Regards,
> >
> > Rajini
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>
-- 
Thanks,
Neha


Build failed in Jenkins: kafka-trunk-jdk8 #866

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3129; Console producer issue when request-required-acks=0

--
[...truncated 3469 lines...]
kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreNonEmptyGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync STARTED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers PASSED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
PASSED


[jira] [Assigned] (KAFKA-4135) Inconsistent javadoc for KafkaConsumer.poll behavior when there is no subscription

2016-09-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-4135:
--

Assignee: Vahid Hashemian

> Inconsistent javadoc for KafkaConsumer.poll behavior when there is no 
> subscription
> --
>
> Key: KAFKA-4135
> URL: https://issues.apache.org/jira/browse/KAFKA-4135
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
>
> Currently, the javadoc for {{KafkaConsumer.poll}} says the following: 
> "It is an error to not have subscribed to any topics or partitions before 
> polling for data." However, we don't actually raise an exception if this is 
> the case. Perhaps we should?
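
For reference, a minimal sketch of the scenario in question (props is assumed to 
hold the usual consumer configuration):

{code}
// No subscribe() or assign() has been called on this consumer.
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
ConsumerRecords<String, String> records = consumer.poll(100);
// The javadoc calls this an error, but no exception is currently raised.
{code}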



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-09-07 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote reopened KAFKA-3129:


Reopening per [~ijuma]'s recommendation as acks=0 in general still has a 
problem somewhere along the way that isn't fully understood.
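
For context, acks=0 in the Java producer is pure fire-and-forget; the sketch below 
(illustrative only, not the console producer's actual code path) shows why in-flight 
messages can be lost without any error surfacing:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "0"); // the broker sends no acknowledgement back to the producer
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    for (int i = 1; i <= 1_000_000; i++)
        producer.send(new ProducerRecord<>("test", Integer.toString(i)));
}
// With acks=0 there is no delivery confirmation, so anything dropped in flight
// (e.g. around shutdown) is lost silently.
{code}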

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-09-07 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote resolved KAFKA-3129.

Resolution: Fixed

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-79 - ListOffsetRequest v1 and offsetForTime() method in new consumer.

2016-09-07 Thread Becket Qin
Thanks everyone for all the feedback.

If there are no further concerns or comments, I will start a voting thread on
this KIP tomorrow.

Thanks,

Jiangjie (Becket) Qin

On Tue, Sep 6, 2016 at 9:48 AM, Becket Qin  wrote:

> Hi Magnus,
>
> Thanks for the comments. I agree that querying messages within a time
> range is a valid use case (actually this is an example use case in my
> previous email). The current proposal can achieve this by issuing two
> ListOffsetRequests, right? I think the current API already supports the use
> cases that require offsets for multiple timestamps. The question is whether it
> is worth adding more complexity to the protocol to make multiple-timestamp
> queries easier. Personally I think that, given that querying multiple
> timestamps is likely an infrequent operation, there is no need to optimize for
> it and complicate the protocol.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Sep 5, 2016 at 11:21 PM, Magnus Edenhill 
> wrote:
>
>> Good write-up Qin, the API looks promising.
>>
>> I have one comment:
>>
>> 2016-09-03 5:20 GMT+02:00 Becket Qin :
>>
>> > The current offsetsForTimes() API obviously does not support querying
>> > multiple timestamps for the same partition. It doesn't seem to be a feature
>> > of ListOffsetRequest v0 either (sounds more like a bug). My intuition is that
>> > it's a rare use case. Given that it did not exist before and we don't see a
>> > strong need from the community either, maybe it is better to keep it simple
>> > for ListOffsetRequest v1. We can add it later if it turns out to be a useful
>> > feature (that may need an interface change, but I honestly do not think
>> > people would frequently query many different timestamps for the same
>> > partition)
>> >
>>
>> I argue that the current behaviour of OffsetRequest with regards to
>> duplicate partitions is a bug
>> and think it would be a mistake to move the same semantics over to the
>> new
>> ListOffset API.
>> One use case is that an application may want to know the offset range
>> between two timestamps,
>> e.g., for reprocessing, batching, searching, etc.
>>
>>
>> Thanks,
>> Magnus
>>
>>
>>
>> >
>> > Have a good long weekend!
>> >
>> > Thanks,
>> >
>> > Jiangjie (Becket) Qin
>> >
>> >
>> >
>> >
>> > On Fri, Sep 2, 2016 at 6:10 PM, Ismael Juma  wrote:
>> >
>> > > Thanks for the proposal Becket. Looks good overall, a few comments:
>> > >
>> > > ListOffsetResponse => [TopicName [PartitionOffsets]]
>> > > >   PartitionOffsets => Partition ErrorCode Timestamp [Offset]
>> > > >   Partition => int32
>> > > >   ErrorCode => int16
>> > > >   Timestamp => int64
>> > > >   Offset => int
>> > >
>> > >
>> > > It should be int64 for `Offset` right?
>> > >
>> > > Implementation wise, we will migrate to o.a.k.common.requests.
>> > > ListOffsetRequest
>> > > > class on the broker side.
>> > >
>> > >
>> > > Could you clarify what you mean here? We already
>> > > use o.a.k.common.requests.ListOffsetRequest in KafkaApis.
>> > >
>> > > long offset = consumer.offsetForTime(Collections.singletonMap(
>> > > topicPartition,
>> > > > targetTime)).offset;
>> > >
>> > >
>> > > The result of `offsetForTime` is a Map, so we can't just call
>> `offset` on
>> > > it. You probably meant something like:
>> > >
>> > > long offset = consumer.offsetForTime(Collections.singletonMap(
>> > > topicPartition,
>> > > targetTime)).get(topicPartition).offset;
>> > >
>> > > Test searchByTimestamp with CreateTime and LogAppendTime
>> > > >
>> > >
>> > > Do you mean `Test offsetForTime`?
>> > >
>> > > And:
>> > >
>> > > 1. In KAFKA-1588, the following issue was described "When performing
>> an
>> > > OffsetRequest, if you request the same topic and partition combination
>> > in a
>> > > single request more than once (for example, if you want to get both
>> the
>> > > head and tail offsets for a partition in the same request), you will
>> get
>> > a
>> > > response for both, but they will be the same offset". Will the new
>> > request
>> > > version support the use case where multiple timestamps are passed for
>> the
>> > > same topic partition? And if we do support it at the protocol level,
>> do
>> > we
>> > > also want to support it at the API level or do we think the additional
>> > > complexity is not worth it?
>> > >
>> > > 2. Is `offsetForTime` the right method name given that we are getting
>> > > multiple offsets? Maybe it should be `offsetsForTimes` or something
>> like
>> > > that.
>> > >
>> > > Ismael
>> > >
>> > > On Wed, Aug 31, 2016 at 4:38 AM, Becket Qin 
>> > wrote:
>> > >
>> > > > Hi Kafka devs,
>> > > >
>> > > > I created KIP-79 to allow consumer to precisely query the offsets
>> based
>> > > on
>> > > > timestamp.
>> > > >
>> > > > In short we propose to :
>> > > > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
>> > > > 2. add an offsetForTime() method in new consumer.
>> > > >
>> > > > The KIP 

Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Grant Henke
+1 (non-binding)

On Wed, Sep 7, 2016 at 6:55 AM, Rajini Sivaram  wrote:

> +1 (non-binding)
>
> On Wed, Sep 7, 2016 at 4:09 AM, Sriram Subramanian 
> wrote:
>
> > +1 binding
> >
> > > On Sep 6, 2016, at 7:46 PM, Ismael Juma  wrote:
> > >
> > > Hi all,
> > >
> > > I would like to (re)initiate[1] the voting process for KIP-78 Cluster
> Id:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> > >
> > > As explained in the KIP and discussion thread, we see this as a good
> > first
> > > step that can serve as a foundation for future improvements.
> > >
> > > Thanks,
> > > Ismael
> > >
> > > [1] Even though I created a new vote thread, Gmail placed the messages
> in
> > > the discuss thread, making it not as visible as required. It's
> important
> > to
> > > mention that two +1s were cast by Gwen and Sriram:
> > >
> > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> > mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> > 40mail.gmail.com%3E
> >
>
>
>
> --
> Regards,
>
> Rajini
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-07 Thread Grant Henke
Congratulations and thank you for all of your contributions to Apache
Kafka Jason!

On Wed, Sep 7, 2016 at 10:12 AM, Mayuresh Gharat  wrote:

> congrats Jason !
>
> Thanks,
>
> Mayuresh
>
> On Wed, Sep 7, 2016 at 5:16 AM, Eno Thereska 
> wrote:
>
> > Congrats Jason!
> >
> > Eno
> > > On 7 Sep 2016, at 10:00, Rajini Sivaram 
> > wrote:
> > >
> > > Congrats, Jason!
> > >
> > > On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA 
> > wrote:
> > >
> > >> Congrats, Jason. Well done and great to see this project inviting new
> > >> committers.
> > >>
> > >> -Flavio
> > >>
> > >> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
> > >>
> > >>> Congrats, Jason!
> > >>>
> > >>> On Tuesday, September 6, 2016, Jason Gustafson 
> > >> wrote:
> > >>>
> >  Thanks all!
> > 
> >  On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin  >  > wrote:
> > 
> > > Congrats, Jason!
> > >
> > > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
> >   > >>
> > > wrote:
> > >
> > >> congrats jason!
> > >>
> > >> On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
> > >> r...@confluent.io
> >  >
> > >> wrote:
> > >>
> > >>> Congratulations Jason!
> > >>>
> > >>> On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> > >>> vahidhashem...@us.ibm.com 
> >  wrote:
> > >>>
> >  Congratulations Jason on this very well deserved recognition.
> > 
> >  --Vahid
> > 
> > 
> > 
> >  From:   Neha Narkhede >
> >  To: "dev@kafka.apache.org " <
> >  dev@kafka.apache.org >,
> >  "us...@kafka.apache.org " <
> > >> us...@kafka.apache.org
> >  >
> >  Cc: "priv...@kafka.apache.org " <
> >  priv...@kafka.apache.org >
> >  Date:   09/06/2016 03:26 PM
> >  Subject:[ANNOUNCE] New committer: Jason Gustafson
> > 
> > 
> > 
> >  The PMC for Apache Kafka has invited Jason Gustafson to join
> > >> as a
> >  committer and
> >  we are pleased to announce that he has accepted!
> > 
> >  Jason has contributed numerous patches to a wide range of
> > >> areas,
> > >> notably
> >  within the new consumer and the Kafka Connect layers. He has
> > > displayed
> >  great taste and judgement which has been apparent through his
> > >> involvement
> >  across the board from mailing lists, JIRA, code reviews to
> > > contributing
> >  features, bug fixes and code and documentation improvements.
> > 
> >  Thank you for your contribution and welcome to Apache Kafka,
> > >>> Jason!
> >  --
> >  Thanks,
> >  Neha
> > 
> > 
> > 
> > 
> > 
> > >>>
> > >>
> > >
> > 
> > >>>
> > >>>
> > >>> --
> > >>> Ashish h
> > >>>
> > >>
> > >
> > >
> > >
> > > --
> > > Regards,
> > >
> > > Rajini
> >
> >
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471028#comment-15471028
 ] 

ASF GitHub Bot commented on KAFKA-3129:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1795


> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1795: KAFKA-3129: Console producer issue when request-re...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1795


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1833: MINOR: fix transient QueryableStateIntegration tes...

2016-09-07 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1833

MINOR: fix transient QueryableStateIntegration test failure

The verification in verifyGreaterOrEqual was incorrect. It was failing when 
a new key was found.
Set the TimeWindow to a large value so that all windowed results fall in a 
single window.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-test-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1833.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1833


commit 82e32d76569e586dbeb3015fb1bf0c2d8baae4a5
Author: Damian Guy 
Date:   2016-09-07T15:30:54Z

fix transient test failure




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-07 Thread Mayuresh Gharat
congrats Jason !

Thanks,

Mayuresh

On Wed, Sep 7, 2016 at 5:16 AM, Eno Thereska  wrote:

> Congrats Jason!
>
> Eno
> > On 7 Sep 2016, at 10:00, Rajini Sivaram 
> wrote:
> >
> > Congrats, Jason!
> >
> > On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA 
> wrote:
> >
> >> Congrats, Jason. Well done and great to see this project inviting new
> >> committers.
> >>
> >> -Flavio
> >>
> >> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
> >>
> >>> Congrats, Jason!
> >>>
> >>> On Tuesday, September 6, 2016, Jason Gustafson 
> >> wrote:
> >>>
>  Thanks all!
> 
>  On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin   > wrote:
> 
> > Congrats, Jason!
> >
> > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
>   >>
> > wrote:
> >
> >> congrats jason!
> >>
> >> On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
> >> r...@confluent.io
>  >
> >> wrote:
> >>
> >>> Congratulations Jason!
> >>>
> >>> On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> >>> vahidhashem...@us.ibm.com 
>  wrote:
> >>>
>  Congratulations Jason on this very well deserved recognition.
> 
>  --Vahid
> 
> 
> 
>  From:   Neha Narkhede >
>  To: "dev@kafka.apache.org " <
>  dev@kafka.apache.org >,
>  "us...@kafka.apache.org " <
> >> us...@kafka.apache.org
>  >
>  Cc: "priv...@kafka.apache.org " <
>  priv...@kafka.apache.org >
>  Date:   09/06/2016 03:26 PM
>  Subject:[ANNOUNCE] New committer: Jason Gustafson
> 
> 
> 
>  The PMC for Apache Kafka has invited Jason Gustafson to join
> >> as a
>  committer and
>  we are pleased to announce that he has accepted!
> 
>  Jason has contributed numerous patches to a wide range of
> >> areas,
> >> notably
>  within the new consumer and the Kafka Connect layers. He has
> > displayed
>  great taste and judgement which has been apparent through his
> >> involvement
>  across the board from mailing lists, JIRA, code reviews to
> > contributing
>  features, bug fixes and code and documentation improvements.
> 
>  Thank you for your contribution and welcome to Apache Kafka,
> >>> Jason!
>  --
>  Thanks,
>  Neha
> 
> 
> 
> 
> 
> >>>
> >>
> >
> 
> >>>
> >>>
> >>> --
> >>> Ashish h
> >>>
> >>
> >
> >
> >
> > --
> > Regards,
> >
> > Rajini
>
>


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: [DISCUSS] KIP-63: Unify store and downstream caching in streams

2016-09-07 Thread Damian Guy
Gouzhang,

Some points about what you have mentioned (a rough sketch incorporating them
follows below):
1. You can't just call context.forward() from the flush listener. You have to
set some other contextual information (currently ProcessorRecordContext)
prior to doing this, otherwise the nodes you are forwarding to are
undetermined, i.e., the listener can be invoked at any point during the
topology or on commit.
2. It is a bytes cache, so the Processors would need to have the Serdes in
order to use this pattern.
3. The namespace of the cache can't just be processorName or even
processorName-stateStoreName; it will also need to include something like the
taskId.
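
To make these concrete, here is a rough pseudocode sketch of the pattern (cache
names like getCache() and addDirtyEntryFlushListener() come from the in-flight
PR and are not a released API; setRecordContext() and the entry type are
placeholders):

    init(ProcessorContext context) {
      // point 3: the namespace must include the taskId, not just the processor name
      String ns = context.taskId() + "-" + processorName + "-" + storeName;
      context.getCache().addDirtyEntryFlushListener(ns, entry -> {
        // point 1: restore the ProcessorRecordContext captured for this entry
        // before forwarding, otherwise the target nodes are undetermined
        context.setRecordContext(entry.recordContext());
        // point 2: the cache holds bytes, so the processor needs its serdes here
        V value = valueSerde.deserializer().deserialize(topic, entry.newValue());
        state.put(entry.key(), value);
        context.forward(entry.key(), value);
      });
    }

    process(K key, V value) {
      context.getCache().put(ns, keySerde.serializer().serialize(topic, key),
                                 valueSerde.serializer().serialize(topic, value));
    }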

Thanks,
Damian


On Wed, 7 Sep 2016 at 00:39 Guozhang Wang  wrote:

> Hi Matthias,
>
> I agree with your concerns of coupling with record forwarding with record
> storing in the state store, and my understanding is that this can (and
> should) be resolved with the current interface. Here are my thoughts:
>
> 1. The global cache, MemoryLRUCacheBytes, although is currently defined as
> internal class, since it is exposed in ProcessorContext anyways, should
> really be a public class anyways that users can access to (I have some
> other comments about the names, but will rather leave them in the PR).
>
> 2. In the processor API, the users can choose to use the cache to store the
> intermediate results in the cache, and register the flush listener via
> addDirtyEntryFlushListener (again some naming suggestions in PR but use it
> for discussion for now). And as a result, if the old processor code looks
> like this:
>
> 
>
> process(...) {
>
>   state.put(...);
>   context.forward(...);
> }
> 
>
> Users can now leverage the cache on some of the processors by modifying the
> code as:
>
> 
>
> init(...) {
>
> >   context.getCache().addDirtyEntryFlushListener(processorName,
> {state.put(...); context.forward(...)})
> }
>
> process(...) {
>
>   context.getCache().put(processorName, ..);
> }
>
> 
>
> 3. Note whether or not to apply caching is optional for each processor node
> now, and is decoupled with its logic of forwarding / storing in persistent
> state stores.
>
> One may argue that now if users want to make use of the cache, they will need
> to make code changes; but I think this is a reasonable requirement for users
> actually, since 1) currently we do one update-per-incoming-record, and
> without code changes this behavior will be preserved, and 2) for the DSL
> implementation, we can just follow the above pattern to abstract it from
> users, so they can pick up these changes automatically.
>
>
> Guozhang
>
>
> On Tue, Sep 6, 2016 at 7:41 AM, Eno Thereska 
> wrote:
>
> > A small update to the KIP: the deduping of records using the cache does
> > not affect the .to operator since we'd have already deduped the KTable
> > before the operator. Adjusting KIP.
> >
> > Thanks
> > Eno
> >
> > > On 5 Sep 2016, at 12:43, Eno Thereska  wrote:
> > >
> > > Hi Matthias,
> > >
> > > The motivation for KIP-63 was primarily aggregates and reducing the
> load
> > on "both" state stores and downstream. I think there is agreement that
> for
> > the DSL the motivation and design make sense.
> > >
> > > For the Processor API: caching is a major component in any system, and
> > it is difficult to continue to operate as before, without fully
> > understanding the consequences. Hence, I think this is mostly a case of
> > educating users to understand the boundaries of the solution.
> > >
> > > Introducing a cache, either for the state store only, or for downstream
> > forwarding only, or for both, leads to moving from a model where we
> process
> > each request end-to-end (today) to one where a request is temporarily
> > buffered in a cache. In all the cases, this opens up the question of what
> > to do next once the request then leaves the cache, and how to express
> that
> > (future) behaviour. E.g., even when the cache is just for downstream
> > forwarding (i.e., decoupled from any state store), the processor API user
> > might be surprised that context.forward() does not immediately do
> anything.
> > >
> > > I agree that for ultra-flexibility, a processor API user should be able
> > to choose whether the dedup cache is put 1) on top of a store only, 2) on
> > forward only, 3) on both store and forward, but given the motivation for
> > KIP-63 (aggregates), I believe a decoupled store-forward dedup cache is a
> > reasonable choice that provides good default behaviour, without prodding
> > the user to specify the combinations.
> > >
> > > We need to educate users that if a cache is used in the Processor API,
> > the forwarding will happen in the future.
> > >
> > > -Eno
> > >
> > >
> > >
> > >> On 4 Sep 2016, at 19:11, Matthias J. Sax 
> wrote:
> > >>
> > >>> Processor code should always work; independently if caching is
> enabled
> > >> or not.
> > >>
> > >> If we want to get 

[GitHub] kafka pull request #1832: MINOR: Document that Connect topics should use com...

2016-09-07 Thread mfenniak
GitHub user mfenniak opened a pull request:

https://github.com/apache/kafka/pull/1832

MINOR: Document that Connect topics should use compaction

Update documentation for Kafka Connect distributed’s 
config.storage.topic, offset.storage.topic, and status.storage.topic 
configuration values to indicate that all three should refer to compacted 
topics.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mfenniak/kafka kafka-connect-topic-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1832.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1832






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-07 Thread Eno Thereska
Congrats Jason!

Eno
> On 7 Sep 2016, at 10:00, Rajini Sivaram  wrote:
> 
> Congrats, Jason!
> 
> On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA  wrote:
> 
>> Congrats, Jason. Well done and great to see this project inviting new
>> committers.
>> 
>> -Flavio
>> 
>> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
>> 
>>> Congrats, Jason!
>>> 
>>> On Tuesday, September 6, 2016, Jason Gustafson 
>> wrote:
>>> 
 Thanks all!
 
 On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin > wrote:
 
> Congrats, Jason!
> 
> On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
 > 
> wrote:
> 
>> congrats jason!
>> 
>> On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
>> r...@confluent.io
 >
>> wrote:
>> 
>>> Congratulations Jason!
>>> 
>>> On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
>>> vahidhashem...@us.ibm.com 
 wrote:
>>> 
 Congratulations Jason on this very well deserved recognition.
 
 --Vahid
 
 
 
 From:   Neha Narkhede >
 To: "dev@kafka.apache.org " <
 dev@kafka.apache.org >,
 "us...@kafka.apache.org " <
>> us...@kafka.apache.org
 >
 Cc: "priv...@kafka.apache.org " <
 priv...@kafka.apache.org >
 Date:   09/06/2016 03:26 PM
 Subject:[ANNOUNCE] New committer: Jason Gustafson
 
 
 
 The PMC for Apache Kafka has invited Jason Gustafson to join
>> as a
 committer and
 we are pleased to announce that he has accepted!
 
 Jason has contributed numerous patches to a wide range of
>> areas,
>> notably
 within the new consumer and the Kafka Connect layers. He has
> displayed
 great taste and judgement which has been apparent through his
>> involvement
 across the board from mailing lists, JIRA, code reviews to
> contributing
 features, bug fixes and code and documentation improvements.
 
 Thank you for your contribution and welcome to Apache Kafka,
>>> Jason!
 --
 Thanks,
 Neha
 
 
 
 
 
>>> 
>> 
> 
 
>>> 
>>> 
>>> --
>>> Ashish h
>>> 
>> 
> 
> 
> 
> -- 
> Regards,
> 
> Rajini



[jira] [Commented] (KAFKA-4137) Refactor multi-threaded consumer for safer network layer access

2016-09-07 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470445#comment-15470445
 ] 

Ismael Juma commented on KAFKA-4137:


KAFKA-1935 may be a duplicate of this one (with less detail, so maybe we can 
close that one).

> Refactor multi-threaded consumer for safer network layer access
> ---
>
> Key: KAFKA-4137
> URL: https://issues.apache.org/jira/browse/KAFKA-4137
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>
> In KIP-62, we added a background thread to send heartbeats while the user is 
> processing fetched data from a call to poll(). In the implementation, we 
> elected to share the instance of {{NetworkClient}} between the foreground 
> thread and this background thread. After working with the system test failure 
> in KAFKA-3807, we've realized that this probably wasn't a good decision. It 
> is very tricky to get the synchronization correct with respect to response 
> callbacks and reasoning about the multi-threaded behavior is very difficult. 
> For example, a common pattern is to send a request and then call 
> {{NetworkClient.poll()}} to await its return. With another thread also 
> potentially calling poll(), the response can actually return before the 
> sending thread itself invokes poll(). This can cause unnecessary (and 
> potentially unbounded) blocking, and avoiding it is quite complex. 
> A different approach we've discussed would be to use two instances of 
> NetworkClient, one dedicated to fetching, and one dedicated to coordinator 
> communication. The fetching NetworkClient can continue to work exclusively in 
> the foreground thread and we can confine the coordinator NetworkClient to the 
> background thread. This provides much better isolation and avoids all of the 
> race conditions with calling poll() from two threads. The main complication 
> is in how to expose blocking APIs to interact with the background thread. For 
> example, in the current consumer API, rebalances are completed in the 
> foreground thread, so we would need to coordinate with the background thread 
> to preserve this (e.g. by using a Future abstraction).
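
For illustration only (simplified pseudocode, not the actual internals; the names 
are placeholders for the shared client wrapper), the race described above is 
roughly:

{code}
// Foreground thread: the send-then-await pattern described above.
RequestFuture<ClientResponse> future = networkClient.send(coordinator, request);
networkClient.poll(future); // expects to drive the I/O for its own request

// Background (heartbeat) thread, sharing the same client:
networkClient.poll(0); // may read and complete the foreground response first,
                       // leaving the foreground poll(future) blocked with no work to do
{code}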



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-451) follower replica may need to backoff the fetching if leader is not ready yet

2016-09-07 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470408#comment-15470408
 ] 

Ismael Juma commented on KAFKA-451:
---

We have the following code in AbstractFetcherThread:

{code}
  warn(s"Error in fetch $fetchRequest", t)
  inLock(partitionMapLock) {
partitionsWithError ++= partitionMap.keys
// there is an error occurred while fetching partitions, sleep a 
while
partitionMapCond.await(fetchBackOffMs, TimeUnit.MILLISECONDS)
  }
{code}

Is that what was intended in this JIRA?

> follower replica may need to backoff the fetching if leader is not ready yet
> 
>
> Key: KAFKA-451
> URL: https://issues.apache.org/jira/browse/KAFKA-451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Prashanth Menon
>  Labels: optimization
> Fix For: 0.10.1.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently, when a follower starts fetching from a new leader, it just keeps 
> sending fetch requests even if the requests fail because the leader is not 
> ready yet. We probably should let the follower backoff a bit in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-78 Cluster Id (second attempt)

2016-09-07 Thread Rajini Sivaram
+1 (non-binding)

On Wed, Sep 7, 2016 at 4:09 AM, Sriram Subramanian  wrote:

> +1 binding
>
> > On Sep 6, 2016, at 7:46 PM, Ismael Juma  wrote:
> >
> > Hi all,
> >
> > I would like to (re)initiate[1] the voting process for KIP-78 Cluster Id:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id
> >
> > As explained in the KIP and discussion thread, we see this as a good
> first
> > step that can serve as a foundation for future improvements.
> >
> > Thanks,
> > Ismael
> >
> > [1] Even though I created a new vote thread, Gmail placed the messages in
> > the discuss thread, making it not as visible as required. It's important
> to
> > mention that two +1s were cast by Gwen and Sriram:
> >
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201609.
> mbox/%3CCAD5tkZbLv7fvH4q%2BKe%2B%3DJMgGq%2BZT2t34e0WRUsCT1ErhtKOg1w%
> 40mail.gmail.com%3E
>



-- 
Regards,

Rajini


[jira] [Updated] (KAFKA-4034) Consumer need not lookup coordinator when using manual assignment

2016-09-07 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4034:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Consumer need not lookup coordinator when using manual assignment
> -
>
> Key: KAFKA-4034
> URL: https://issues.apache.org/jira/browse/KAFKA-4034
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> Currently, we look up the coordinator even if the user is using manual 
> assignment and not storing offsets in Kafka. This seems unnecessary and can 
> lead to surprising group authorization failures if the user does not have 
> Describe access to the group. We should be able to change the code to only 
> look up the coordinator when we know we are going to use it.
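
For context, the usage pattern in question looks roughly like this (a minimal sketch; 
the topic name and starting offset are placeholders): a consumer that uses assign() 
and keeps offsets outside Kafka never needs the group coordinator, so the lookup, and 
the Describe(group) permission it implies, should be avoidable.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignmentExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false"); // offsets are stored externally, not in Kafka
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp)); // manual assignment: no subscribe(), no rebalance
            consumer.seek(tp, 42L);                         // resume from an externally stored offset
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }
}
{code}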



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4139) Kafka consumer stuck in ensureCoordinatorReady after broker failure

2016-09-07 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4139:
--
Status: Patch Available  (was: Open)

> Kafka consumer stuck in ensureCoordinatorReady after broker failure
> ---
>
> Key: KAFKA-4139
> URL: https://issues.apache.org/jira/browse/KAFKA-4139
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> In one of our tests with a single broker, the consumer is stuck waiting to 
> find the coordinator after the broker is restarted. {{findCoordinatorFuture}} 
> is never reset if {{sendGroupCoordinatorRequest()}} returns 
> {{RequestFuture.noBrokersAvailable()}}.
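
A self-contained illustration of the caching problem (using CompletableFuture in place 
of the consumer's internal RequestFuture; this is a simplified sketch, not the actual 
patch):

{code}
import java.util.concurrent.CompletableFuture;

class CoordinatorLookup {
    private CompletableFuture<String> findCoordinatorFuture;

    /** Stands in for sendGroupCoordinatorRequest(); fails while no brokers are reachable. */
    private CompletableFuture<String> sendLookup(boolean brokersAvailable) {
        CompletableFuture<String> f = new CompletableFuture<>();
        if (brokersAvailable)
            f.complete("broker-0");
        else
            f.completeExceptionally(new RuntimeException("no brokers available"));
        return f;
    }

    CompletableFuture<String> lookupCoordinator(boolean brokersAvailable) {
        if (findCoordinatorFuture == null) {
            CompletableFuture<String> lookup = sendLookup(brokersAvailable);
            findCoordinatorFuture = lookup;
            // The essence of the fix: clear the cached future once it completes, so a
            // failed lookup (e.g. no brokers available) is retried on the next call
            // instead of the consumer spinning forever on the same failed future.
            lookup.whenComplete((result, error) -> findCoordinatorFuture = null);
            return lookup;
        }
        return findCoordinatorFuture;
    }
}
{code}

Without such a reset, a single "no brokers available" failure leaves a permanently 
failed future cached, and the consumer never issues another coordinator lookup.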



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4139) Kafka consumer stuck in ensureCoordinatorReady after broker failure

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470168#comment-15470168
 ] 

ASF GitHub Bot commented on KAFKA-4139:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1831

KAFKA-4139: Reset findCoordinatorFuture when brokers are unavailable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4139

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1831.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1831


commit 1f02ef1159f9876f48e075723ce17c738715e827
Author: Rajini Sivaram 
Date:   2016-09-07T09:26:39Z

KAFKA-4139: Reset findCoordinatorFuture when brokers are unavailable




> Kafka consumer stuck in ensureCoordinatorReady after broker failure
> ---
>
> Key: KAFKA-4139
> URL: https://issues.apache.org/jira/browse/KAFKA-4139
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> In one of our tests with a single broker, the consumer is stuck waiting to 
> find the coordinator after the broker is restarted. {{findCoordinatorFuture}} 
> is never reset if {{sendGroupCoordinatorRequest()}} returns 
> {{RequestFuture.noBrokersAvailable()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1831: KAFKA-4139: Reset findCoordinatorFuture when broke...

2016-09-07 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1831

KAFKA-4139: Reset findCoordinatorFuture when brokers are unavailable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4139

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1831.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1831


commit 1f02ef1159f9876f48e075723ce17c738715e827
Author: Rajini Sivaram 
Date:   2016-09-07T09:26:39Z

KAFKA-4139: Reset findCoordinatorFuture when brokers are unavailable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1521

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4058: Failure in

--
[...truncated 6092 lines...]
kafka.log.FileMessageSetTest > testFileSize STARTED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits STARTED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testWriteToChannelThatConsumesPartially STARTED

kafka.log.FileMessageSetTest > testWriteToChannelThatConsumesPartially PASSED

kafka.log.FileMessageSetTest > testTruncateNotCalledIfSizeIsSameAsTargetSize 
STARTED

kafka.log.FileMessageSetTest > testTruncateNotCalledIfSizeIsSameAsTargetSize 
PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue STARTED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent STARTED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize STARTED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage STARTED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition STARTED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead STARTED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo STARTED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse STARTED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown STARTED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion STARTED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch STARTED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes STARTED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED


[DISCUSS] KIP-68 Add a consumed log retention before log retention

2016-09-07 Thread Pengwei (L)
Hi All,
   I have created a KIP to enhance log retention; the details are as follows:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-68+Add+a+consumed+log+retention+before+log+retention
   I am now starting a discussion thread for this KIP, and I look forward to your feedback.

Thanks,
David



[jira] [Created] (KAFKA-4139) Kafka consumer stuck in ensureCoordinatorReady after broker failure

2016-09-07 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4139:
-

 Summary: Kafka consumer stuck in ensureCoordinatorReady after 
broker failure
 Key: KAFKA-4139
 URL: https://issues.apache.org/jira/browse/KAFKA-4139
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.0.1
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


In one of our tests with a single broker, the consumer is stuck waiting to find 
the coordinator after the broker is restarted. {{findCoordinatorFuture}} is never 
reset if {{sendGroupCoordinatorRequest()}} returns 
{{RequestFuture.noBrokersAvailable()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4119) Get topic offset with Kafka SSL

2016-09-07 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470107#comment-15470107
 ] 

Rajini Sivaram commented on KAFKA-4119:
---

Only the new consumer supports security. So if you have consumer groups with 
offsets using the old consumer, you should migrate the offsets from Zookeeper 
to Kafka as described in 
http://kafka.apache.org/documentation.html#offsetmigration. For the new 
consumer offsets stored in Kafka, you can list offsets using 
kafka-consumer-groups.sh with the --new-consumer option.

> Get topic offset with Kafka SSL
> ---
>
> Key: KAFKA-4119
> URL: https://issues.apache.org/jira/browse/KAFKA-4119
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.1
> Environment: Linux
>Reporter: zhang shuai
>  Labels: kafka-acl, offset, ssl
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I have a Kafka cluster with SSL. When I turn off ACLs, I can get offsets from 
> kafka-consumer-offset-checker.sh:
> [root@node128 kafka_2.11-0.10.0.1]# bin/kafka-consumer-offset-checker.sh 
> --zookeeper localhost:2181/kafka-test --group consumer-group-1 --topic 
> testtopic
> [2016-09-03 14:03:51,722] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Group             Topic      Pid  Offset  logSize  Lag  Owner
> consumer-group-1  testtopic  0    99981   99981    0    none
> The configuration is like this:
> ssl.keystore.location=/opt/ssl_key/server.keystore.jks
> ssl.keystore.password=xdata123
> ssl.key.password=xdata123
> ssl.truststore.location=/opt/ssl_key/server.truststore.jks
> ssl.truststore.password=xdata123
> ssl.client.auth=required
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
> security.inter.broker.protocol=SSL
> But when I want to use ACLs and add the configuration
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> super.users=User:CN=node128,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown;User:CN=node129,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> Then I use kafka-acls.sh to add principals for topic testtopic and group 
> consumer-group-1, but I cannot get the offsets from 
> kafka-consumer-offset-checker.sh. Is there something different with ACLs? How 
> can I get topics' offsets in Kafka when ACLs are enabled?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-07 Thread Rajini Sivaram
Congrats, Jason!

On Wed, Sep 7, 2016 at 8:29 AM, Flavio P JUNQUEIRA  wrote:

> Congrats, Jason. Well done and great to see this project inviting new
> committers.
>
> -Flavio
>
> On 7 Sep 2016 04:58, "Ashish Singh"  wrote:
>
> > Congrats, Jason!
> >
> > On Tuesday, September 6, 2016, Jason Gustafson 
> wrote:
> >
> > > Thanks all!
> > >
> > > On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin  > > > wrote:
> > >
> > > > Congrats, Jason!
> > > >
> > > > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
> > >  > > > >
> > > > wrote:
> > > >
> > > > > congrats jason!
> > > > >
> > > > > On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian <
> r...@confluent.io
> > > >
> > > > > wrote:
> > > > >
> > > > > > Congratulations Jason!
> > > > > >
> > > > > > On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> > > > > > vahidhashem...@us.ibm.com 
> > > > > > > wrote:
> > > > > >
> > > > > > > Congratulations Jason on this very well deserved recognition.
> > > > > > >
> > > > > > > --Vahid
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > From:   Neha Narkhede >
> > > > > > > To: "dev@kafka.apache.org " <
> > > dev@kafka.apache.org >,
> > > > > > > "us...@kafka.apache.org " <
> us...@kafka.apache.org
> > > >
> > > > > > > Cc: "priv...@kafka.apache.org " <
> > > priv...@kafka.apache.org >
> > > > > > > Date:   09/06/2016 03:26 PM
> > > > > > > Subject:[ANNOUNCE] New committer: Jason Gustafson
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > The PMC for Apache Kafka has invited Jason Gustafson to join
> as a
> > > > > > > committer and
> > > > > > > we are pleased to announce that he has accepted!
> > > > > > >
> > > > > > > Jason has contributed numerous patches to a wide range of
> areas,
> > > > > notably
> > > > > > > within the new consumer and the Kafka Connect layers. He has
> > > > displayed
> > > > > > > great taste and judgement which has been apparent through his
> > > > > involvement
> > > > > > > across the board from mailing lists, JIRA, code reviews to
> > > > contributing
> > > > > > > features, bug fixes and code and documentation improvements.
> > > > > > >
> > > > > > > Thank you for your contribution and welcome to Apache Kafka,
> > Jason!
> > > > > > > --
> > > > > > > Thanks,
> > > > > > > Neha
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > Ashish h
> >
>



-- 
Regards,

Rajini


[jira] [Commented] (KAFKA-3703) Selector.close() doesn't complete outgoing writes

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470071#comment-15470071
 ] 

ASF GitHub Bot commented on KAFKA-3703:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/1817


> Selector.close() doesn't complete outgoing writes
> -
>
> Key: KAFKA-3703
> URL: https://issues.apache.org/jira/browse/KAFKA-3703
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Outgoing writes may be discarded when a connection is closed. For instance, 
> when running a producer with acks=0, an application that writes data and then 
> closes the producer would expect all writes to complete if there are no 
> errors. But close() simply closes the channel and socket, which could result 
> in outgoing data being discarded.
> This is also an issue in consumers which use commitAsync to commit offsets. 
> Closing the consumer may result in commits being discarded because writes 
> have not completed before close().
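
To make the consumer case concrete, here is a minimal sketch (placeholder topic and 
group names) of the pattern that can lose the final offset commit, together with the 
usual defensive workaround of committing synchronously before close():

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitOnCloseExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        try {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            // ... process records ...
            consumer.commitAsync(); // this commit may still be an un-flushed write when close() runs
        } finally {
            // Defensive workaround until close() flushes outgoing writes: a synchronous
            // commit only returns once the offset write has actually completed.
            consumer.commitSync();
            consumer.close();
        }
    }
}
{code}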



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Time-based releases for Apache Kafka

2016-09-07 Thread Ismael Juma
Thanks for volunteering Jason. Sounds good to me,

Ismael

On Wed, Sep 7, 2016 at 4:22 AM, Jason Gustafson  wrote:

> Hey All,
>
> It sounds like the general consensus is in favor of time-based releases. We
> can continue the discussion about LTS, but I wanted to go ahead and get
> things moving forward by volunteering to manage the next release, which is
> currently slated for October. If that sounds OK, I'll draft a release plan
> and send it out to the community for feedback and a vote.
>
> Thanks,
> Jason
>
> On Thu, Aug 25, 2016 at 2:03 PM, Ofir Manor  wrote:
>
> > I happily agree that Kafka is a solid and the community is great :)
> > But I think there is a gap in perception here.
> > For me, LTS means that someone is actively taking care of a release -
> > actively backporting critical fixes (security, stability, data loss,
> > corruption, hangs etc) from trunk to that LTS version periodically for an
> > extended period of time, for example 18-36 months... So people can really
> > rely on the same Kafka version for a long time.
> > Is someone doing it today for 0.9.0? When is 0.9.0.2 expected? When is
> > 0.8.2.3 expected? Will they cover all known critical issues for whoever
> > relies on them in production?
> > In other words, what is the scope of support that the community want to
> > commit for older versions? (upgrade compatibility? investigating bug
> > reports? proactively backporting fixes?)
> > BTW, another legit option is that the Apache Kafka project won't commit
> to
> > LTS releases. It could let commercial vendors compete on supporting very
> > old versions. I find that actually quite reasonable as well.
> >
> > Ofir Manor
> >
> > Co-Founder & CTO | Equalum
> >
> > Mobile: +972-54-7801286 | Email: ofir.ma...@equalum.io
> >
> > On Thu, Aug 25, 2016 at 8:19 PM, Andrew Schofield <
> > andrew_schofield_j...@outlook.com> wrote:
> >
> > > I agree that the Kafka community has managed to maintain a very high
> > > quality level, so I'm not concerned
> > > about the quality of non-LTS releases. If the principle is that every
> > > release is supported for 2 years, that
> > > would be good. I suppose that if the burden of having that many
> > in-support
> > > releases proves too heavy,
> > > as you say we could reconsider.
> > >
> > > Andrew Schofield
> > >
> > > 
> > > > From: g...@confluent.io
> > > > Date: Thu, 25 Aug 2016 09:57:30 -0700
> > > > Subject: Re: [DISCUSS] Time-based releases for Apache Kafka
> > > > To: dev@kafka.apache.org
> > > >
> > > > I prefer Ismael's suggestion for supporting 2-years (6 releases)
> > > > rather than have designated LTS releases.
> > > >
> > > > The LTS model seems to work well when some releases are high quality
> > > > (LTS) and the rest are a bit more questionable. It is great for
> > > > companies like Redhat, where they have to invest less to support few
> > > > releases and let the community deal with everything else.
> > > >
> > > > Until now the Kafka community has managed to maintain very high
> > > > quality level. Not just for releases, our trunk is often of better
> > > > quality than other project's releases - we don't think of stability
> as
> > > > something you tuck into a release (and just some releases) but rather
> > > > as an on-going concern. There are costs to doing things that way, but
> > > > in general, I think it has served us well - allowing even
> conservative
> > > > companies to run on the latest released version.
> > > >
> > > > I hope we can agree to at least try maintaining last 6 releases as
> LTS
> > > > (i.e. every single release is supported for 2 years) rather than
> > > > designate some releases as better than others. Of course, if this
> > > > totally fails, we can reconsider.
> > > >
> > > > Gwen
> > > >
> > > > On Thu, Aug 25, 2016 at 9:51 AM, Andrew Schofield
> > > >  wrote:
> > > >> The proposal sounds pretty good, but the main thing currently
> missing
> > > is a proper long-term support release.
> > > >>
> > > >> Having 3 releases a year sounds OK, but if they're all equivalent
> and
> > > bugfix releases are produced for the most
> > > >> recent 2 or 3 releases, anyone wanting to run on an "in support"
> > > release of Kafka has to upgrade every 8-12 months.
> > > >> If you don't actually want anything specific from the newer
> releases,
> > > it's just unnecessary churn.
> > > >>
> > > >> Wouldn't it be better to designate one release every 12-18 months
> as a
> > > long-term support release with bugfix releases
> > > >> produced for those for a longer period of say 24 months. That halves
> > > the upgrade work for people just wanting to keep
> > > >> "in support". Now that adoption is increasing, there are plenty of
> > > users that just want a dependable messaging system
> > > >> without having to be deeply knowledgeable about its innards.
> > > >>
> > > >> LTS works nicely for plenty of 

[GitHub] kafka pull request #1817: KAFKA-3703: Flush outgoing writes before closing c...

2016-09-07 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/1817


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4138) Producer data write network traffic to kafka brokers is increased for 50%

2016-09-07 Thread Zane Zhang (JIRA)
Zane Zhang created KAFKA-4138:
-

 Summary: Producer data write network traffic to kafka brokers is 
increased for 50%
 Key: KAFKA-4138
 URL: https://issues.apache.org/jira/browse/KAFKA-4138
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 0.9.0.1
 Environment: Redhat Enterprise 6.5
Reporter: Zane Zhang


After we upgraded Kafka from 0.8.2 to 0.9.0.1, we observed that the network 
output traffic from producers to Kafka increased by 50% with the same message 
payload and the same data pressure on the producers.
We are using snappy compression and StringSerializer; the other producer 
parameters are listed below:
request.required.acks=1
retries=3
retry.backoff.ms=300
batch.size=2*1024*1024
max.request.size=20971520
request.timeout.ms=2000



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1830: [WIP] KIP-78 : Cluster Identifier (KAFKA-4093)

2016-09-07 Thread arrawatia
GitHub user arrawatia opened a pull request:

https://github.com/apache/kafka/pull/1830

[WIP] KIP-78 : Cluster Identifier (KAFKA-4093)

This is a first draft that includes the following changes:
1. Changes to the broker code. 
  - generation of cluster id and storing it in Zookeeper
  - update protocol to add cluster id to metadata request and response
  - add ClusterResourceListener interface and ClusterMetadataListeners 
utility class
  - update broker code to send ClusterResource events to the metric 
reporters
2. Changes to client code.
  - update Cluster and Metadata code
  - update clients for sending ClusterResource events to interceptors, 
de(serializers) and metric reporters
  - update tests for interceptors

Changes that still need to be made:
1. Update / create integration tests for Serializers and MetricReporter for 
clients. Integration tests for protocol changes and MetricReporters for broker.
2. Upgrade system tests.

@ijuma Can you please take a look ?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arrawatia/kafka kip-78

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1830.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1830


commit 880f903e161a29d6daec5ad0e85e190b14023310
Author: arrawatia 
Date:   2016-08-29T23:13:26Z

First cut for cluster id.

1. Add v2 for Metadata Request and Response in Protocol to include
cluster_id
2. Updated the MetadataRequest and MetadataResponse to add cluster_id
and Update tests
3. Add cluster_id to KafkaServer and KafkaApis
4. cluster_id = dummy for now.

commit e76de82ab12b6d4b2f77905536a79c1178b3b378
Author: arrawatia 
Date:   2016-08-30T01:24:24Z

Broker Id generation.

commit 4bd44b94112c9cf09fe1819618adc81072f81ad6
Author: arrawatia 
Date:   2016-08-30T05:35:02Z

Change ZK representation to JSON from string.

commit 6efc32f3ec1e91c88428847a91fd5fc7da312e01
Author: arrawatia 
Date:   2016-08-30T14:52:52Z

Add tests and change from using internal sun classes to JAXB for base64 
en(de)coding

commit 2a417a8da218a5113ef4261ba2e429007a6818bd
Author: arrawatia 
Date:   2016-08-31T17:53:07Z

Make cluster_id nullable and change NO_CLUSTER_ID to null.

commit e91da1e8b33ff7fb716a5151c194d2192768ec23
Author: arrawatia 
Date:   2016-09-01T16:58:11Z

First pass for client changes.

commit 93beebac65bce7a2f9ce0ab07b628db2b2131e9e
Author: arrawatia 
Date:   2016-09-02T08:07:39Z

Add cluster.id to meta.properties

commit 7823bc14f26d685adbdac0fdb6c6ae86615addd2
Author: arrawatia 
Date:   2016-09-02T08:11:56Z

Change to ClusterListerners utility class.

commit 170ce9272bda1b5e6d0a90395e6a9cfdb6b41c58
Author: arrawatia 
Date:   2016-09-02T17:54:37Z

Merge remote-tracking branch 'confluentinc/trunk' into kip-78

commit eca2ab1381cbf5b9597031a5f9879298e67a85a1
Author: arrawatia 
Date:   2016-09-02T18:38:17Z

Revert change for adding cluster.id to meta.properties.

Wait for it to be included in the KIP.

commit 0753422d1ab0f99f25047ecbcc8ae6c126015de7
Author: arrawatia 
Date:   2016-09-02T18:38:44Z

Make check style tests pass.

commit fe0e5571d60a997705f694db75437afd4c069a65
Author: arrawatia 
Date:   2016-09-02T18:50:20Z

Rename ClusterListener -> ClusterResourceListener and ClusterResourceMeta 
-> ClusterResource.

commit 83a4b01d1a4373603ae5a46195029602c3384cef
Author: arrawatia 
Date:   2016-09-03T06:38:31Z

Add ClusterListeners on KafkaServer to send events to MetricReporters(both 
Codahale and KafkaMetricrReporter).

commit 6782cd5ebf71de3c1c09542494f9e27064da3c0c
Author: arrawatia 
Date:   2016-09-07T00:57:25Z

Update consumer tests to test for cluster id in the correct order.

commit 29e9c83a3a0c6b6ff0588a562d5f9f9712ffd35c
Author: arrawatia 
Date:   2016-09-07T00:59:19Z

Make check style tests pass.

commit 71df287d48a4a26422f7fb45227abd4267bc362d
Author: arrawatia 
Date:   2016-09-07T00:59:29Z

Merge remote-tracking branch 'confluentinc/trunk' into kip-78

commit 1ee9666bfcce9c4a6a1ff8bff2c46a88c3e20dc5
Author: arrawatia 
Date:   2016-09-07T01:30:17Z

Merge branch 'trunk' of github.com:apache/kafka into kip-78




---
If your project is set up for it, you can reply to this email and have your
reply appear 

[jira] [Commented] (KAFKA-4111) broker compress data of certain size instead on a produce request

2016-09-07 Thread julien1987 (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469890#comment-15469890
 ] 

julien1987 commented on KAFKA-4111:
---

Thanks. In our scenario, the message QPS of each producer is very small; if we 
use snappy, the actual size becomes bigger. I do not know whether the broker 
can compress messages when flushing logs.

> broker compress data of certain size instead on a produce request
> -
>
> Key: KAFKA-4111
> URL: https://issues.apache.org/jira/browse/KAFKA-4111
> Project: Kafka
>  Issue Type: Improvement
>  Components: compression
>Affects Versions: 0.10.0.1
>Reporter: julien1987
>
> When "compression.type" is set in the broker config, the broker compresses 
> data on every produce request. But in our scenario there are very many produce 
> requests, and each request does not carry much data, so the compression result 
> is not good. Could the broker instead compress data of a certain accumulated 
> size across many produce requests?
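
This does not answer the broker-side request, but for comparison, the producer already 
exposes knobs that control how much data is accumulated into each (compressed) produce 
request; a minimal sketch with placeholder values:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class BatchingProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "snappy"); // compress on the producer rather than the broker
        props.put("batch.size", "65536");        // max bytes per partition batch
        props.put("linger.ms", "50");            // wait up to 50 ms so small messages share one batch

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // ... send records; with low per-producer QPS, linger.ms is what lets many
        // small messages end up in the same compressed batch ...
        producer.close();
    }
}
{code}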



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-07 Thread Flavio P JUNQUEIRA
Congrats, Jason. Well done and great to see this project inviting new
committers.

-Flavio

On 7 Sep 2016 04:58, "Ashish Singh"  wrote:

> Congrats, Jason!
>
> On Tuesday, September 6, 2016, Jason Gustafson  wrote:
>
> > Thanks all!
> >
> > On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin  > > wrote:
> >
> > > Congrats, Jason!
> > >
> > > On Tue, Sep 6, 2016 at 5:09 PM, Onur Karaman
> >  > > >
> > > wrote:
> > >
> > > > congrats jason!
> > > >
> > > > On Tue, Sep 6, 2016 at 4:12 PM, Sriram Subramanian  > >
> > > > wrote:
> > > >
> > > > > Congratulations Jason!
> > > > >
> > > > > On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian <
> > > > > vahidhashem...@us.ibm.com 
> > > > > > wrote:
> > > > >
> > > > > > Congratulations Jason on this very well deserved recognition.
> > > > > >
> > > > > > --Vahid
> > > > > >
> > > > > >
> > > > > >
> > > > > > From:   Neha Narkhede >
> > > > > > To: "dev@kafka.apache.org " <
> > dev@kafka.apache.org >,
> > > > > > "us...@kafka.apache.org "  > >
> > > > > > Cc: "priv...@kafka.apache.org " <
> > priv...@kafka.apache.org >
> > > > > > Date:   09/06/2016 03:26 PM
> > > > > > Subject:[ANNOUNCE] New committer: Jason Gustafson
> > > > > >
> > > > > >
> > > > > >
> > > > > > The PMC for Apache Kafka has invited Jason Gustafson to join as a
> > > > > > committer and
> > > > > > we are pleased to announce that he has accepted!
> > > > > >
> > > > > > Jason has contributed numerous patches to a wide range of areas,
> > > > notably
> > > > > > within the new consumer and the Kafka Connect layers. He has
> > > displayed
> > > > > > great taste and judgement which has been apparent through his
> > > > involvement
> > > > > > across the board from mailing lists, JIRA, code reviews to
> > > contributing
> > > > > > features, bug fixes and code and documentation improvements.
> > > > > >
> > > > > > Thank you for your contribution and welcome to Apache Kafka,
> Jason!
> > > > > > --
> > > > > > Thanks,
> > > > > > Neha
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> Ashish h
>


Build failed in Jenkins: kafka-trunk-jdk8 #865

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4058: Failure in

--
[...truncated 12243 lines...]
org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest 

Build failed in Jenkins: kafka-trunk-jdk7 #1520

2016-09-07 Thread Apache Jenkins Server
See 

Changes:

[ismael] MINOR: Reduce the log level when the peer isn't authenticated but is

--
[...truncated 5644 lines...]
kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED


[jira] [Commented] (KAFKA-3776) Unify store and downstream caching in streams

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469728#comment-15469728
 ] 

ASF GitHub Bot commented on KAFKA-3776:
---

GitHub user enothereska reopened a pull request:

https://github.com/apache/kafka/pull/1752

KAFKA-3776: Unify store and downstream caching in streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-3776-poc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1752.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1752


commit 9e6d1e0b7b50b8c9bb1848c9e72913701a93cbb0
Author: Eno Thereska 
Date:   2016-08-16T09:45:00Z

Initial commit with stub cache

commit 2a3e46770b2aaddefd843f7cc6cc6727dd1cbc9e
Author: Eno Thereska 
Date:   2016-08-17T08:22:23Z

Adjustments so tests compile

commit 86bb6dcb6beac3d136e01f45cd727e6a6a691b5b
Author: Eno Thereska 
Date:   2016-08-17T10:21:11Z

Remove old cache from RocksDbStore and add global cache to RocksDb

commit 989bce66ddbfc150081cbf74edc8d3d2b6ae70d3
Author: Eno Thereska 
Date:   2016-08-17T11:27:08Z

Create unique cache key per store

commit 959885d4807e7f90030bca946dc5c1bdc4098754
Author: Eno Thereska 
Date:   2016-08-17T12:51:17Z

Enable caching for RocksDBWindowStore

commit 6f582ceb98a5dbd37732e148a0e2f064a90d5d13
Author: Eno Thereska 
Date:   2016-08-17T14:17:07Z

Unit test for windowed stores

commit 7810ab08b02b44e03812e9838062f811c64be0e7
Author: Eno Thereska 
Date:   2016-08-19T14:45:27Z

Plugged in new byte-based cache

commit 333e7992b5f038d1d46d1a17d31a41b700552c77
Author: Damian Guy 
Date:   2016-08-22T15:07:00Z

Add ProcessorRecordContext for tracking timestamp,offset etc of record. 
Update Procesor and Store APIs to use context. Dont forward values from 
Processors that are using a Store. RocksDBStores now have a flush listener and 
forward to processors whenever the cache is flushed. Left one failing test that 
highlights the need for the range queries to not forward values downstream

commit 124ffdf84a42e0fe14f3be6897baecdb8f8b7ef0
Author: Damian Guy 
Date:   2016-08-22T15:56:36Z

merge

commit 34985fae49084b4ee37f0a1cc0c3fdd024a89056
Author: Eno Thereska 
Date:   2016-08-23T06:58:49Z

Initial pass at range queries. Cache based on TreeMap

commit 30bde7301e7541cf2937f56b0303a43900db1fae
Author: Eno Thereska 
Date:   2016-08-23T07:15:19Z

Remove deleted entry from cache

commit 9dbafc07701c5acd6d2c2edbb1f3770670180673
Author: Damian Guy 
Date:   2016-08-23T09:37:17Z

revert api changes. Track RecordContext via ProcessorContext

commit 63aacc465510bfc8cdb0b675b2d4ebb041a5e99c
Author: Eno Thereska 
Date:   2016-08-23T13:18:19Z

Pass at the 'all' method using cache iterator

commit af6686fb298c2417c5eff9a46cbb8b898df894df
Author: Eno Thereska 
Date:   2016-08-23T13:18:24Z

Merge branch 'KAFKA-3776-poc' of https://github.com/enothereska/kafka into 
KAFKA-3776-poc

commit 235b8612a121283841f8b3b8795ff6ff5c425c07
Author: Damian Guy 
Date:   2016-08-24T09:24:02Z

disable caching for joins. expose enableCaching method on 
PersistentKeyValueFactory

commit a6986a66b08d653b4aa019557f39f0713949fe83
Author: Eno Thereska 
Date:   2016-08-24T10:17:30Z

Cleanup memory cache

commit 5b71d68d84007a03d59a3095467785da461d3c20
Author: Eno Thereska 
Date:   2016-08-24T10:25:44Z

Merged

commit 7658e02a4b203d09d607c0a510d0ae3421eb71a9
Author: Damian Guy 
Date:   2016-08-24T11:21:35Z

forward before changelog.

commit 91bab64db1bf3673169879c80a15af67f4670b70
Author: Damian Guy 
Date:   2016-08-24T11:21:40Z

Merge changes from Eno

commit fef6cae098b44cf2e715db00c84d97650114802f
Author: Eno Thereska 
Date:   2016-08-24T13:19:12Z

Flush order should be top-down

commit e1bcd35730a10c06a6e49d3882d787611b106f95
Author: Eno Thereska 
Date:   2016-08-24T15:50:56Z

Merged with trunk

commit 591d4802383d6e8dc0a3e0e97cd619dd99d3db0e
Author: Damian Guy 
Date:   2016-08-24T19:35:28Z

extract caching out of store

commit dbd383c9066de031f48e8539ad071834b294b7b0
Author: Damian Guy 
Date:   2016-08-25T06:30:27Z

Merge pull request #1 from enothereska/dg-3776-poc

extract 

[jira] [Commented] (KAFKA-3776) Unify store and downstream caching in streams

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469727#comment-15469727
 ] 

ASF GitHub Bot commented on KAFKA-3776:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1752


> Unify store and downstream caching in streams
> -
>
> Key: KAFKA-3776
> URL: https://issues.apache.org/jira/browse/KAFKA-3776
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> This is an umbrella story for capturing changes to processor caching in 
> Streams as first described in KIP-63. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1752: KAFKA-3776: Unify store and downstream caching in ...

2016-09-07 Thread enothereska
GitHub user enothereska reopened a pull request:

https://github.com/apache/kafka/pull/1752

KAFKA-3776: Unify store and downstream caching in streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-3776-poc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1752.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1752


commit 9e6d1e0b7b50b8c9bb1848c9e72913701a93cbb0
Author: Eno Thereska 
Date:   2016-08-16T09:45:00Z

Initial commit with stub cache

commit 2a3e46770b2aaddefd843f7cc6cc6727dd1cbc9e
Author: Eno Thereska 
Date:   2016-08-17T08:22:23Z

Adjustments so tests compile

commit 86bb6dcb6beac3d136e01f45cd727e6a6a691b5b
Author: Eno Thereska 
Date:   2016-08-17T10:21:11Z

Remove old cache from RocksDbStore and add global cache to RocksDb

commit 989bce66ddbfc150081cbf74edc8d3d2b6ae70d3
Author: Eno Thereska 
Date:   2016-08-17T11:27:08Z

Create unique cache key per store

commit 959885d4807e7f90030bca946dc5c1bdc4098754
Author: Eno Thereska 
Date:   2016-08-17T12:51:17Z

Enable caching for RocksDBWindowStore

commit 6f582ceb98a5dbd37732e148a0e2f064a90d5d13
Author: Eno Thereska 
Date:   2016-08-17T14:17:07Z

Unit test for windowed stores

commit 7810ab08b02b44e03812e9838062f811c64be0e7
Author: Eno Thereska 
Date:   2016-08-19T14:45:27Z

Plugged in new byte-based cache

commit 333e7992b5f038d1d46d1a17d31a41b700552c77
Author: Damian Guy 
Date:   2016-08-22T15:07:00Z

Add ProcessorRecordContext for tracking timestamp,offset etc of record. 
Update Procesor and Store APIs to use context. Dont forward values from 
Processors that are using a Store. RocksDBStores now have a flush listener and 
forward to processors whenever the cache is flushed. Left one failing test that 
highlights the need for the range queries to not forward values downstream

commit 124ffdf84a42e0fe14f3be6897baecdb8f8b7ef0
Author: Damian Guy 
Date:   2016-08-22T15:56:36Z

merge

commit 34985fae49084b4ee37f0a1cc0c3fdd024a89056
Author: Eno Thereska 
Date:   2016-08-23T06:58:49Z

Initial pass at range queries. Cache based on TreeMap

commit 30bde7301e7541cf2937f56b0303a43900db1fae
Author: Eno Thereska 
Date:   2016-08-23T07:15:19Z

Remove deleted entry from cache

commit 9dbafc07701c5acd6d2c2edbb1f3770670180673
Author: Damian Guy 
Date:   2016-08-23T09:37:17Z

revert api changes. Track RecordContext via ProcessorContext

commit 63aacc465510bfc8cdb0b675b2d4ebb041a5e99c
Author: Eno Thereska 
Date:   2016-08-23T13:18:19Z

Pass at the 'all' method using cache iterator

commit af6686fb298c2417c5eff9a46cbb8b898df894df
Author: Eno Thereska 
Date:   2016-08-23T13:18:24Z

Merge branch 'KAFKA-3776-poc' of https://github.com/enothereska/kafka into 
KAFKA-3776-poc

commit 235b8612a121283841f8b3b8795ff6ff5c425c07
Author: Damian Guy 
Date:   2016-08-24T09:24:02Z

disable caching for joins. expose enableCaching method on 
PersistentKeyValueFactory

commit a6986a66b08d653b4aa019557f39f0713949fe83
Author: Eno Thereska 
Date:   2016-08-24T10:17:30Z

Cleanup memory cache

commit 5b71d68d84007a03d59a3095467785da461d3c20
Author: Eno Thereska 
Date:   2016-08-24T10:25:44Z

Merged

commit 7658e02a4b203d09d607c0a510d0ae3421eb71a9
Author: Damian Guy 
Date:   2016-08-24T11:21:35Z

forward before changelog.

commit 91bab64db1bf3673169879c80a15af67f4670b70
Author: Damian Guy 
Date:   2016-08-24T11:21:40Z

Merge changes from Eno

commit fef6cae098b44cf2e715db00c84d97650114802f
Author: Eno Thereska 
Date:   2016-08-24T13:19:12Z

Flush order should be top-down

commit e1bcd35730a10c06a6e49d3882d787611b106f95
Author: Eno Thereska 
Date:   2016-08-24T15:50:56Z

Merged with trunk

commit 591d4802383d6e8dc0a3e0e97cd619dd99d3db0e
Author: Damian Guy 
Date:   2016-08-24T19:35:28Z

extract caching out of store

commit dbd383c9066de031f48e8539ad071834b294b7b0
Author: Damian Guy 
Date:   2016-08-25T06:30:27Z

Merge pull request #1 from enothereska/dg-3776-poc

extract caching out of store

commit 290a66b087847ceb67640574a7a6fc8c8e4d33db
Author: Damian Guy 
Date:   2016-08-25T07:12:55Z

rename MergedSortedCacheRocksDBIterator -> 
MergedSortedCacheKeyValueStoreIterator

commit 

[GitHub] kafka pull request #1752: KAFKA-3776: Unify store and downstream caching in ...

2016-09-07 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1752


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469699#comment-15469699
 ] 

Balázs Barnabás commented on KAFKA-4126:


Yes, in my opinion that would be useful; a small typo caused quite a lot of 
head-scratching on our side.

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Priority: Minor
>
> When a producer sends a ProducerRecord to a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.
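
Until such a log message exists, one way for the application to surface the failure 
itself is to pass a Callback to send() (or block on the returned Future); a minimal 
sketch, with "no-such-topic" standing in for a topic name containing a typo:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SendWithCallbackExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "no-such-topic" stands in for a topic name with a typo in it.
            producer.send(new ProducerRecord<>("no-such-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null)
                            System.err.println("send failed: " + exception);
                        else
                            System.out.println("sent to " + metadata.topic() + "-" + metadata.partition());
                    });
            producer.flush();
        }
    }
}
{code}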



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469675#comment-15469675
 ] 

ASF GitHub Bot commented on KAFKA-4058:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1767


> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
> Fix For: 0.10.1.0
>
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> 
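
As an aside, a minimal sketch of how the "expected:<0> but was:<1>" failure above can arise from a plain JUnit equality assertion. Whether cleanGlobal is in fact comparing an exit/return code is an assumption here; runResetStep below is purely illustrative and not part of the actual test.

{code}
import static org.junit.Assert.assertEquals;

public class ExitCodeAssertionSketch {
    // Hypothetical stand-in for whatever value cleanGlobal checks; returning 1 simulates the failing run.
    static int runResetStep() {
        return 1;
    }

    public static void main(String[] args) {
        // Throws java.lang.AssertionError: expected:<0> but was:<1>, matching the stack trace above.
        assertEquals(0, runResetStep());
    }
}
{code}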

[GitHub] kafka pull request #1767: KAFKA-4058: Failure in org.apache.kafka.streams.in...

2016-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1767


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-09-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4058:
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1767
[https://github.com/apache/kafka/pull/1767]

> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
> Fix For: 0.10.1.0
>
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
>