[jira] [Commented] (KAFKA-3949) Consumer topic subscription change may be ignored if a rebalance is in progress

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429223#comment-15429223
 ] 

ASF GitHub Bot commented on KAFKA-3949:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1762


> Consumer topic subscription change may be ignored if a rebalance is in 
> progress
> ---
>
> Key: KAFKA-3949
> URL: https://issues.apache.org/jira/browse/KAFKA-3949
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> The consumer's regex subscription works by matching all topics fetched from a 
> metadata update against the provided pattern. When a new topic is created or 
> an old topic is deleted, we update the list of subscribed topics and request 
> a rebalance by setting the {{needsPartitionAssignment}} flag inside 
> {{SubscriptionState}}. On the next call to {{poll()}}, the consumer will 
> observe the flag and begin the rebalance by sending a JoinGroup. The problem 
> is that it does not account for the fact that a rebalance could already be in 
> progress at the time the metadata is updated. This causes the following 
> sequence:
> 1. Rebalance begins (needsPartitionAssignment is set True)
> 2. Metadata max age expires and an update is triggered
> 3. Update returns and causes a topic subscription change 
> (needsPartitionAssignment set again to True).
> 4. Rebalance completes (needsPartitionAssignment is set False)
> In this situation, we will not request a new rebalance, which prevents us 
> from receiving an assignment for any topics added to the consumer's 
> subscription when the metadata was updated. This state will persist until 
> another event causes the group to rebalance.
> A related problem may occur if a rebalance is interrupted with the wakeup() 
> API, and the user calls subscribe(topics) with a change to the subscription 
> set. 
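
The patch referenced above addresses this lost-update race. As a rough
illustration only (not the actual {{SubscriptionState}} code or the change in
the pull request), the following Java sketch shows why a single boolean flag
swallows a request made while a rebalance is in flight, and how a
request/completion sequence number avoids it:

{code:java}
// Hypothetical sketch of the race described above, in the style of the
// numbered sequence: steps 1 and 3 both set a request, step 4 clears it.
class RebalanceState {
    private int requestedSeq = 0;  // bumped on every subscription change
    private int completedSeq = 0;  // advanced when a rebalance finishes

    // Steps 1 and 3: a metadata update or subscribe() asks for a rebalance.
    synchronized int requestRebalance() {
        return ++requestedSeq;
    }

    // Step 4: the rebalance that observed `startedSeq` completes. A plain
    // boolean would be cleared unconditionally here, losing the request made
    // in step 3; recording which request this rebalance served does not.
    synchronized void completeRebalance(int startedSeq) {
        completedSeq = startedSeq;
    }

    // poll() checks this to decide whether to send another JoinGroup.
    synchronized boolean needsRejoin() {
        return completedSeq < requestedSeq;
    }
}
{code}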



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3949) Consumer topic subscription change may be ignored if a rebalance is in progress

2016-08-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3949.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1762
[https://github.com/apache/kafka/pull/1762]

> Consumer topic subscription change may be ignored if a rebalance is in 
> progress
> ---
>
> Key: KAFKA-3949
> URL: https://issues.apache.org/jira/browse/KAFKA-3949
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> The consumer's regex subscription works by matching all topics fetched from a 
> metadata update against the provided pattern. When a new topic is created or 
> an old topic is deleted, we update the list of subscribed topics and request 
> a rebalance by setting the {{needsPartitionAssignment}} flag inside 
> {{SubscriptionState}}. On the next call to {{poll()}}, the consumer will 
> observe the flag and begin the rebalance by sending a JoinGroup. The problem 
> is that it does not account for the fact that a rebalance could already be in 
> progress at the time the metadata is updated. This causes the following 
> sequence:
> 1. Rebalance begins (needsPartitionAssignment is set True)
> 2. Metadata max age expires and an update is triggered
> 3. Update returns and causes a topic subscription change 
> (needsPartitionAssignment set again to True).
> 4. Rebalance completes (needsPartitionAssignment is set False)
> In this situation, we will not request a new rebalance, which prevents us 
> from receiving an assignment for any topics added to the consumer's 
> subscription when the metadata was updated. This state will persist until 
> another event causes the group to rebalance.
> A related problem may occur if a rebalance is interrupted with the wakeup() 
> API, and the user calls subscribe(topics) with a change to the subscription 
> set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1762: KAFKA-3949: Fix race condition when metadata updat...

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1762


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1487

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4016: Added join benchmarks

--
[...truncated 6370 lines...]
kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
STARTED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic STARTED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets STARTED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK STARTED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testNotController STARTED

kafka.server.CreateTopicsRequestTest > testNotController PASSED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK STARTED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot STARTED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration STARTED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.ServerStartupTest > testBrokerSelfAware STARTED

kafka.server.ServerStartupTest > testBrokerSelfAware PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
STARTED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #827

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4016: Added join benchmarks

--
[...truncated 12090 lines...]
org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED


Kafka KIP meeting Aug 23 at 11:00am PST

2016-08-19 Thread Jun Rao
Hi, everyone,

We plan to have a Kafka KIP meeting this coming Tuesday at 11:00am PST. If
you plan to attend but haven't received an invite, please let me know. The
following is the tentative agenda.

Agenda:

Quick status check on the KIPs that have been voted on:
   - KIP-33: Add a time based log index
   - KIP-50: Move Authorizer to o.a.k.common package
   - KIP-55: Secure Quotas for Authenticated Users
   - KIP-67: Queryable state for Kafka Streams
   - KIP-70: Revise Partition Assignment Semantics on New Consumer's
Subscription Change

Discuss some of the unvoted KIPs that are currently active and may need
more discussion:
   - Time-based releases for Apache Kafka
   - Java 7 support timeline
   - KIP-4: ACL Admin Schema
   - KIP-73: Replication Quotas
   - KIP-74: Add FetchResponse size limit in bytes

Thanks,

Jun


[jira] [Created] (KAFKA-4068) FileSinkTask - use JsonConverter to serialize

2016-08-19 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4068:
--

 Summary: FileSinkTask - use JsonConverter to serialize
 Key: KAFKA-4068
 URL: https://issues.apache.org/jira/browse/KAFKA-4068
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Shikhar Bhushan
Assignee: Shikhar Bhushan
Priority: Minor


People new to Connect often try hooking up, e.g., a Kafka topic with Avro 
data to the file sink connector, only to find the file contains values like:

{noformat}
org.apache.kafka.connect.data.Struct@ca1bf85a
org.apache.kafka.connect.data.Struct@c298db6a
org.apache.kafka.connect.data.Struct@44108fbd
{noformat}

This is because the {{FileSinkConnector}} is currently meant as a toy example 
that expects the schema to be {{Schema.STRING_SCHEMA}}, though it just calls 
{{toString()}} on the value without verifying that. 

A better experience would probably be if we used 
{{JsonConverter.fromConnectData()}} for serializing to the file.
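
As a minimal sketch of that suggestion (a standalone Java program with the
Connect data and JSON converter classes on the classpath; the topic name and
schema here are made up for illustration):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.json.JsonConverter;

public class StructToJson {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        // isKey = false: we are converting record values. Disabling the schema
        // envelope writes plain JSON instead of a {schema, payload} wrapper.
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);

        Schema schema = SchemaBuilder.struct().field("name", Schema.STRING_SCHEMA).build();
        Struct value = new Struct(schema).put("name", "example");

        // Instead of value.toString() (which prints Struct@ca1bf85a), serialize:
        byte[] json = converter.fromConnectData("some-topic", schema, value);
        System.out.println(new String(json, StandardCharsets.UTF_8));  // {"name":"example"}
    }
}
{code}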



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-19 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429000#comment-15429000
 ] 

Onur Karaman commented on KAFKA-4051:
-

Is there a legitimate reason for rolling back your clock, especially by an 
hour? Unless there is one, I don't see the value in investigating a solution 
further.

Based on some quick reading, when the delta between times grows large (it was 
hard for me to tell whether the threshold is 128 ms or 128 s; the docs mention 
both), ntpd may either step the time back or do slew-based correction, 
depending on how you configure it.

warning: I know next to nothing about ntp!

reference: http://doc.ntp.org/4.2.4/ntpd.html#op

> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
>
> If a rebalance is performed after turning the OS clock back, the Kafka 
> server enters a loop and the rebalance cannot be completed until the 
> system returns to the previous date/time.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be the 
> owner of all the partitions.
> - Turn the system (OS) clock back, for instance by 1 hour.
> - Start a new consumer for TOPIC_NAME using the same group id; it will force 
> a rebalance.
> After these actions the Kafka server logs constantly display the messages 
> below, and after a while both consumers stop receiving messages. This 
> condition lasts at least as long as the clock was turned back (1 hour in this 
> example), after which Kafka comes back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Because some systems may have NTP enabled, or an administrator may need to 
> change the system clock (date/time), it's important to be able to do so 
> safely. Currently the only safe way is to follow these steps:
> 1. Tear down the Kafka server.
> 2. Change the date/time.
> 3. Bring the Kafka server back up.
> But this approach only works if the change is performed by an administrator, 
> not by NTP. Also, in many systems taking down the Kafka server might cause 
> information to be lost.
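
For what it's worth, the general hazard here can be shown in a few lines of
Java: a deadline computed from the wall clock is delayed by exactly the size
of a backwards step, while one computed from a monotonic clock is unaffected.
This is only a sketch of the hazard class, not the broker's actual timing
code:

{code:java}
public class ClockStepSketch {
    public static void main(String[] args) {
        // Wall-clock deadline: if the OS clock is stepped back 1 hour after
        // this line, expiry is delayed by that hour, matching the report.
        long wallDeadline = System.currentTimeMillis() + 30_000;
        boolean wallExpired = System.currentTimeMillis() >= wallDeadline;

        // Monotonic deadline: System.nanoTime() is immune to wall-clock steps.
        long monoDeadline = System.nanoTime() + 30_000_000_000L;
        boolean monoExpired = System.nanoTime() - monoDeadline >= 0;  // overflow-safe

        System.out.println("wall expired: " + wallExpired + ", mono expired: " + monoExpired);
    }
}
{code}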



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4016) Kafka Streams join benchmark

2016-08-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4016.
--
Resolution: Fixed

Issue resolved by pull request 1700
[https://github.com/apache/kafka/pull/1700]

> Kafka Streams join benchmark
> 
>
> Key: KAFKA-4016
> URL: https://issues.apache.org/jira/browse/KAFKA-4016
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> As part of the streams benchmark, we need benchmarks for KStream-KStream, 
> KStream-KTable and KTable-KTable joins. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4016) Kafka Streams join benchmark

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428946#comment-15428946
 ] 

ASF GitHub Bot commented on KAFKA-4016:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1700


> Kafka Streams join benchmark
> 
>
> Key: KAFKA-4016
> URL: https://issues.apache.org/jira/browse/KAFKA-4016
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> As part of the streams benchmark, we need benchmarks for KStream-KStream, 
> KStream-KTable and KTable-KTable joins. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1700: KAFKA-4016: Added join benchmarks

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1700


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #826

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-4056: Kafka logs values of sensitive configs like passwords

[cshapi] KAFKA-4053: remove redundant if/else statements in TopicCommand

--
[...truncated 12095 lines...]
org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #1486

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-4050; Allow configuration of the PRNG used for SSL

[cshapi] KAFKA-4056: Kafka logs values of sensitive configs like passwords

[cshapi] KAFKA-4053: remove redundant if/else statements in TopicCommand

--
[...truncated 12053 lines...]
org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED


[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2016-08-19 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428880#comment-15428880
 ] 

Bharat Viswanadham commented on KAFKA-2629:
---

[~singhashish] Are you working on this?
If you are busy with other tasks and can share more information on how the 
implementation should be handled, I can take up this task.


> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via properties file or via command line argument. Both of these are not 
> recommended security practices.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.
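
A minimal Java sketch of the executable-based approach: the idea of running a
command and reading its stdout comes from the proposal above, but this helper
(and any config name that would wire it up) is hypothetical:

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ExecPasswordProvider {
    // Runs the configured executable and returns the first line of its stdout
    // as the password, so the secret never appears in a properties file or on
    // the Kafka command line.
    public static String fetchPassword(String command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command.split("\\s+")).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
            String password = r.readLine();
            if (p.waitFor() != 0 || password == null) {
                throw new IOException("password command failed: " + command);
            }
            return password;
        }
    }
}
{code}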



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #825

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-4050; Allow configuration of the PRNG used for SSL

--
[...truncated 12070 lines...]
org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #1485

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3163; Add time based index to Kafka.

--
[...truncated 12048 lines...]
org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 

Re: [VOTE] KIP-73 - Replication Quotas

2016-08-19 Thread Gwen Shapira
+1 (binding)

On Fri, Aug 19, 2016 at 1:21 AM, Ben Stopford  wrote:
> I’d like to initiate the voting process for KIP-73:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
>
> Ben



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: kafka-0.10.0-jdk7 #191

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-4050; Allow configuration of the PRNG used for SSL

--
[...truncated 6131 lines...]

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedAfterWindowStart PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldExcludeRecordsThatHappenedBeforeWindowStart PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > nameMustNotBeEmpty PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > nameMustNotBeNull PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > nameMustNotBeEmpty PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > nameMustNotBeNull PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.KafkaStreamsTest > classMethod FAILED
java.lang.OutOfMemoryError: Java heap space

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter PASSED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testAddRemove 
PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > testGrouping 
PASSED


Re: [VOTE] KIP-73 - Replication Quotas

2016-08-19 Thread Sriram Subramanian
I think the manual way of setting the throttling value is a good first step
and definitely required when things go bad. We should continue discussing
how we can incrementally build more intelligence on top of this.

+1 (binding)

On Fri, Aug 19, 2016 at 1:21 AM, Ben Stopford  wrote:

> I’d like to initiate the voting process for KIP-73:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
>
> Ben


[jira] [Commented] (KAFKA-4053) Refactor TopicCommand to remove redundant if/else statements

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428650#comment-15428650
 ] 

ASF GitHub Bot commented on KAFKA-4053:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1751


> Refactor TopicCommand to remove redundant if/else statements
> 
>
> Key: KAFKA-4053
> URL: https://issues.apache.org/jira/browse/KAFKA-4053
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Shuai Zhang
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> In TopicCommand, there are a lot of redundant if/else statements, such as
> ```val ifNotExists = if (opts.options.has(opts.ifNotExistsOpt)) true else 
> false```
> We can refactor it as the following statement:
> ```val ifNotExists = opts.options.has(opts.ifNotExistsOpt)```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4053) Refactor TopicCommand to remove redundant if/else statements

2016-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-4053.
-
   Resolution: Fixed
Fix Version/s: (was: 0.10.0.2)
   0.10.1.0

Issue resolved by pull request 1751
[https://github.com/apache/kafka/pull/1751]

> Refactor TopicCommand to remove redundant if/else statements
> 
>
> Key: KAFKA-4053
> URL: https://issues.apache.org/jira/browse/KAFKA-4053
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Shuai Zhang
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> In TopicCommand, there are a lot of redundant if/else statements, such as
> ```val ifNotExists = if (opts.options.has(opts.ifNotExistsOpt)) true else 
> false```
> We can refactor it as the following statement:
> ```val ifNotExists = opts.options.has(opts.ifNotExistsOpt)```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1751: KAFKA-4053: remove redundant if/else statements in...

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1751


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4056) Kafka logs values of sensitive configs like passwords

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428597#comment-15428597
 ] 

ASF GitHub Bot commented on KAFKA-4056:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1759


> Kafka logs values of sensitive configs like passwords
> -
>
> Key: KAFKA-4056
> URL: https://issues.apache.org/jira/browse/KAFKA-4056
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: jaikiran pai
>Assignee: Mickael Maison
> Fix For: 0.10.1.0
>
>
> From the mail discussion here: 
> https://www.mail-archive.com/dev@kafka.apache.org/msg55012.html
> {quote}
> We are using 0.9.0.1 of the Kafka (Java) libraries for our Kafka consumers and 
> producers. In one of our consumers, our consumer config had an SSL-specific 
> property which ended up being used against a non-SSL Kafka broker port. As a 
> result, the logs ended up containing messages like:
> 17:53:33,722 WARN [o.a.k.c.c.ConsumerConfig] - The configuration 
> *ssl.truststore.password = foobar* was supplied but isn't a known config.
> The log message is fine and makes sense, but can Kafka please not log the 
> values of the properties and instead just include the config name which it 
> considers unknown? That way it won't end up logging these potentially 
> sensitive values. I understand that only those with access to these log files 
> can end up seeing these values, but even then some of our internal processes 
> forbid logging such sensitive information. This log message will 
> still be useful if only the config name is logged without the 
> value. 
> {quote}
> Apparently (as noted in that thread), there's already code in the Kafka 
> library which masks sensitive values like passwords, but it looks like 
> there's a bug where it unintentionally logs these raw values.
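
A minimal sketch of the requested behavior, warning about unrecognized configs
by name only; this is an illustration of the idea, not the actual
{{ConfigDef}}/{{ConsumerConfig}} fix in the pull request:

{code:java}
import java.util.Map;
import java.util.Set;

public class UnknownConfigLogger {
    // Never echo the value of an unrecognized config: it may be a password
    // that was simply supplied against the wrong kind of listener.
    static void warnUnknown(Map<String, ?> supplied, Set<String> knownKeys) {
        for (String key : supplied.keySet()) {
            if (!knownKeys.contains(key)) {
                // Log the name only, not "key = value".
                System.out.printf("The configuration '%s' was supplied but isn't a known config.%n", key);
            }
        }
    }
}
{code}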



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4056) Kafka logs values of sensitive configs like passwords

2016-08-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-4056.
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1759
[https://github.com/apache/kafka/pull/1759]

> Kafka logs values of sensitive configs like passwords
> -
>
> Key: KAFKA-4056
> URL: https://issues.apache.org/jira/browse/KAFKA-4056
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: jaikiran pai
>Assignee: Mickael Maison
> Fix For: 0.10.1.0
>
>
> From the mail discussion here: 
> https://www.mail-archive.com/dev@kafka.apache.org/msg55012.html
> {quote}
> We are using 0.9.0.1 of Kafka (Java) libraries for our Kafka consumers and 
> producers. In one of our consumers, our consumer config had an SSL-specific
> property which ended up being used against a non-SSL Kafka broker port. As a 
> result, the logs ended up seeing messages like:
> 17:53:33,722 WARN [o.a.k.c.c.ConsumerConfig] - The configuration 
> *ssl.truststore.password = foobar* was supplied but isn't a known config.
> The log message is fine and makes sense, but can Kafka please not log the 
> values of the properties and instead just include the config name which it 
> considers unknown? That way it won't end up logging these potentially
> sensitive values. I understand that only those with access to these log files 
> can end up seeing these values but even then some of our internal processes 
> forbid logging such sensitive information to the logs. This log message will 
> still end up being useful if only the config name is logged without the 
> value. 
> {quote}
> Apparently (as noted in that thread), there's already code in the Kafka 
> library which masks sensitive values like passwords, but it looks like 
> there's a bug where it unintentionally logs these raw values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1759: KAFKA-4056: Kafka logs values of sensitive configs...

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1759




[jira] [Updated] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-08-19 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4058:
---
Status: Patch Available  (was: In Progress)

> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[jira] [Resolved] (KAFKA-4050) Allow configuration of the PRNG used for SSL

2016-08-19 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-4050.
---
   Resolution: Fixed
Fix Version/s: 0.10.0.2
   0.10.1.0

Pushed to trunk and 0.10.0

> Allow configuration of the PRNG used for SSL
> 
>
> Key: KAFKA-4050
> URL: https://issues.apache.org/jira/browse/KAFKA-4050
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Todd Palino
>Assignee: Todd Palino
>  Labels: security, ssl
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> This change will make the pseudo-random number generator (PRNG) 
> implementation used by the SSLContext configurable. The configuration is not 
> required, and the default is to use whatever the default PRNG for the JDK/JRE 
> is. Providing a string, such as "SHA1PRNG", will cause that specific 
> SecureRandom implementation to get passed to the SSLContext.
> When enabling inter-broker SSL in our certification cluster, we observed 
> severe performance issues. For reference, this cluster can take up to 600 
> MB/sec of inbound produce traffic over SSL, with RF=2, before it gets close 
> to saturation, and the mirror maker normally produces about 400 MB/sec 
> (unless it is lagging). When we enabled inter-broker SSL, we saw persistent 
> replication problems in the cluster at any inbound rate of more than about 6 
> or 7 MB/sec per-broker. This was narrowed down to all the network threads 
> blocking on a single lock in the SecureRandom code.
> It turns out that the default PRNG implementation on Linux is NativePRNG. 
> This uses randomness from /dev/urandom (which, by itself, is a non-blocking 
> read) and mixes it with randomness from SHA1. The problem is that the entire 
> application shares a single SecureRandom instance, and NativePRNG has a 
> global lock within the implNextBytes method. Switching to another 
> implementation (SHA1PRNG, which has better performance characteristics and is 
> still considered secure) completely eliminated the bottleneck and allowed the 
> cluster to work properly at saturation.
> The SSLContext initialization has an optional argument to provide a 
> SecureRandom instance, which the code currently sets to null. This change 
> creates a new config to specify an implementation, and instantiates that and 
> passes it to SSLContext if provided. This will also let someone select a 
> stronger source of randomness (obviously at a performance cost) if desired.
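
A Scala sketch of the mechanism described above (the plumbing is illustrative, not Kafka's actual SslFactory code): the configured implementation name, if any, is turned into a SecureRandom and handed to SSLContext.init, while null keeps the JDK default:

```
import java.security.SecureRandom
import javax.net.ssl.{KeyManager, SSLContext, TrustManager}

object SslPrngSketch {
  // If no implementation is configured, pass null so the JDK default is used.
  def buildContext(prngImpl: Option[String],
                   keyManagers: Array[KeyManager],
                   trustManagers: Array[TrustManager]): SSLContext = {
    val secureRandom = prngImpl.map(impl => SecureRandom.getInstance(impl)).orNull
    val ctx = SSLContext.getInstance("TLS")
    ctx.init(keyManagers, trustManagers, secureRandom)
    ctx
  }
}

// e.g. buildContext(Some("SHA1PRNG"), kms, tms) sidesteps NativePRNG's global lock
```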



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1747: KAFKA-4050: Allow configuration of the PRNG used f...

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1747




[jira] [Commented] (KAFKA-4050) Allow configuration of the PRNG used for SSL

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428564#comment-15428564
 ] 

ASF GitHub Bot commented on KAFKA-4050:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1747


> Allow configuration of the PRNG used for SSL
> 
>
> Key: KAFKA-4050
> URL: https://issues.apache.org/jira/browse/KAFKA-4050
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Todd Palino
>Assignee: Todd Palino
>  Labels: security, ssl
>
> This change will make the pseudo-random number generator (PRNG) 
> implementation used by the SSLContext configurable. The configuration is not 
> required, and the default is to use whatever the default PRNG for the JDK/JRE 
> is. Providing a string, such as "SHA1PRNG", will cause that specific 
> SecureRandom implementation to get passed to the SSLContext.
> When enabling inter-broker SSL in our certification cluster, we observed 
> severe performance issues. For reference, this cluster can take up to 600 
> MB/sec of inbound produce traffic over SSL, with RF=2, before it gets close 
> to saturation, and the mirror maker normally produces about 400 MB/sec 
> (unless it is lagging). When we enabled inter-broker SSL, we saw persistent 
> replication problems in the cluster at any inbound rate of more than about 6 
> or 7 MB/sec per-broker. This was narrowed down to all the network threads 
> blocking on a single lock in the SecureRandom code.
> It turns out that the default PRNG implementation on Linux is NativePRNG. 
> This uses randomness from /dev/urandom (which, by itself, is a non-blocking 
> read) and mixes it with randomness from SHA1. The problem is that the entire 
> application shares a single SecureRandom instance, and NativePRNG has a 
> global lock within the implNextBytes method. Switching to another 
> implementation (SHA1PRNG, which has better performance characteristics and is 
> still considered secure) completely eliminated the bottleneck and allowed the 
> cluster to work properly at saturation.
> The SSLContext initialization has an optional argument to provide a 
> SecureRandom instance, which the code currently sets to null. This change 
> creates a new config to specify an implementation, and instantiates that and 
> passes it to SSLContext if provided. This will also let someone select a 
> stronger source of randomness (obviously at a performance cost) if desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-71 Enable log compaction and deletion to co-exist

2016-08-19 Thread Damian Guy
Thanks Jun.

Everyone - I've updated the KIP to use a comma-separated list of
cleanup.policies as suggested. I know we have already voted on this
proposal, so are there any objections to this change?

Thanks,
Damian
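
For readers following along, a sketch of what the comma-separated form would look like as topic-level configs once the KIP lands (Scala; values are illustrative):

```
import java.util.Properties

object CleanupPolicySketch extends App {
  val topicConfig = new Properties()
  topicConfig.put("cleanup.policy", "compact,delete") // rather than "compact_and_delete"
  topicConfig.put("retention.ms", "604800000")        // deletion still bounds growth (7 days)
}
```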

On Fri, 19 Aug 2016 at 18:38 Jun Rao  wrote:

> Damian,
>
> Yes, using comma-separated policies does seem more extensible for the
> future. If we want to adopt it, it's better to do it as part of this KIP.
> Perhaps you can just update the KIP and ask this thread to see if there are
> any objections to the change.
>
> Thanks,
>
> Jun
>
> On Fri, Aug 19, 2016 at 10:01 AM, Damian Guy  wrote:
>
> > Hi Grant,
> >
> > I apologise - I seem to have skipped over Joel's email.
> > It is not something we considered, but seems valid.
> > I'm not sure if we should do it as part of this KIP or revisit it if/when
> > we have more cleanup policies?
> >
> > Thanks,
> > Damian
> >
> > On Fri, 19 Aug 2016 at 15:57 Grant Henke  wrote:
> >
> > > Thanks for this KIP, Damian.
> > >
> > > I know this vote has passed and I think it is okay as is, but I had
> > > similar thoughts as Joel about combining compaction policies.
> > >
> > > I'm just wondering if it makes sense to allow specifying multiple
> > > > comma-separated policies "compact,delete" as opposed to
> > > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come
> up
> > > > with more policies. The order could potentially indicate precedence.
> > > >
> > >
> > > Out of curiosity was the option of supporting a list of policies
> > rejected?
> > > Is it something we may consider adding later but didn't want to include
> > in
> > > the scope of this KIP?
> > >
> > > Thanks,
> > > Grant
> > >
> > >
> > >
> > > On Mon, Aug 15, 2016 at 7:25 PM, Joel Koshy 
> wrote:
> > >
> > > > Thanks for the proposal. I'm +1 overall with a thought somewhat
> related
> > > to
> > > > Jun's comments.
> > > >
> > > > While there may not yet be a sensible use case for it, it should be
> (in
> > > > theory) legal to have compact_and_delete with size based retention as
> > > well.
> > > > I'm just wondering if it makes sense to allow specifying multiple
> > > > comma-separated policies "compact,delete" as opposed to
> > > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come
> up
> > > > with more policies. The order could potentially indicate precedence.
> > > > Anyway, it is just a thought - it may end up being very confusing for
> > > > users.
> > > >
> > > > @Jason - I agree this could be used to handle offset expiration as
> > well.
> > > We
> > > > can discuss that separately though; and if we do that we would want
> to
> > > also
> > > > deprecate the retention field in the commit requests.
> > > >
> > > > Joel
> > > >
> > > > On Mon, Aug 15, 2016 at 2:07 AM, Damian Guy 
> > > wrote:
> > > >
> > > > > Thanks Jason.
> > > > > The log retention.ms will be set to a value that is greater than the
> > > > > window retention time. So as windows expire, they eventually get
> > > > > cleaned up by the broker. It doesn't matter if old windows are around
> > > > > for some time beyond their usefulness, more that they do eventually
> > > > > get removed and the log doesn't grow indefinitely (as it does now).
> > > > >
> > > > > Damian
> > > > >
> > > > > On Fri, 12 Aug 2016 at 20:25 Jason Gustafson 
> > > wrote:
> > > > >
> > > > > > Hey Damian,
> > > > > >
> > > > > > That's true, but it would avoid the need to write tombstones for
> > the
> > > > > > expired offsets I guess. I'm actually not sure it's a great idea
> > > > anyway.
> > > > > > One thing we've talked about is not expiring any offsets as long
> > as a
> > > > > group
> > > > > > is alive, which would require some custom expiration logic.
> There's
> > > > also
> > > > > > the fact that we'd risk expiring group metadata which is stored
> in
> > > the
> > > > > same
> > > > > > log. Having a builtin expiration mechanism may be more useful for
> > the
> > > > > > compacted topics we maintain in Connect, but I think there too we
> > > might
> > > > > > need some custom logic. For example, expiring connector configs
> > > purely
> > > > > > based on time doesn't seem like what we'd want.
> > > > > >
> > > > > > By the way, I wonder if you could describe the expected usage in
> a
> > > > little
> > > > > > more detail in the KIP for those of us who are not as familiar
> with
> > > > Kafka
> > > > > > Streams. Is the intention to have the log retain only the most
> > recent
> > > > > > window? In that case, would you always set the log retention time
> > to
> > > > the
> > > > > > window length? And I suppose a consumer would do a seek to the
> > start
> > > of
> > > > > the
> > > > > > window (as soon as KIP-33 is available) and consume from there in
> > > order
> > > > > to
> > > > > > read the current state?
> > > > > >
> > > > > > Thanks,
> > 

Build failed in Jenkins: kafka-trunk-jdk8 #824

2016-08-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3163; Add time based index to Kafka.

--
[...truncated 3369 lines...]

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED


Re: [DISCUSS] KIP-71 Enable log compaction and deletion to co-exist

2016-08-19 Thread Jun Rao
Damian,

Yes, using comma-separated policies does seem more extensible for the
future. If we want to adopt it, it's better to do it as part of this KIP.
Perhaps you can just update the KIP and ask this thread to see if there are
any objections to the change.

Thanks,

Jun

On Fri, Aug 19, 2016 at 10:01 AM, Damian Guy  wrote:

> Hi Grant,
>
> I apologise - I seem to have skipped over Joel's email.
> It is not something we considered, but seems valid.
> I'm not sure if we should do it as part of this KIP or revisit it if/when
> we have more cleanup policies?
>
> Thanks,
> Damian
>
> On Fri, 19 Aug 2016 at 15:57 Grant Henke  wrote:
>
> > Thanks for this KIP, Damian.
> >
> > I know this vote has passed and I think it is okay as is, but I had
> > similar thoughts as Joel about combining compaction policies.
> >
> > I'm just wondering if it makes sense to allow specifying multiple
> > > comma-separated policies "compact,delete" as opposed to
> > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > > with more policies. The order could potentially indicate precedence.
> > >
> >
> > Out of curiosity was the option of supporting a list of policies
> rejected?
> > Is it something we may consider adding later but didn't want to include
> in
> > the scope of this KIP?
> >
> > Thanks,
> > Grant
> >
> >
> >
> > On Mon, Aug 15, 2016 at 7:25 PM, Joel Koshy  wrote:
> >
> > > Thanks for the proposal. I'm +1 overall with a thought somewhat related
> > to
> > > Jun's comments.
> > >
> > > While there may not yet be a sensible use case for it, it should be (in
> > > theory) legal to have compact_and_delete with size based retention as
> > well.
> > > I'm just wondering if it makes sense to allow specifying multiple
> > > comma-separated policies "compact,delete" as opposed to
> > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > > with more policies. The order could potentially indicate precedence.
> > > Anyway, it is just a thought - it may end up being very confusing for
> > > users.
> > >
> > > @Jason - I agree this could be used to handle offset expiration as
> well.
> > We
> > > can discuss that separately though; and if we do that we would want to
> > also
> > > deprecate the retention field in the commit requests.
> > >
> > > Joel
> > >
> > > On Mon, Aug 15, 2016 at 2:07 AM, Damian Guy 
> > wrote:
> > >
> > > > Thanks Jason.
> > > > The log retention.ms will be set to a value that is greater than the
> > > > window retention time. So as windows expire, they eventually get
> > > > cleaned up by the broker. It doesn't matter if old windows are around
> > > > for some time beyond their usefulness, more that they do eventually
> > > > get removed and the log doesn't grow indefinitely (as it does now).
> > > >
> > > > Damian
> > > >
> > > > On Fri, 12 Aug 2016 at 20:25 Jason Gustafson 
> > wrote:
> > > >
> > > > > Hey Damian,
> > > > >
> > > > > That's true, but it would avoid the need to write tombstones for
> the
> > > > > expired offsets I guess. I'm actually not sure it's a great idea
> > > anyway.
> > > > > One thing we've talked about is not expiring any offsets as long
> as a
> > > > group
> > > > > is alive, which would require some custom expiration logic. There's
> > > also
> > > > > the fact that we'd risk expiring group metadata which is stored in
> > the
> > > > same
> > > > > log. Having a builtin expiration mechanism may be more useful for
> the
> > > > > compacted topics we maintain in Connect, but I think there too we
> > might
> > > > > need some custom logic. For example, expiring connector configs
> > purely
> > > > > based on time doesn't seem like what we'd want.
> > > > >
> > > > > By the way, I wonder if you could describe the expected usage in a
> > > little
> > > > > more detail in the KIP for those of us who are not as familiar with
> > > Kafka
> > > > > Streams. Is the intention to have the log retain only the most
> recent
> > > > > window? In that case, would you always set the log retention time
> to
> > > the
> > > > > window length? And I suppose a consumer would do a seek to the
> start
> > of
> > > > the
> > > > > window (as soon as KIP-33 is available) and consume from there in
> > order
> > > > to
> > > > > read the current state?
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Fri, Aug 12, 2016 at 8:48 AM, Damian Guy 
> > > > wrote:
> > > > >
> > > > > > Thanks Jun
> > > > > >
> > > > > > On Fri, 12 Aug 2016 at 16:41 Jun Rao  wrote:
> > > > > >
> > > > > > > Hi, Damian,
> > > > > > >
> > > > > > > I was just wondering if we should disable size-based retention
> in
> > > the
> > > > > > > compact_and_delete mode. So, it sounds like that there could
> be a
> > > use
> > > > > > case
> > > > > > > for that. Since by default, the size-based retention is
> > 

Re: ZooKeeper performance issue for storing offset

2016-08-19 Thread Guozhang Wang
Hello Xin,

The ZK write performance, especially latency, depends on the underlying
hardware. AFAIK some organizations use SSD for their ZK clusters so that
the latency is less than 1ms.

There are some more discussions on how many partitions one should choose in
practice, and what ZK's impact on that choice is:

http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/


Guozhang


On Tue, Aug 16, 2016 at 10:18 AM, Xin Jin  wrote:

> Hi,
>
> I'm working on streaming systems in AMPLab at UC Berkeley. This article (
> https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka)
> mentioned the ZooKeeper performance issue when consumers store offsets in
> ZooKeeper.
>
> "In Kafka releases through 0.8.1.1, consumers commit their offsets to
> ZooKeeper. ZooKeeper does not scale extremely well (especially for writes)
> when there are a large number of offsets (i.e., consumer-count *
> partition-count)."
>
> Can anyone tell me in production scenarios, how many consumers and
> partitions do you have? How much write (offset update) traffic do you
> generate that ZooKeeper cannot handle?
>
> Thank you very much!
> Xin
>



-- 
-- Guozhang
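
As a back-of-the-envelope illustration of the write volume being asked about (all numbers are hypothetical; the point is only that the rate scales with consumer-count * partition-count):

```
object ZkOffsetWriteRate extends App {
  val consumers = 100
  val partitionsPerConsumer = 1000 // partitions each consumer commits offsets for
  val commitIntervalSec = 10
  val writesPerSec = consumers.toLong * partitionsPerConsumer / commitIntervalSec
  println(s"~$writesPerSec ZooKeeper writes/sec") // ~10,000/sec at these numbers
}
```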


Re: Should we have a KIP call?

2016-08-19 Thread Guozhang Wang
The list LGTM. And if the status check of the above five items is quick, I
would suggest focusing on the time-based release plan and especially the
Java 7 support timeline.


Guozhang

On Thu, Aug 18, 2016 at 11:22 AM, Jun Rao  wrote:

> Grant,
>
> That sounds like a good idea. I will send out an invite for this Tue at
> 11am. There are quite a few KIPs in your list and we probably can't cover
> them all in one call. Perhaps we can do a quick status check on those that
> have been voted
>- KIP-33: Add a time based log index
>- KIP-50: Move Authorizer to o.a.k.common package
>- KIP-55: Secure Quotas for Authenticated Users
>- KIP-67: Queryable state for Kafka Streams
>- KIP-70: Revise Partition Assignment Semantics on New Consumer's
> Subscription
> Change
>
> and discuss some of the unvoted ones that are currently active and may need
> more discussion?
>- Time-based releases for Apache Kafka
>- Java 7 support timeline
>- KIP-4: ACL Admin Schema
>- KIP-73: Replication Quotas
>- KIP-74: Add FetchResponse size limit in bytes
>
> Thanks,
>
> Jun
>
> On Thu, Aug 18, 2016 at 10:55 AM, Grant Henke  wrote:
>
> > I am thinking it might be a good time to have a Kafka KIP call. There
> are a
> > lot of KIPs and discussions in progress that could benefit from a "quick"
> > call to discuss, coordinate, and prioritize.
> >
> > Some of the voted topics we could discuss are:
> > (I didn't include ones that were just voted or will pass just before the
> > call)
> >
> >- KIP-33: Add a time based log index
> >- KIP-50: Move Authorizer to o.a.k.common package
> >- KIP-55: Secure Quotas for Authenticated Users
> >- KIP-67: Queryable state for Kafka Streams
> >- KIP-70: Revise Partition Assignment Semantics on New Consumer's
> >Subscription Change
> >
> > Some of the un-voted topics we could discuss are:
> >
> >- Time-based releases for Apache Kafka
> >- Java 7 support timeline
> >- KIP-4: ACL Admin Schema
> >- KIP-37 - Add Namespaces to Kafka
> >- KIP-48: Delegation token support for Kafka
> >- KIP-54: Sticky Partition Assignment Strategy
> >- KIP-63: Unify store and downstream caching in streams
> >- KIP-66: Add Kafka Connect Transformers to allow transformations to
> >messages
> >- KIP-72 Allow Sizing Incoming Request Queue in Bytes
> >- KIP-73: Replication Quotas
> >- KIP-74: Add FetchResponse size limit in bytes
> >
> > As a side note it may be worth moving some open KIPs to a "parked" list
> if
> > they are not being actively worked on. We can include a reason why as
> well.
> > Reasons could include being blocked, parked, dormant (no activity), or
> > abandoned (creator isn't working on it and others can pick it up). We
> would
> > need to ask the KIP creator or define some length of time before we call
> a
> > KIP abandoned and available for pickup.
> >
> > Some KIPs which may be candidates to be "parked" in a first pass are:
> >
> >- KIP-6 - New reassignment partition logic for rebalancing (dormant)
> >- KIP-14 - Tools standardization (dormant)
> >- KIP-17 - Add HighwaterMarkOffset to OffsetFetchResponse (dormant)
> >- KIP-18 - JBOD Support (dormant)
> >- KIP-23 - Add JSON/CSV output and looping options to
> >ConsumerGroupCommand (dormant)
> >- KIP-27 - Conditional Publish (dormant)
> >- KIP-30 - Allow for brokers to have plug-able consensus and meta data
> >storage sub systems (dormant)
> >- KIP-39: Pinning controller to broker (dormant)
> >- KIP-44 - Allow Kafka to have a customized security protocol
> (dormant)
> >- KIP-46 - Self Healing (dormant)
> >- KIP-47 - Add timestamp-based log deletion policy (blocked - by
> KIP-33)
> >- KIP-53 - Add custom policies for reconnect attempts to NetworkClient
> >- KIP-58 - Make Log Compaction Point Configurable (blocked - by
> KIP-33)
> >- KIP-61: Add a log retention parameter for maximum disk space usage
> >percentage (dormant)
> >- KIP-68 Add a consumed log retention before log retention (dormant)
> >
> > Thank you,
> > Grant
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-4067) improve the handling of IOException in log.close()

2016-08-19 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-4067:
--

 Summary: improve the handling of IOException in log.close()
 Key: KAFKA-4067
 URL: https://issues.apache.org/jira/browse/KAFKA-4067
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.1
Reporter: Jun Rao


While reviewing KIP-33, I noticed a few places where we can improve the
closing of a log.

1. In FileMessageSet.close(): We should probably call flush after trim, since
the latter can change the size of the file channel.

2. AbstractIndex.close(): We should call flush at the end.

3. In Log.close(), if we hit an IOException, currently we just log it and 
proceed with a clean shutdown. However, an IOException could leave the log or 
indexes (e.g., when calling flush()) in a corrupted state. So, we probably 
should halt the JVM in this case to force log recovery on restart.
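
An illustrative Scala sketch of the ordering suggested in points 1 and 2 (not Kafka's actual FileMessageSet/AbstractIndex code): truncation can change the channel size, so the flush comes after the trim and before the close:

```
import java.nio.channels.FileChannel
import java.nio.file.{Paths, StandardOpenOption}

class SegmentFileSketch(path: String, validBytes: Long) {
  private val channel = FileChannel.open(Paths.get(path),
    StandardOpenOption.READ, StandardOpenOption.WRITE)

  def close(): Unit = {
    channel.truncate(validBytes) // "trim" to the valid size first
    channel.force(true)          // then flush, so the final size and content are durable
    channel.close()
  }
}
```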



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-08-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3163:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1215
[https://github.com/apache/kafka/pull/1215]

> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
> Attachments: 00113931.log, 00113931.timeindex
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428469#comment-15428469
 ] 

ASF GitHub Bot commented on KAFKA-3163:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1215


> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
> Attachments: 00113931.log, 00113931.timeindex
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1215: KAFKA-3163: Add time based index to Kafka.

2016-08-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1215




Re: [DISCUSS] KIP-71 Enable log compaction and deletion to co-exist

2016-08-19 Thread Damian Guy
Hi Grant,

I apologise - I seem to have skipped over Joel's email.
It is not something we considered, but seems valid.
I'm not sure if we should do it as part of this KIP or revisit it if/when
we have more cleanup policies?

Thanks,
Damian

On Fri, 19 Aug 2016 at 15:57 Grant Henke  wrote:

> Thanks for this KIP, Damian.
>
> I know this vote has passed and I think it is okay as is, but I had
> similar thoughts as Joel about combining compaction policies.
>
> I'm just wondering if it makes sense to allow specifying multiple
> > comma-separated policies "compact,delete" as opposed to
> > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > with more policies. The order could potentially indicate precedence.
> >
>
> Out of curiosity was the option of supporting a list of policies rejected?
> Is it something we may consider adding later but didn't want to include in
> the scope of this KIP?
>
> Thanks,
> Grant
>
>
>
> On Mon, Aug 15, 2016 at 7:25 PM, Joel Koshy  wrote:
>
> > Thanks for the proposal. I'm +1 overall with a thought somewhat related
> to
> > Jun's comments.
> >
> > While there may not yet be a sensible use case for it, it should be (in
> > theory) legal to have compact_and_delete with size based retention as
> well.
> > I'm just wondering if it makes sense to allow specifying multiple
> > comma-separated policies "compact,delete" as opposed to
> > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > with more policies. The order could potentially indicate precedence.
> > Anyway, it is just a thought - it may end up being very confusing for
> > users.
> >
> > @Jason - I agree this could be used to handle offset expiration as well.
> We
> > can discuss that separately though; and if we do that we would want to
> also
> > deprecate the retention field in the commit requests.
> >
> > Joel
> >
> > On Mon, Aug 15, 2016 at 2:07 AM, Damian Guy 
> wrote:
> >
> > > Thanks Jason.
> > > The log retention.ms will be set to a value that is greater than the
> > > window retention time. So as windows expire, they eventually get cleaned
> > > up by the broker. It doesn't matter if old windows are around for some
> > > time beyond their usefulness, more that they do eventually get removed
> > > and the log doesn't grow indefinitely (as it does now).
> > >
> > > Damian
> > >
> > > On Fri, 12 Aug 2016 at 20:25 Jason Gustafson 
> wrote:
> > >
> > > > Hey Damian,
> > > >
> > > > That's true, but it would avoid the need to write tombstones for the
> > > > expired offsets I guess. I'm actually not sure it's a great idea
> > anyway.
> > > > One thing we've talked about is not expiring any offsets as long as a
> > > group
> > > > is alive, which would require some custom expiration logic. There's
> > also
> > > > the fact that we'd risk expiring group metadata which is stored in
> the
> > > same
> > > > log. Having a builtin expiration mechanism may be more useful for the
> > > > compacted topics we maintain in Connect, but I think there too we
> might
> > > > need some custom logic. For example, expiring connector configs
> purely
> > > > based on time doesn't seem like what we'd want.
> > > >
> > > > By the way, I wonder if you could describe the expected usage in a
> > little
> > > > more detail in the KIP for those of us who are not as familiar with
> > Kafka
> > > > Streams. Is the intention to have the log retain only the most recent
> > > > window? In that case, would you always set the log retention time to
> > the
> > > > window length? And I suppose a consumer would do a seek to the start
> of
> > > the
> > > > window (as soon as KIP-33 is available) and consume from there in
> order
> > > to
> > > > read the current state?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > > On Fri, Aug 12, 2016 at 8:48 AM, Damian Guy 
> > > wrote:
> > > >
> > > > > Thanks Jun
> > > > >
> > > > > On Fri, 12 Aug 2016 at 16:41 Jun Rao  wrote:
> > > > >
> > > > > > Hi, Damian,
> > > > > >
> > > > > > I was just wondering if we should disable size-based retention in
> > the
> > > > > > compact_and_delete mode. So, it sounds like that there could be a
> > use
> > > > > case
> > > > > > for that. Since by default, the size-based retention is
> infinite, I
> > > > think
> > > > > > we can just leave the KIP as it is.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Fri, Aug 12, 2016 at 12:10 AM, Damian Guy <
> damian@gmail.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > The only concrete example i can think of is a case for limiting
> > > disk
> > > > > > usage.
> > > > > > > Say, i had something like Connect running that was tracking
> > changes
> > > > in
> > > > > a
> > > > > > > database. Downstream i don't really care about every change, i
> > just
> > > > > want
> > > > > > > the 

[jira] [Commented] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428373#comment-15428373
 ] 

ASF GitHub Bot commented on KAFKA-4058:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1766

KAFKA-4058: Failure in 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

see #1765 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4058-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1766.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1766


commit 671bd4cfef5e123acca4ee47b129ed165d1107c4
Author: Matthias J. Sax 
Date:   2016-08-19T15:28:58Z

use AdminTool to check for active consumer group instead of sleep

commit faf4e0413563e268b7b239a9d2149b2f5f34c21c
Author: Matthias J. Sax 
Date:   2016-08-19T16:07:16Z

use AdminTool to check for active consumer group instead of sleep




> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[GitHub] kafka pull request #1766: KAFKA-4058: Failure in org.apache.kafka.streams.in...

2016-08-19 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1766

KAFKA-4058: Failure in 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

see #1765 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4058-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1766.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1766


commit 671bd4cfef5e123acca4ee47b129ed165d1107c4
Author: Matthias J. Sax 
Date:   2016-08-19T15:28:58Z

use AdminTool to check for active consumer group instead of sleep

commit faf4e0413563e268b7b239a9d2149b2f5f34c21c
Author: Matthias J. Sax 
Date:   2016-08-19T16:07:16Z

use AdminTool to check for active consumer group instead of sleep






[GitHub] kafka pull request #1764: MINOR: improve Streams application reset tool to m...

2016-08-19 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1764

MINOR: improve Streams application reset tool to make sure application is 
down



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka improveResetTool

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1764.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1764


commit 49b8f520842d4c7ad6bd8228fcca8694b4388de7
Author: Matthias J. Sax 
Date:   2016-08-19T15:32:46Z

MINOR: improve Streams application reset tool to make sure application is 
down






[GitHub] kafka pull request #1765: Improve reset tool 0.10.0

2016-08-19 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1765

Improve reset tool 0.10.0

@guozhangwang @miguno @dguy @enothereska @hjafarpour
See #1764 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka improveResetTool-0.10.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1765.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1765


commit 489bb8a120f7ee18a410f130015e4a46bc04c6f6
Author: Matthias J. Sax 
Date:   2016-08-19T15:32:46Z

MINOR: improve Streams application reset tool to make sure application is 
down

commit 5f2d26cfb596e5fb0db882a63e37d9811d66bdf9
Author: Matthias J. Sax 
Date:   2016-08-19T15:50:18Z

MINOR: improve Streams application reset tool to make sure application is 
down






[jira] [Closed] (KAFKA-4063) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2016-08-19 Thread Roger Hoover (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roger Hoover closed KAFKA-4063.
---

The JIRA UI was unresponsive so I accidentally submitted the form twice.

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4063
> URL: https://issues.apache.org/jira/browse/KAFKA-4063
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Roger Hoover
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can easily add two new methods, rangeUntil() and rangeFrom(), to support this.
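
A sketch of the proposed API surface (Scala; the signatures are illustrative, modeled on the existing ReadOnlyKeyValueStore.range(from, to)):

```
import org.apache.kafka.streams.state.KeyValueIterator

trait OpenEndedRangeQueries[K, V] {
  def rangeFrom(from: K): KeyValueIterator[K, V]  // case 2: from `from` to the end
  def rangeUntil(to: K): KeyValueIterator[K, V]   // case 1: the beginning up to `to`
}
```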



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4042) DistributedHerder thread can die because of connector & task lifecycle exceptions

2016-08-19 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated KAFKA-4042:
---
Fix Version/s: 0.10.1.0

> DistributedHerder thread can die because of connector & task lifecycle 
> exceptions
> -
>
> Key: KAFKA-4042
> URL: https://issues.apache.org/jira/browse/KAFKA-4042
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> As one example, there isn't exception handling in 
> {{DistributedHerder.startConnector()}} or the call-chain for it originating 
> in the {{tick()}} on the herder thread, and it can throw an exception because 
> of a bad class name in the connector config. (report of issue in wild: 
> https://groups.google.com/d/msg/confluent-platform/EnleFnXpZCU/3B_gRxsRAgAJ)
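
An illustrative Scala sketch of the hardening the ticket asks for (names are hypothetical, not Connect's actual code): lifecycle exceptions are caught and recorded so they cannot kill the herder thread:

```
import org.slf4j.LoggerFactory

object HerderSketch {
  private val log = LoggerFactory.getLogger(getClass)

  // `start` stands in for the real startConnector call-chain.
  def startConnectorSafely(connName: String)(start: String => Unit): Unit =
    try start(connName)
    catch {
      case e: Exception =>
        // Record the failure and keep the tick() loop alive instead of dying.
        log.error(s"Failed to start connector $connName; marking it FAILED", e)
    }
}
```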



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-71 Enable log compaction and deletion to co-exist

2016-08-19 Thread Grant Henke
Thanks for this KIP, Damian.

I know this vote has passed and I think it is okay as is, but I had
similar thoughts as Joel about combining compaction policies.

I'm just wondering if it makes sense to allow specifying multiple
> comma-separated policies "compact,delete" as opposed to
> "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> with more policies. The order could potentially indicate precedence.
>

Out of curiosity was the option of supporting a list of policies rejected?
Is it something we may consider adding later but didn't want to include in
the scope of this KIP?

Thanks,
Grant



On Mon, Aug 15, 2016 at 7:25 PM, Joel Koshy  wrote:

> Thanks for the proposal. I'm +1 overall with a thought somewhat related to
> Jun's comments.
>
> While there may not yet be a sensible use case for it, it should be (in
> theory) legal to have compact_and_delete with size based retention as well.
> I'm just wondering if it makes sense to allow specifying multiple
> comma-separated policies "compact,delete" as opposed to
> "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> with more policies. The order could potentially indicate precedence.
> Anyway, it is just a thought - it may end up being very confusing for
> users.
>
> @Jason - I agree this could be used to handle offset expiration as well. We
> can discuss that separately though; and if we do that we would want to also
> deprecate the retention field in the commit requests.
>
> Joel
>
> On Mon, Aug 15, 2016 at 2:07 AM, Damian Guy  wrote:
>
> > Thanks Jason.
> > The log retention.ms will be set to a value that is greater than the window
> > retention time. So as windows expire, they eventually get cleaned up by the
> > broker. It doesn't matter if old windows are around for some time beyond
> > their usefulness, more that they do eventually get removed and the log
> > doesn't grow indefinitely (as it does now).
> >
> > Damian
> >
> > On Fri, 12 Aug 2016 at 20:25 Jason Gustafson  wrote:
> >
> > > Hey Damian,
> > >
> > > That's true, but it would avoid the need to write tombstones for the
> > > expired offsets I guess. I'm actually not sure it's a great idea
> anyway.
> > > One thing we've talked about is not expiring any offsets as long as a
> > group
> > > is alive, which would require some custom expiration logic. There's
> also
> > > the fact that we'd risk expiring group metadata which is stored in the
> > same
> > > log. Having a builtin expiration mechanism may be more useful for the
> > > compacted topics we maintain in Connect, but I think there too we might
> > > need some custom logic. For example, expiring connector configs purely
> > > based on time doesn't seem like what we'd want.
> > >
> > > By the way, I wonder if you could describe the expected usage in a
> little
> > > more detail in the KIP for those of us who are not as familiar with
> Kafka
> > > Streams. Is the intention to have the log retain only the most recent
> > > window? In that case, would you always set the log retention time to
> the
> > > window length? And I suppose a consumer would do a seek to the start of
> > the
> > > window (as soon as KIP-33 is available) and consume from there in order
> > to
> > > read the current state?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Fri, Aug 12, 2016 at 8:48 AM, Damian Guy 
> > wrote:
> > >
> > > > Thanks Jun
> > > >
> > > > On Fri, 12 Aug 2016 at 16:41 Jun Rao  wrote:
> > > >
> > > > > Hi, Damian,
> > > > >
> > > > > I was just wondering if we should disable size-based retention in
> the
> > > > > compact_and_delete mode. So, it sounds like that there could be a
> use
> > > > case
> > > > > for that. Since by default, the size-based retention is infinite, I
> > > think
> > > > > we can just leave the KIP as it is.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Fri, Aug 12, 2016 at 12:10 AM, Damian Guy  >
> > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > The only concrete example I can think of is a case for limiting
> > > > > > disk usage. Say I had something like Connect running that was
> > > > > > tracking changes in a database. Downstream I don't really care about
> > > > > > every change, I just want the latest values, so compaction could be
> > > > > > enabled. However, the Kafka cluster has limited disk space, so we
> > > > > > need to limit the size of each partition. In a previous life I have
> > > > > > done the same, just without compaction turned on.
> > > > > >
> > > > > > Besides, I don't think it costs us anything in terms of added
> > > > > > complexity to enable it for both time- and size-based retention -
> > > > > > the code already does this for us.
> > > > > >
> > > > > > Thanks,
> > > > > > Damian
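
A sketch of the disk-capped use case above. These are standard topic-level
overrides; the size cap value is a made-up example:

    import java.util.Properties;

    // Change-capture topic: compaction retains the latest value per key,
    // while a size cap bounds disk usage per partition.
    Properties topicConfig = new Properties();
    topicConfig.put("cleanup.policy", "compact,delete");          // per this KIP
    topicConfig.put("retention.bytes", Long.toString(10L << 30)); // ~10 GiB per partition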
> > > > > >
> > > 

Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2016-08-19 Thread Ismael Juma
Thanks for the KIP. +1 (binding) with the following suggestion:

Fetch Request (Version: 3) => replica_id max_wait_time min_bytes response_max_bytes [topics]
  replica_id => INT32
  max_wait_time => INT32
  min_bytes => INT32
  response_max_bytes => INT32
  topics => topic [partitions]
    topic => STRING
    partitions => partition fetch_offset max_bytes
      partition => INT32
      fetch_offset => INT64
      max_bytes => INT32


I think "response_max_bytes" should be called "max_bytes". That way
it's consistent with "min_bytes" (which is also a response-level
property).

I understand the desire to differentiate it from the "max_bytes"
passed with each partition, but I think it's fine to rely on the
context (containing struct) for that.


Ismael
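
For reference, a minimal sketch of the resulting consumer configuration. The
fetch.max.bytes name follows the rename noted later in this thread; the broker
address and the limits are made-up values:

    import java.util.Properties;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");  // hypothetical
    props.put("group.id", "example-group");            // hypothetical
    // New response-level cap from KIP-74:
    props.put("fetch.max.bytes", "52428800");          // 50 MB for the whole fetch response
    // Existing per-partition cap (the per-partition max_bytes in the schema):
    props.put("max.partition.fetch.bytes", "1048576"); // 1 MB per partition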



On Fri, Aug 19, 2016 at 1:47 PM, Tom Crayford  wrote:

> +1 (non binding)
>
> On Fri, Aug 19, 2016 at 6:20 AM, Manikumar Reddy <
> manikumar.re...@gmail.com>
> wrote:
>
> > +1 (non-binding)
> >
> > This feature helps us control the memory footprint and allows the consumer
> > to make progress when fetching large messages.
> >
> > On Fri, Aug 19, 2016 at 10:32 AM, Gwen Shapira 
> wrote:
> >
> > > +1 (binding)
> > >
> > > On Thu, Aug 18, 2016 at 1:47 PM, Andrey L. Neporada
> > >  wrote:
> > > > Hi all!
> > > > I’ve modified KIP-74 a little bit (as requested by Jason Gustafson &
> > > > Jun Rao):
> > > > 1) provided a more detailed explanation of memory usage (no functional
> > > > changes)
> > > > 2) renamed “fetch.response.max.bytes” -> “fetch.max.bytes”
> > > >
> > > > Let’s continue voting in this thread.
> > > >
> > > > Thanks!
> > > > Andrey.
> > > >
> > > >> On 17 Aug 2016, at 00:02, Jun Rao  wrote:
> > > >>
> > > >> Andrey,
> > > >>
> > > >> Thanks for the KIP. +1
> > > >>
> > > >> Jun
> > > >>
> > > >> On Tue, Aug 16, 2016 at 1:32 PM, Andrey L. Neporada <
> > > >> anepor...@yandex-team.ru> wrote:
> > > >>
> > > >>> Hi!
> > > >>>
> > > >>> I would like to initiate the voting process for KIP-74:
> > > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >>> 74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
> > > >>>
> > > >>>
> > > >>> Thanks,
> > > >>> Andrey.
> > > >
> > >
> > >
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk7 #1484

2016-08-19 Thread Apache Jenkins Server
See 



Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2016-08-19 Thread Tom Crayford
+1 (non binding)

On Fri, Aug 19, 2016 at 6:20 AM, Manikumar Reddy 
wrote:

> +1 (non-binding)
>
> This feature helps us control the memory footprint and allows the consumer
> to make progress when fetching large messages.
>
> On Fri, Aug 19, 2016 at 10:32 AM, Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > On Thu, Aug 18, 2016 at 1:47 PM, Andrey L. Neporada
> >  wrote:
> > > Hi all!
> > > I’ve modified KIP-74 a little bit (as requested by Jason Gustafson &
> > > Jun Rao):
> > > 1) provided a more detailed explanation of memory usage (no functional
> > > changes)
> > > 2) renamed “fetch.response.max.bytes” -> “fetch.max.bytes”
> > >
> > > Let’s continue voting in this thread.
> > >
> > > Thanks!
> > > Andrey.
> > >
> > >> On 17 Aug 2016, at 00:02, Jun Rao  wrote:
> > >>
> > >> Andrey,
> > >>
> > >> Thanks for the KIP. +1
> > >>
> > >> Jun
> > >>
> > >> On Tue, Aug 16, 2016 at 1:32 PM, Andrey L. Neporada <
> > >> anepor...@yandex-team.ru> wrote:
> > >>
> > >>> Hi!
> > >>>
> > >>> I would like to initiate the voting process for KIP-74:
> > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> 74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
> > >>>
> > >>>
> > >>> Thanks,
> > >>> Andrey.
> > >
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


[jira] [Commented] (KAFKA-4066) NullPointerException in Kafka consumer due to unsafe access to findCoordinatorFuture

2016-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427988#comment-15427988
 ] 

ASF GitHub Bot commented on KAFKA-4066:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1763

KAFKA-4066: Fix NPE in consumer due to multi-threaded updates



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4066

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1763.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1763


commit 2a51b691b1f340ca493b7f96a711fd7aa6baad12
Author: Rajini Sivaram 
Date:   2016-08-19T10:40:53Z

KAFKA-4066: Fix NPE in consumer due to multi-threaded updates




> NullPointerException in Kafka consumer due to unsafe access to 
> findCoordinatorFuture
> 
>
> Key: KAFKA-4066
> URL: https://issues.apache.org/jira/browse/KAFKA-4066
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> {quote}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:164)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:245)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:993)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:959)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}
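
The stack trace points at a classic check-then-act race on a field shared
across threads. A hedged illustration of the pattern involved (not the actual
patch; the field type follows the consumer internals as I understand them):

    // Unsafe: another thread may null the field between the check and the use:
    //     if (findCoordinatorFuture != null && findCoordinatorFuture.isDone()) ...
    // Safe: read the field exactly once into a local variable.
    RequestFuture<Void> future = this.findCoordinatorFuture;
    if (future != null && future.isDone()) {
        // 'future' cannot become null here, even if the field is cleared concurrently
    }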





[GitHub] kafka pull request #1763: KAFKA-4066: Fix NPE in consumer due to multi-threa...

2016-08-19 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1763

KAFKA-4066: Fix NPE in consumer due to multi-threaded updates



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4066

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1763.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1763


commit 2a51b691b1f340ca493b7f96a711fd7aa6baad12
Author: Rajini Sivaram 
Date:   2016-08-19T10:40:53Z

KAFKA-4066: Fix NPE in consumer due to multi-threaded updates






[jira] [Created] (KAFKA-4066) NullPointerException in Kafka consumer due to unsafe access to findCoordinatorFuture

2016-08-19 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4066:
-

 Summary: NullPointerException in Kafka consumer due to unsafe 
access to findCoordinatorFuture
 Key: KAFKA-4066
 URL: https://issues.apache.org/jira/browse/KAFKA-4066
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 0.10.1.0


{quote}
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:164)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:245)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:993)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:959)
at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{quote}





[jira] [Created] (KAFKA-4065) Property missing in ProducerConfig.java - KafkaProducer API 0.9.0.0

2016-08-19 Thread manzar (JIRA)
manzar created KAFKA-4065:
-

 Summary: Property missing in ProducerConfig.java - KafkaProducer 
API 0.9.0.0
 Key: KAFKA-4065
 URL: https://issues.apache.org/jira/browse/KAFKA-4065
 Project: Kafka
  Issue Type: Bug
Reporter: manzar


1 ) "compressed.topics" property is missing in ProducerConfig.java in 
KafkaProducer API 0.9.0.0.

2) "compression.type" property is there in ProducerConfig.java that was 
expected to be "compression.codec" according to official document.
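
On point 2, the new Java producer (org.apache.kafka.clients.producer) does use
"compression.type"; "compression.codec" was the old Scala producer's property.
A minimal sketch (the broker address is hypothetical):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // hypothetical
    props.put("compression.type", "gzip");            // valid: none, gzip, snappy, lz4
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");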





[DISCUSS] KIP-76: Improve Kafka Streams Join Semantics

2016-08-19 Thread Matthias J. Sax
Hi,

we have created KIP-76: Improve Kafka Streams Join Semantics

https://cwiki.apache.org/confluence/display/KAFKA/KIP-76%3A+Improve+Kafka+Streams+Join+Semantics

Please give feedback. Thanks.


-Matthias





Re: [ANNOUNCE] New Kafka PMC member Gwen Shapira

2016-08-19 Thread Matthias J. Sax
Congrats Gwen! :)

On 08/18/2016 08:31 PM, Harsha Chintalapani wrote:
> Congrats Gwen.
> -Harsha
> 
> On Thu, Aug 18, 2016 at 10:59 AM Mayuresh Gharat 
> wrote:
> 
>> Congrats Gwen :)
>>
>> Thanks,
>>
>> Mayuresh
>>
>> On Thu, Aug 18, 2016 at 10:27 AM, Gwen Shapira  wrote:
>>
>>> Thanks team Kafka :) Very excited and happy to contribute and be part
>>> of this fantastic community.
>>>
>>>
>>>
>>> On Thu, Aug 18, 2016 at 9:52 AM, Guozhang Wang 
>> wrote:
 Congrats Gwen!

 On Thu, Aug 18, 2016 at 9:27 AM, Ashish Singh 
>>> wrote:

> Congrats Gwen!
>
> On Thursday, August 18, 2016, Grant Henke 
>> wrote:
>
>> Congratulations Gwen!
>>
>>
>>
>> On Thu, Aug 18, 2016 at 2:36 AM, Ismael Juma > > wrote:
>>
>>> Congratulations Gwen! Great news.
>>>
>>> Ismael
>>>
>>> On 18 Aug 2016 2:44 am, "Jun Rao" > >
>> wrote:
>>>
 Hi, Everyone,

 Gwen Shapira has been active in the Kafka community since she
>>> became
> a
 Kafka committer
 about a year ago. I am glad to announce that Gwen is now a
>> member
>>> of
>>> Kafka
 PMC.

 Congratulations, Gwen!

 Jun

>>>
>>
>>
>>
>> --
>> Grant Henke
>> Software Engineer | Cloudera
>> gr...@cloudera.com  | twitter.com/gchenke |
>> linkedin.com/in/granthenke
>>
>
>
> --
> Ashish h
>



 --
 -- Guozhang
>>>
>>>
>>>
>>> --
>>> Gwen Shapira
>>> Product Manager | Confluent
>>> 650.450.2760 | @gwenshap
>>> Follow us: Twitter | blog
>>>
>>
>>
>>
>> --
>> -Regards,
>> Mayuresh R. Gharat
>> (862) 250-7125
>>
> 





Re: [DISCUSS] KIP-55: Secure quotas for authenticated users

2016-08-19 Thread Rajini Sivaram
The PR for this KIP is ready for review. JIRA is
https://issues.apache.org/jira/browse/KAFKA-3492, PR is
https://github.com/apache/kafka/pull/1753.

Thanks,

Rajini

On Tue, Aug 9, 2016 at 1:06 PM, Rajini Sivaram  wrote:

> Hi Tom,
>
> Have updated the KIP wiki. Will submit a PR later this week.
>
> Regards,
>
> Rajini
>
> On Tue, Aug 9, 2016 at 12:30 PM, Tom Crayford 
> wrote:
>
>> Seeing as voting passed on this, can somebody with access update the wiki?
>>
>> Is there code for this KIP in a PR somewhere that needs merging?
>>
>> Thanks
>> Tom Crayford
>> Heroku Kafka
>>
>> On Friday, 1 July 2016, Rajini Sivaram 
>> wrote:
>>
>> > Thank you, Jun.
>> >
>> > Hi all,
>> >
>> > Please let me know if you have any comments or suggestions on the
>> updated
>> > KIP. If there are no objections, I will initiate voting next week.
>> >
>> > Thank you...
>> >
>> >
>> > On Thu, Jun 30, 2016 at 10:37 PM, Jun Rao > >
>> > wrote:
>> >
>> > > Rajini,
>> > >
>> > > The latest wiki looks good to me. Perhaps you want to ask other
>> people to
>> > > also take a look and then we can start the voting.
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Tue, Jun 28, 2016 at 6:27 AM, Rajini Sivaram <
>> > > rajinisiva...@googlemail.com > wrote:
>> > >
>> > > > Jun,
>> > > >
>> > > > Thank you for the review. I have changed all default property configs
>> > > > to be stored with the node name <default>. So the defaults are
>> > > > /config/clients/<default> for default client-id quota,
>> > > > /config/users/<default> for default user quota and
>> > > > /config/users/<default>/clients/<default> for default <user, client-id>
>> > > > quota. Hope that makes sense.
>> > > >
>> > > > On Mon, Jun 27, 2016 at 10:25 PM, Jun Rao > > > wrote:
>> > > >
>> > > > > Rajini,
>> > > > >
>> > > > > Thanks for the update. Looks good to me. My only comment is that
>> > > > > instead of /config/users/<user>/clients,
>> > > > > would it be better to represent it as
>> > > > > /config/users/<user>/clients/<default>
>> > > > > so that it's more consistent?
>> > > > >
>> > > > > Jun
>> > > > >
>> > > > >
>> > > > > On Thu, Jun 23, 2016 at 2:16 PM, Rajini Sivaram <
>> > > > > rajinisiva...@googlemail.com > wrote:
>> > > > >
>> > > > > > Jun,
>> > > > > >
>> > > > > > Yes, I agree that it makes sense to retain the existing
>> semantics
>> > for
>> > > > > > client-id quotas for compatibility. Especially if we can provide
>> > the
>> > > > > option
>> > > > > > to enable secure client-id quotas for multi-user clusters as
>> well.
>> > > > > >
>> > > > > > I have updated the KIP - each of these levels can have defaults
>> as
>> > > well
>> > > > > as
>> > > > > > specific entries:
>> > > > > >
>> > > > > >- /config/clients : Insecure <client-id> quotas with the same
>> > > > > >semantics as now
>> > > > > >- /config/users: User quotas
>> > > > > >- /config/users/userA/clients: <userA, client-id> quotas for userA
>> > > > > >- /config/users/<default>/clients: Default <user, client-id> quotas
>> > > > > >
>> > > > > > Now it is fully flexible as well as compatible with the current
>> > > > > > implementation. I used /config/users/<default>/clients rather than
>> > > > > > /config/users/clients since "clients" is a valid (unlikely, but
>> > > > > > still possible) user principal. I used <default>, but it could be
>> > > > > > anything that is a valid Zookeeper node name, but not a valid
>> > > > > > URL-encoded name.
>> > > > > >
>> > > > > > Thank you,
>> > > > > >
>> > > > > > Rajini
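
Putting Rajini's levels together, a hedged illustration of the resulting
ZooKeeper layout (userA and clientA are made-up names):

    /config/clients/<default>                  default client-id quota (existing semantics)
    /config/clients/clientA                    override for client-id clientA
    /config/users/<default>                    default user quota
    /config/users/userA                        override for user userA
    /config/users/userA/clients/clientA        <userA, clientA> quota
    /config/users/<default>/clients/<default>  default <user, client-id> quota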
>> > > > > >
>> > > > > > On Thu, Jun 23, 2016 at 3:43 PM, Jun Rao > > > wrote:
>> > > > > >
>> > > > > > > Hi, Rajini,
>> > > > > > >
>> > > > > > > For the following statements, would it be better to allocate the
>> > > > > > > quota to all connections whose client-id is clientX? This way,
>> > > > > > > existing client-id quotas are fully compatible in the new release
>> > > > > > > whether the cluster is in a single-user or multi-user environment.
>> > > > > > >
>> > > > > > > 4. If a client-id quota override is defined for clientX in
>> > > > > > > /config/clients/clientX, this quota is allocated for the sole use
>> > > > > > > of <user, clientX>
>> > > > > > > 5. If a dynamic client-id default is configured in /config/clients,
>> > > > > > > this default quota is allocated for the sole use of <user, clientX>
>> > > > > > > 6. If quota.producer.default is configured for the broker in
>> > > > > > > server.properties, this default quota is allocated for the sole
>> > > > > > > use of <user, clientX>
>> > > > > > >
>> > > > > > > We can potentially add a default quota for both user and client at
>> > > > > > > the path /config/users/<default>/clients/<default>?
>> > > 

[jira] [Work started] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-08-19 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4058 started by Matthias J. Sax.
--
> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[jira] [Commented] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-19 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427840#comment-15427840
 ] 

Rajini Sivaram commented on KAFKA-4051:
---

[~ijuma] [~gwenshap] Thank you both for the feedback. I will try it out 
locally, and run performance tests. I can test on Linux and Mac.
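
For context, a minimal sketch of the monotonic-clock approach presumably under
test here (an assumption on my part): elapsed-time checks based on
System.nanoTime() are unaffected by setting the OS clock back, unlike
System.currentTimeMillis():

    import java.util.concurrent.TimeUnit;

    // A session-timeout style deadline that survives wall-clock changes:
    long deadlineNanos = System.nanoTime() + TimeUnit.SECONDS.toNanos(30); // hypothetical timeout
    boolean expired = System.nanoTime() - deadlineNanos >= 0;              // overflow-safe comparison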

> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
>
> If a rebalance is performed after turning the OS clock back, the Kafka
> server enters a loop and the rebalance cannot be completed until the system
> clock returns to the previous date/time.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be owner 
> of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME using the same group id; it will force
> a rebalance.
> After these actions, the Kafka server logs constantly display the messages
> below, and after a while both consumers stop receiving messages. This
> condition lasts at least as long as the clock was turned back (1 hour in this
> example); after that time Kafka comes back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Because some systems may have NTP enabled, or an administrator may change
> the system clock (date/time), it is important to handle this safely.
> Currently the only safe procedure is:
> 1- Shut down the Kafka server.
> 2- Change the date/time.
> 3- Start the Kafka server.
> But this approach only works when the change is made by an administrator, not
> by NTP. Also, on many systems shutting down the Kafka server might cause
> information to be lost.





Jenkins build is back to normal : kafka-trunk-jdk8 #823

2016-08-19 Thread Apache Jenkins Server
See 



[VOTE] KIP-73 - Replication Quotas

2016-08-19 Thread Ben Stopford
I’d like to initiate the voting process for KIP-73:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas 


Ben

[GitHub] kafka pull request #1745: KAFKA-4042: prevent DistributedHerder thread from ...

2016-08-19 Thread shikhar
GitHub user shikhar reopened a pull request:

https://github.com/apache/kafka/pull/1745

KAFKA-4042: prevent DistributedHerder thread from dying from connector/task 
lifecycle exceptions

- `worker.startConnector()` and `worker.startTask()` can throw (e.g. 
`ClassNotFoundException`, `ConnectException`, or any other exception arising 
from the constructor of the connector or task class when we `newInstance()`), 
so add catch blocks around those calls from the `DistributedHerder` and handle 
by invoking `onFailure()` which updates the `StatusBackingStore`.
- `worker.stopConnector()` throws `ConnectException` if start failed 
causing the connector to not be registered with the worker, so guard with 
`worker.ownsConnector()`
- `worker.stopTasks()` and `worker.awaitStopTasks()` throw 
`ConnectException` if any of them failed to start and are hence not registered 
with the worker, so guard those calls by filtering the task IDs with 
`worker.ownsTask()`
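
A hedged sketch of the guarding pattern described above; the method names
follow the PR text, but the signatures are simplified and the surrounding
herder state (connName, onFailure) is assumed:

    try {
        worker.startConnector(connName);  // may throw, e.g. ClassNotFoundException
    } catch (Throwable t) {
        onFailure(connName, t);           // record FAILED via the StatusBackingStore;
    }                                     // the herder thread keeps running

    if (worker.ownsConnector(connName)) { // only stop what actually started
        worker.stopConnector(connName);
    }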

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka distherder-stayup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1745.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1745


commit 4bb02e610b01d7b425f5c39b435d4d7484b89ee9
Author: Shikhar Bhushan 
Date:   2016-08-17T23:29:30Z

KAFKA-4042: prevent `DistributedHerder` thread from dying from 
connector/task lifecycle exceptions

- `worker.startConnector()` and `worker.startTask()` can throw (e.g. 
`ClassNotFoundException`, `ConnectException`), so add catch blocks around those 
calls from the `DistributedHerder` and handle by invoking `onFailure()` which 
updates the `StatusBackingStore`.
- `worker.stopConnector()` throws `ConnectException` if start failed 
causing the connector to not be registered with the worker, so guard with 
`worker.ownsConnector()`
- `worker.stopTasks()` and `worker.awaitStopTasks()` throw 
`ConnectException` if any of them failed to start and are hence not registered 
with the worker, so guard those calls by filtering the task IDs with 
`worker.ownsTask()`






[GitHub] kafka pull request #1745: KAFKA-4042: prevent DistributedHerder thread from ...

2016-08-19 Thread shikhar
Github user shikhar closed the pull request at:

https://github.com/apache/kafka/pull/1745




[jira] [Resolved] (KAFKA-3204) ConsumerConnector blocked on Authenticated by SASL Failed.

2016-08-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-3204.

Resolution: Won't Fix

> ConsumerConnector blocked on Authenticated by SASL Failed.
> --
>
> Key: KAFKA-3204
> URL: https://issues.apache.org/jira/browse/KAFKA-3204
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.0
>Reporter: shenyuan wang
>
> We set up Kafka to use ZooKeeper authentication and tested authentication
> failures. The test program blocked, repeatedly retrying the ZooKeeper
> connection. The log repeats entries like this:
> 2016-02-04 17:24:47 INFO  FourLetterWordMain:46 - connecting to 10.75.202.42 
> 24002
> 2016-02-04 17:24:47 INFO  ClientCnxn:1211 - Opening socket connection to 
> server 10.75.202.42/10.75.202.42:24002. Will not attempt to authenticate 
> using SASL (unknown error)
> 2016-02-04 17:24:47 INFO  ClientCnxn:981 - Socket connection established, 
> initiating session, client: /10.61.22.215:56060, server: 
> 10.75.202.42/10.75.202.42:24002
> 2016-02-04 17:24:47 INFO  ClientCnxn:1472 - Session establishment complete on 
> server 10.75.202.42/10.75.202.42:24002, sessionid = 0xd013ed38238d1aa, 
> negotiated timeout = 4000
> 2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed 
> (SyncConnected)
> 2016-02-04 17:24:47 INFO  ClientCnxn:1326 - Unable to read additional data 
> from server sessionid 0xd013ed38238d1aa, likely server has closed socket, 
> closing socket connection and attempting reconnect
> 2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed 
> (Disconnected)
> 2016-02-04 17:24:47 INFO  ZkClient:934 - Waiting for keeper state 
> SyncConnected


