Build failed in Jenkins: kafka-trunk-jdk7 #1505

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix ProcessorTopologyTestDriver to support multiple source topics

[me] KAFKA-4042: Contain connector & task start/stop failures within the

--
[...truncated 12237 lines...]
org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testStickiness STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleMultiSourceTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleMultiSourceTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldMapUserEndPointToTopicPartitions PASSED


Re: Batch Expired

2016-08-26 Thread R Krishna
Are any requests at all making it? That is a pretty big timeout.

However, I noticed that even if no connections are made to the broker, you can
still get batch expiry.


On Fri, Aug 26, 2016 at 6:32 AM, Ghosh, Achintya (Contractor) <achintya_gh...@comcast.com> wrote:

> Hi there,
>
> What is the recommended producer setting, as I see a lot of Batch Expired
> exceptions even though I put request.timeout=6.
>
> Producer settings:
> acks=1
> retries=3
> batch.size=16384
> linger.ms=5
> buffer.memory=33554432
> request.timeout.ms=6
> timeout.ms=6
>
> Thanks
> Achintya
>



-- 
Radha Krishna, Proddaturi
253-234-5657
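

For context, a minimal sketch (not from the thread itself) of how the quoted
settings map onto the Java producer client. The broker address and topic name
are assumptions, and the timeout values are assumed to be 60000 ms since the
quoted "6" appears truncated:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSettingsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Settings quoted in the thread:
        props.put("acks", "1");
        props.put("retries", "3");
        props.put("batch.size", "16384");
        props.put("linger.ms", "5");
        props.put("buffer.memory", "33554432");
        props.put("request.timeout.ms", "60000"); // assumed full value

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A batch can expire when it sits in the accumulator longer than
            // request.timeout.ms without being sent, e.g. when no broker
            // connection can be established.
            producer.send(new ProducerRecord<>("test", "key", "value"));
        }
    }
}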


[jira] [Resolved] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-73.
--
Resolution: Duplicate

> SyncProducer sends messages to invalid partitions without complaint
> ---
>
> Key: KAFKA-73
> URL: https://issues.apache.org/jira/browse/KAFKA-73
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
> Environment: Mac OSX 10.6.7
>Reporter: Jonathan Herman
>Assignee: Dru Panchal
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SyncProducer class will send messages to invalid partitions without 
> throwing an exception or otherwise alerting the user.
> Reproduction:
> Run the kafka-simple-consumer-shell.sh script with an invalid partition 
> number. An exception will be thrown and displayed. Run the 
> kafka-producer-shell.sh with the same partition number. You will be able to 
> send messages without any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440101#comment-15440101
 ] 

Dru Panchal commented on KAFKA-73:
--

Marking this JIRA closed as the bug was solved with KAFKA-49.

> SyncProducer sends messages to invalid partitions without complaint
> ---
>
> Key: KAFKA-73
> URL: https://issues.apache.org/jira/browse/KAFKA-73
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
> Environment: Mac OSX 10.6.7
>Reporter: Jonathan Herman
>Assignee: Dru Panchal
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SyncProducer class will send messages to invalid partitions without 
> throwing an exception or otherwise alerting the user.
> Reproduction:
> Run the kafka-simple-consumer-shell.sh script with an invalid partition 
> number. An exception will be thrown and displayed. Run the 
> kafka-producer-shell.sh with the same partition number. You will be able to 
> send messages without any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-73:


Assignee: Dru Panchal

> SyncProducer sends messages to invalid partitions without complaint
> ---
>
> Key: KAFKA-73
> URL: https://issues.apache.org/jira/browse/KAFKA-73
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
> Environment: Mac OSX 10.6.7
>Reporter: Jonathan Herman
>Assignee: Dru Panchal
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SyncProducer class will send messages to invalid partitions without 
> throwing an exception or otherwise alerting the user.
> Reproduction:
> Run the kafka-simple-consumer-shell.sh script with an invalid partition 
> number. An exception will be thrown and displayed. Run the 
> kafka-producer-shell.sh with the same partition number. You will be able to 
> send messages without any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-156.
---
   Resolution: Duplicate
Fix Version/s: 0.10.1.0

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>Assignee: Dru Panchal
> Fix For: 0.10.1.0
>
>
> When none of the brokers are available, the producer should spool the messages
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440079#comment-15440079
 ] 

Dru Panchal commented on KAFKA-156:
---

This JIRA is duplicated by KAFKA-789 which has provided the requested solution 
in Kafka 0.10.1.0. Marking this JIRA resolved.

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>
> When none of the brokers are available, the producer should spool the messages
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-156:
-

Assignee: Dru Panchal

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>Assignee: Dru Panchal
>
> When none of the brokers are available, the producer should spool the messages
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-2.
-
Resolution: Information Provided  (was: Unresolved)

> a restful producer API
> --
>
> Key: KAFKA-2
> URL: https://issues.apache.org/jira/browse/KAFKA-2
> Project: Kafka
>  Issue Type: Improvement
>Assignee: Dru Panchal
>Priority: Minor
>
> If the Kafka server supports a RESTful producer API, we can use Kafka from any
> programming language without implementing the wire protocol in each language.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-2:
---

Assignee: Dru Panchal

> a restful producer API
> --
>
> Key: KAFKA-2
> URL: https://issues.apache.org/jira/browse/KAFKA-2
> Project: Kafka
>  Issue Type: Improvement
>Assignee: Dru Panchal
>Priority: Minor
>
> If the Kafka server supports a RESTful producer API, we can use Kafka from any
> programming language without implementing the wire protocol in each language.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440061#comment-15440061
 ] 

Dru Panchal commented on KAFKA-2:
-

[~anandriyer] I would like to resolve this JIRA because the Kafka REST Proxy 
covers your desired use case.

http://docs.confluent.io/2.0.0/kafka-rest/docs/index.html
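
As an illustration (not part of the original comment), a minimal sketch of
producing a message through the REST Proxy from plain Java, assuming a proxy
running on localhost:8082, a topic named "test", and the v1 JSON embedded
format described in the docs linked above:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestProxyProduceExample {
    public static void main(String[] args) throws Exception {
        // Assumption: REST Proxy on localhost:8082 and an existing topic "test".
        URL url = new URL("http://localhost:8082/topics/test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");
        conn.setDoOutput(true);
        // One record with a JSON value; no wire-protocol client needed.
        String body = "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 200 on success
    }
}
{code}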


> a restful producer API
> --
>
> Key: KAFKA-2
> URL: https://issues.apache.org/jira/browse/KAFKA-2
> Project: Kafka
>  Issue Type: Improvement
>Priority: Minor
>
> If the Kafka server supports a RESTful producer API, we can use Kafka from any
> programming language without implementing the wire protocol in each language.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3094) Kafka process 100% CPU when no message in topic

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440050#comment-15440050
 ] 

Dru Panchal commented on KAFKA-3094:


[~oazabir] Do you still experience this issue, or have you been able to resolve
it? If so, kindly share the solution, as it may help others running into
similar problems.

> Kafka process 100% CPU when no message in topic
> ---
>
> Key: KAFKA-3094
> URL: https://issues.apache.org/jira/browse/KAFKA-3094
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Omar AL Zabir
>
> When there's no message in a Kafka topic and it is not getting any traffic
> for some time, all the Kafka nodes go to 100% CPU.
> As soon as I post a message, the CPU comes back to normal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #849

2016-08-26 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1504

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Include request header in exception when correlation of

--
[...truncated 12219 lines...]
org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testStickiness STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > shouldAddUserDefinedEndPointToSubscription STARTED


[jira] [Commented] (KAFKA-4020) Kafka consumer stop taking messages from kafka server

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440016#comment-15440016
 ] 

Dru Panchal commented on KAFKA-4020:


[~shawnhe] What stopped working? Please provide more details.
Did the consumer process crash or shut down, or did the consumer simply stop
receiving messages?

If it's the latter, can you provide a thread dump of the consumer process
for further analysis?
See this link on how to create a thread dump:
https://visualvm.java.net/threads.html
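
As an aside, a minimal sketch of capturing a thread dump from inside the JVM
with the standard management API, for cases where attaching VisualVM is not
convenient; the class name is illustrative:

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpExample {
    public static void dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Arguments (true, true) include lock and synchronizer information,
        // which helps show what a blocked consumer thread is waiting on.
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info.toString());
        }
    }
}
{code}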


> Kafka consumer stop taking messages from kafka server
> -
>
> Key: KAFKA-4020
> URL: https://issues.apache.org/jira/browse/KAFKA-4020
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Shawn He
>
> It feels similar to KAFKA-2978, even though I haven't verified whether it is
> caused by the same events. How do I check that?
> I have a client that works fine using Kafka 0.8.2.1 and can run for months
> without any issue. However, after I upgraded to Kafka 0.10.0.0, it is very
> repeatable that the client works for the first 4 hours and then stops
> working. The producer side has no issue, as the data still comes in to the
> Kafka server.
> I was using the Java library kafka.consumer.Consumer.createJavaConsumerConnector
> and the kafka.consumer.KafkaStream class for access to the Kafka server.
> Any help is appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4026) consumer block

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439982#comment-15439982
 ] 

Dru Panchal commented on KAFKA-4026:


[~imperio]
So far I've tried setting up a 0.8.1.1 environment and used the high-level API
to create a consumer. Using the console producer I published a few messages and,
as expected, they were immediately picked up by my consumer, so I am unable to
reproduce the behavior you described.

Please provide sample code that reproduces your problem along with any
broker/client setting overrides you use.
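
For what it's worth, one way to bound how long the 0.8.x high-level consumer
blocks is the consumer.timeout.ms setting, which makes the stream iterator
throw instead of blocking forever. A minimal sketch, assuming a local
ZooKeeper, a topic named "test", and an illustrative group id:

{code}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class BlockingConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumption: local ZooKeeper
        props.put("group.id", "example-group");           // illustrative group id
        props.put("consumer.timeout.ms", "5000");         // give up after 5s instead of blocking

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test", 1));
        try {
            for (MessageAndMetadata<byte[], byte[]> msg : streams.get("test").get(0)) {
                System.out.println(new String(msg.message()));
            }
        } catch (ConsumerTimeoutException e) {
            // No message arrived within consumer.timeout.ms; the partially
            // filled buffer can be flushed/handled here.
        } finally {
            connector.shutdown();
        }
    }
}
{code}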

> consumer block
> --
>
> Key: KAFKA-4026
> URL: https://issues.apache.org/jira/browse/KAFKA-4026
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Affects Versions: 0.8.1.1
> Environment: ubuntu 14.04
>Reporter: wxmimperio
> Fix For: 0.8.1.2
>
>
> when i use high level api make consumer. it is a block consumer,how can i 
> know the time of blocked?I put messages into a buffer.It did not reach the 
> buffer length the consumer  blocked,the buffer can not be handled.How can i 
> deal with this problem?The buffer did not reach the buffer length,I can 
> handled the buffer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #196

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Include request header in exception when correlation of

--
[...truncated 5370 lines...]
org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testRefreshOffsetNotCoordinatorForConsumer PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testCommitOffsetMetadataTooLarge PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testCommitOffsetAsyncDisconnected PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testAutoCommitDynamicAssignment PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testLeaveGroupOnClose PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testRejoinGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testAutoCommitManualAssignment PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testNormalJoinGroupLeader PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testJoinGroupInvalidGroupId PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testCommitOffsetIllegalGeneration PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > testNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > serializeDeserializeNullSubscriptionUserData PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > deserializeNewSubscriptionVersion PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > serializeDeserializeMetadata PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > deserializeNullAssignmentUserData PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > serializeDeserializeAssignment PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > deserializeNewAssignmentVersion PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUpdateFetchPositionResetToLatestOffset PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchUnknownTopicOrPartition PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchOffsetOutOfRange PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchedRecordsRaisesOnSerializationErrors PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testQuotaMetrics PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchMaxPollRecords PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUpdateFetchPositionDisconnect PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchRecordTooLarge PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetTopicMetadataInvalidTopic PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUpdateFetchPositionResetToDefaultOffset PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchNonContinuousRecords PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchNormal PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetAllTopicsTimeout PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUnauthorizedTopic PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetAllTopicsUnauthorized PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchNotLeaderForPartition PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testParseInvalidRecord PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchedRecordsAfterSeek PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchOffsetOutOfRangeException PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchOnPausedPartition PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchDuringRebalance PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetAllTopics PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUpdateFetchPositionResetToEarliestOffset PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testStaleOutOfRangeError PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testInFlightFetchOnPausedPartition PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testFetchDisconnected PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetAllTopicsDisconnect PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testGetTopicMetadataLeaderNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > testUpdateFetchPositionToCommitted PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #848

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix ProcessorTopologyTestDriver to support multiple source topics

--
[...truncated 7298 lines...]

kafka.log.LogTest > shouldNotDeleteSegmentsWhenPolicyDoesNotIncludeDelete STARTED

kafka.log.LogTest > shouldNotDeleteSegmentsWhenPolicyDoesNotIncludeDelete PASSED

kafka.log.LogTest > testReadOutOfRange STARTED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException STARTED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete STARTED

kafka.log.LogTest > shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete PASSED

kafka.log.LogTest > testReadAtLogGap STARTED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll STARTED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog STARTED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck STARTED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation STARTED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints STARTED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset STARTED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testDeleteOldSegmentsMethod STARTED

kafka.log.LogTest > testDeleteOldSegmentsMethod PASSED

kafka.log.LogTest > shouldDeleteSizeBasedSegments STARTED

kafka.log.LogTest > shouldDeleteSizeBasedSegments PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull STARTED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted STARTED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages STARTED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize STARTED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets STARTED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > 

[jira] [Resolved] (KAFKA-4042) DistributedHerder thread can die because of connector & task lifecycle exceptions

2016-08-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4042.
--
Resolution: Fixed

Issue resolved by pull request 1778
[https://github.com/apache/kafka/pull/1778]

> DistributedHerder thread can die because of connector & task lifecycle 
> exceptions
> -
>
> Key: KAFKA-4042
> URL: https://issues.apache.org/jira/browse/KAFKA-4042
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> As one example, there isn't exception handling in 
> {{DistributedHerder.startConnector()}} or the call-chain for it originating 
> in the {{tick()}} on the herder thread, and it can throw an exception because 
> of a bad class name in the connector config. (report of issue in wild: 
> https://groups.google.com/d/msg/confluent-platform/EnleFnXpZCU/3B_gRxsRAgAJ)
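
For illustration only (this is not the merged patch itself): the shape of the
fix is to contain lifecycle exceptions around connector start/stop so the
herder's tick thread survives. The wrapper and the onFailure() helper below
are hypothetical:

{code}
// Hypothetical sketch: wrap connector startup so an exception marks the
// connector FAILED instead of killing the DistributedHerder thread.
private void startConnectorSafely(String connName) {
    try {
        startConnector(connName); // may throw, e.g. on a bad connector class name
    } catch (Throwable t) {
        log.error("Failed to start connector {}", connName, t);
        onFailure(connName, t);   // hypothetical: record FAILED status for the connector
    }
}
{code}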



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4042) DistributedHerder thread can die because of connector & task lifecycle exceptions

2016-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439847#comment-15439847
 ] 

ASF GitHub Bot commented on KAFKA-4042:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1778


> DistributedHerder thread can die because of connector & task lifecycle 
> exceptions
> -
>
> Key: KAFKA-4042
> URL: https://issues.apache.org/jira/browse/KAFKA-4042
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> As one example, there isn't exception handling in 
> {{DistributedHerder.startConnector()}} or the call-chain for it originating 
> in the {{tick()}} on the herder thread, and it can throw an exception because 
> of a bad class name in the connector config. (report of issue in wild: 
> https://groups.google.com/d/msg/confluent-platform/EnleFnXpZCU/3B_gRxsRAgAJ)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1778: KAFKA-4042: Contain connector & task start/stop fa...

2016-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1778


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1981) Make log compaction point configurable

2016-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439828#comment-15439828
 ] 

ASF GitHub Bot commented on KAFKA-1981:
---

Github user ewasserman closed the pull request at:

https://github.com/apache/kafka/pull/1494


> Make log compaction point configurable
> --
>
> Key: KAFKA-1981
> URL: https://issues.apache.org/jira/browse/KAFKA-1981
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>  Labels: newbie++
> Attachments: KIP for Kafka Compaction Patch.md
>
>
> Currently if you enable log compaction the compactor will kick in whenever 
> you hit a certain "dirty ratio", i.e. when 50% of your data is uncompacted. 
> Other than this we don't give you fine-grained control over when compaction 
> occurs. In addition we never compact the active segment (since it is still 
> being written to).
> Other than this we don't really give you much control over when compaction 
> will happen. The result is that you can't really guarantee that a consumer 
> will get every update to a compacted topic--if the consumer falls behind a 
> bit it might just get the compacted version.
> This is usually fine, but it would be nice to make this more configurable so 
> you could set either a # messages, size, or time bound for compaction.
> This would let you say, for example, "any consumer that is no more than 1 
> hour behind will get every message."
> This should be relatively easy to implement since it just impacts the 
> end-point the compactor considers available for compaction. I think we 
> already have that concept, so this would just be some other overrides to add 
> in when calculating that.
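
(Illustrative sketch only, not the actual implementation: one way to bound the
compaction end-point by time, so that messages newer than a configured lag are
never compacted; the method and config names here are hypothetical, though
LogSegment.largestTimestamp is mentioned in the pull request above.)

{code}
// Hypothetical sketch: bound the cleanable portion of the log by time.
// Segments whose newest message is younger than compactionLagMs stay uncompacted,
// so any consumer less than compactionLagMs behind sees every message.
long firstUncleanableOffset(List<LogSegment> segments, long compactionLagMs, long nowMs) {
    for (LogSegment segment : segments) {
        if (nowMs - segment.largestTimestamp() < compactionLagMs) {
            return segment.baseOffset(); // compact only up to here
        }
    }
    return activeSegment().baseOffset(); // the active segment is never compacted
}
{code}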



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1494: KAFKA-1981 Make log compaction point configurable

2016-08-26 Thread ewasserman
Github user ewasserman closed the pull request at:

https://github.com/apache/kafka/pull/1494


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1981) Make log compaction point configurable

2016-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439827#comment-15439827
 ] 

ASF GitHub Bot commented on KAFKA-1981:
---

GitHub user ewasserman opened a pull request:

https://github.com/apache/kafka/pull/1794

KAFKA-1981 Make log compaction point configurable

Now uses LogSegment.largestTimestamp to determine the age of a segment's
messages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewasserman/kafka feat-1981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1794


commit 50bcc6036217720a69229868fbd7ab3a18c47ff1
Author: Eric Wasserman 
Date:   2016-08-26T19:09:26Z

merge fixes

commit 7e5da446cee19e2db2f7f7f93306d7d81de4c3aa
Author: Eric Wasserman 
Date:   2016-08-26T19:57:58Z

back out orig files

commit 6e8c1ea8832691f4bd8d0c08460dd24a82f676fc
Author: Eric Wasserman 
Date:   2016-08-26T20:48:00Z

change logs to string interpolation




> Make log compaction point configurable
> --
>
> Key: KAFKA-1981
> URL: https://issues.apache.org/jira/browse/KAFKA-1981
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>  Labels: newbie++
> Attachments: KIP for Kafka Compaction Patch.md
>
>
> Currently if you enable log compaction the compactor will kick in whenever 
> you hit a certain "dirty ratio", i.e. when 50% of your data is uncompacted. 
> Other than this we don't give you fine-grained control over when compaction 
> occurs. In addition we never compact the active segment (since it is still 
> being written to).
> Other than this we don't really give you much control over when compaction 
> will happen. The result is that you can't really guarantee that a consumer 
> will get every update to a compacted topic--if the consumer falls behind a 
> bit it might just get the compacted version.
> This is usually fine, but it would be nice to make this more configurable so 
> you could set either a # messages, size, or time bound for compaction.
> This would let you say, for example, "any consumer that is no more than 1 
> hour behind will get every message."
> This should be relatively easy to implement since it just impacts the 
> end-point the compactor considers available for compaction. I think we 
> already have that concept, so this would just be some other overrides to add 
> in when calculating that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1794: KAFKA-1981 Make log compaction point configurable

2016-08-26 Thread ewasserman
GitHub user ewasserman opened a pull request:

https://github.com/apache/kafka/pull/1794

KAFKA-1981 Make log compaction point configurable

Now uses LogSegment.largestTimestamp to determine the age of a segment's
messages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewasserman/kafka feat-1981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1794


commit 50bcc6036217720a69229868fbd7ab3a18c47ff1
Author: Eric Wasserman 
Date:   2016-08-26T19:09:26Z

merge fixes

commit 7e5da446cee19e2db2f7f7f93306d7d81de4c3aa
Author: Eric Wasserman 
Date:   2016-08-26T19:57:58Z

back out orig files

commit 6e8c1ea8832691f4bd8d0c08460dd24a82f676fc
Author: Eric Wasserman 
Date:   2016-08-26T20:48:00Z

change logs to string interpolation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4065) Missing Property in ProducerConfig.java - KafkaProducer API 0.9.0.0

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal updated KAFKA-4065:
---
Summary: Missing Property in ProducerConfig.java - KafkaProducer API 
0.9.0.0  (was: Property missing in ProcuderConfig.java - KafkaProducer API 
0.9.0.0)

> Missing Property in ProducerConfig.java - KafkaProducer API 0.9.0.0
> ---
>
> Key: KAFKA-4065
> URL: https://issues.apache.org/jira/browse/KAFKA-4065
> Project: Kafka
>  Issue Type: Bug
>Reporter: manzar
>
> 1) The "compressed.topics" property is missing from ProducerConfig.java in the
> KafkaProducer API 0.9.0.0. Because of that, we can't enable compression for
> specific topics.
> 2) The "compression.type" property is present in ProducerConfig.java, but it was
> expected to be "compression.codec" according to the official documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439798#comment-15439798
 ] 

Dru Panchal edited comment on KAFKA-4076 at 8/26/16 8:43 PM:
-

[~anyun] I can confirm [~omkreddy]'s analysis on this having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}


was (Author: drupad.p):
[~anyun] I can confirm [~omkreddy]'s analysis on this having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for Kafka the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}

> Kafka broker shuts down due to irrecoverable IO error
> -
>
> Key: KAFKA-4076
> URL: https://issues.apache.org/jira/browse/KAFKA-4076
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Anyun 
>
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-48'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
> at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.(RandomAccessFile.java:241)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
> at kafka.log.Log.roll(Log.scala:627)
> at kafka.log.Log.maybeRoll(Log.scala:602)
> at kafka.log.Log.append(Log.scala:357)
> ... 24 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439798#comment-15439798
 ] 

Dru Panchal commented on KAFKA-4076:


[~anyun] I can confirm [~omkreddy]'s analysis on this having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for Kafka the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}

> Kafka broker shuts down due to irrecoverable IO error
> -
>
> Key: KAFKA-4076
> URL: https://issues.apache.org/jira/browse/KAFKA-4076
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Anyun 
>
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-48'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
> at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.(RandomAccessFile.java:241)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
> at kafka.log.Log.roll(Log.scala:627)
> at kafka.log.Log.maybeRoll(Log.scala:602)
> at kafka.log.Log.append(Log.scala:357)
> ... 24 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-26 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439764#comment-15439764
 ] 

Vahid Hashemian commented on KAFKA-3129:


[~cotedm] Thanks for looking into this. I think if we are going to accept the
current behavior (which is fine with me), this defect should be documented, and
the default acks (as you mentioned) should be set to something other than 0 so
this issue is not revealed using default settings.

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. About half the time it goes
> all the way to 1,000,000. But in other cases it stops short, usually
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4092) retention.bytes should not be allowed to be less than segment.bytes

2016-08-26 Thread Dustin Cote (JIRA)
Dustin Cote created KAFKA-4092:
--

 Summary: retention.bytes should not be allowed to be less than 
segment.bytes
 Key: KAFKA-4092
 URL: https://issues.apache.org/jira/browse/KAFKA-4092
 Project: Kafka
  Issue Type: Improvement
  Components: log
Reporter: Dustin Cote
Assignee: Dustin Cote
Priority: Minor


Right now retention.bytes can be as small as the user wants, but it doesn't
really get acted on for the active segment if retention.bytes is smaller than
segment.bytes. We shouldn't allow retention.bytes to be less than
segment.bytes, and we should validate that at startup.
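
(A minimal sketch of the proposed startup validation; the method signature and
exception type are illustrative, not the eventual patch.)

{code}
// Hypothetical sketch: reject configurations where retention.bytes could
// never be honored because it is smaller than a single segment.
void validateRetention(long retentionBytes, long segmentBytes) {
    // retention.bytes = -1 means "no size-based retention" and is always valid.
    if (retentionBytes >= 0 && retentionBytes < segmentBytes) {
        throw new IllegalArgumentException(
            "retention.bytes (" + retentionBytes + ") must not be less than " +
            "segment.bytes (" + segmentBytes + ")");
    }
}
{code}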



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1782: MINOR: Fix ProcessorTopologyTestDriver to support ...

2016-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1782


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2016-08-26 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439670#comment-15439670
 ] 

Dustin Cote commented on KAFKA-3129:


What I'm seeing is that we are faking the callback
{code}org.apache.kafka.clients.producer.internals.Sender#handleProduceResponse{code}
 for the case where acks=0. This is a problem because the callback gets
generated when we do
{code}org.apache.kafka.clients.producer.internals.Sender#createProduceRequests{code}
 in the run loop, but the actual send happens a bit later. When close() comes
in that window between createProduceRequests and the send, you get messages
that are lost. The funny thing is that if you slow down the call stack a bit by
turning on something like strace, the issue goes away, so it's hard to tell
which layer exactly is buffering the requests.

So my question is: do we want to risk a small performance hit for all producers
to be able to guarantee that all messages with acks=0 actually make it out of the
producer, knowing full well that they aren't going to be verified to have made
it to the broker? I personally don't feel it's worth the extra locking
complexity; it could be documented as a known durability issue when you aren't
using durability settings. If we go that route, I feel the console producer
should have acks=1 by default. That way, users who are getting started with
the built-in tools have a cursory durability guarantee and can tune for
performance instead. What do you think [~ijuma] and [~vahid]?
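
(To make the trade-off concrete, a hedged sketch of the user-side workaround
being discussed: with acks=1 the send callback reflects the broker's
acknowledgement, whereas with acks=0 the client fabricates a success response.
The broker address and topic name are assumptions.)

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "1"); // with "0", the client fakes the response instead

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "message"), (metadata, exception) -> {
                // With acks=1 this reflects the broker's response; with acks=0
                // it fires when the request is created, before the actual send.
                if (exception != null) {
                    exception.printStackTrace();
                }
            });
        } // close() waits for buffered records to be sent out
    }
}
{code}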

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console consumer starts receiving the messages. About half the time 
> it goes all the way to 1,000,000. But in other cases it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #847

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Include request header in exception when correlation of

--
[...truncated 4916 lines...]

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue STARTED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.JsonTest > testJsonEncoding STARTED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.consumer.TopicFilterTest > testWhitelists STARTED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson STARTED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists STARTED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > 

[jira] [Resolved] (KAFKA-3993) Console producer drops data

2016-08-26 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote resolved KAFKA-3993.

Resolution: Duplicate

Marking this a duplicate of KAFKA-3129.  The workaround here is to set acks=1 
on the console producer, but acks=0 shouldn't mean that requests fail to make 
it out of the queue before close() finishes.  I'll follow up on KAFKA-3129 to 
keep everything in one place.

> Console producer drops data
> ---
>
> Key: KAFKA-3993
> URL: https://issues.apache.org/jira/browse/KAFKA-3993
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Roger Hoover
>
> The console producer drops data when the process exits too quickly.  I 
> suspect that the shutdown hook does not call close() or something goes wrong 
> during that close().
> Here's a simple script to illustrate the issue:
> {noformat}
> export BOOTSTRAP_SERVERS=localhost:9092
> export TOPIC=bar
> export MESSAGES=1
> ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --partitions 1 
> --replication-factor 1 --topic "$TOPIC" \
> && echo "acks=all" > /tmp/producer.config \
> && echo "linger.ms=0" >> /tmp/producer.config \
> && seq "$MESSAGES" | ./bin/kafka-console-producer.sh --broker-list 
> "$BOOTSTRAP_SERVERS" --topic "$TOPIC" --producer-config /tmp/producer.config \
> && ./bin/kafka-console-consumer.sh --bootstrap-server "$BOOTSTRAP_SERVERS" 
> --new-consumer --from-beginning --max-messages "${MESSAGES}" --topic "$TOPIC"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4091) Unable to produce or consume on any topic

2016-08-26 Thread Avi Chopra (JIRA)
Avi Chopra created KAFKA-4091:
-

 Summary: Unable to produce or consume on any topic
 Key: KAFKA-4091
 URL: https://issues.apache.org/jira/browse/KAFKA-4091
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.0
 Environment: Amazon Linux, t2.micro
Reporter: Avi Chopra
Priority: Critical


While trying to set up Kafka with 2 slave boxes and 1 master box, I hit a weird 
condition where I was not able to consume from or produce to a topic.

I am using MirrorMaker to sync data between slave <--> master, and I am getting 
the following logs unendingly:

[2016-08-26 14:28:33,897] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:43,515] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:45,118] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:46,721] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:48,324] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:49,927] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:53,029] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)

The only way I could recover was by restarting Kafka, which produced logs like 
these:

[2016-08-26 14:30:54,856] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-43/.index, deleting and rebuilding index... (kafka.log.Log)
[2016-08-26 14:30:54,856] INFO Recovering unflushed segment 0 in log __consumer_offsets-43. (kafka.log.Log)
[2016-08-26 14:30:54,857] INFO Completed load of log __consumer_offsets-43 with log end offset 0 (kafka.log.Log)
[2016-08-26 14:30:54,860] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-26/.index, deleting and rebuilding index... (kafka.log.Log)
[2016-08-26 14:30:54,860] INFO Recovering unflushed segment 0 in log __consumer_offsets-26. (kafka.log.Log)
[2016-08-26 14:30:54,861] INFO Completed load of log __consumer_offsets-26 with log end offset 0 (kafka.log.Log)
[2016-08-26 14:30:54,864] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-35/.index, deleting and rebuilding index... (kafka.log.Log)

ERROR Error when sending message to topic dr_ubr_analytics_limits with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 6 ms.

The consumer group command was showing a major lag.

This is my test phase, so I was able to restart and recover from the master box, 
but I want to know what caused this issue and how it can be avoided. Is there a 
way to debug this issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1793: MINOR: Include request header in exception when co...

2016-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1793


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Kafka KIP meeting Aug 30 at 11:00am PST

2016-08-26 Thread Jun Rao
Hi, everyone,

We plan to have a Kafka KIP meeting this coming Tuesday at 11:00am PST. If
you plan to attend but haven't received an invite, please let me know. The
following is the tentative agenda.

Agenda:
KIP-48: delegation tokens

Thanks,

Jun


Re: Queryable state client read guarantees

2016-08-26 Thread Eno Thereska
Hi Mikael,

Very good question. You are correct about the desired semantics.

The semantics of case (a) depend on the local store, as you mention. For case 
(b), the final check is always performed again on get(): if the store has 
disappeared between the lookup and the get, the user will get an exception and 
will have to retry. The state store on A does become invalid when the state is 
re-assigned. There isn't any other way to detect the change, since we wanted to 
hide the system details (e.g., rebalance) from the user.
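
For illustration, a minimal sketch of that lookup-and-retry pattern, assuming 
the KIP-67 interactive-query API (the store name "word-counts" and the value 
type are hypothetical):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

final class QueryableStateClient {
    // Retry get(k): if the store is re-assigned between the lookup and the
    // get, an InvalidStateStoreException is thrown and the client retries.
    static Long getWithRetry(final KafkaStreams streams, final String key)
            throws InterruptedException {
        while (true) {
            try {
                final ReadOnlyKeyValueStore<String, Long> store =
                        streams.store("word-counts",
                                QueryableStoreTypes.<String, Long>keyValueStore());
                return store.get(key);
            } catch (final InvalidStateStoreException e) {
                Thread.sleep(100); // state is migrating; look up again and retry
            }
        }
    }
}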

Does this make sense?

Thanks
Eno

> On 26 Aug 2016, at 16:26, Mikael Högqvist  wrote:
> 
> Hi,
> 
> I've tried to understand the implementation and APIs from KIP-67 and would
> like to know the possible semantics for read requests from a client
> perspective. As a developer of a queryable state client, the access
> semantics I would like to have (I think...) is one where subsequent reads
> always return the value from the last read or a newer value (if the state
> store is available). This should be independent of the current system
> configuration, e.g. re-balancing, failures etc. .
> 
> A client-side get(k) can be implemented by starting with a lookup for the
> instances that store k followed by a retrieve of the value associated with
> k from the instances returned by the lookup. In the worst case we can
> always do scatter+gather over all instances.
> 
> We can start by considering a get(k) under two failure-free cases: a)
> single instance and b) a system going from one instance to two instances. In
> case a) the lookup will always return the same instance and the following
> get will read from a local store. The semantics in this case depends on the
> local store.
> 
> For case b) the lookup returns instance A, but in between the lookup and
> the get, a new instance B is introduced to which k is transferred. Does the
> state store on A become invalid when the state is re-assigned? Is there
> another way for the client to detect the change?
> 
> Best Regards,
> Mikael



[jira] [Assigned] (KAFKA-4089) KafkaProducer raises Batch Expired exception

2016-08-26 Thread Sumant Tambe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumant Tambe reassigned KAFKA-4089:
---

Assignee: Sumant Tambe  (was: Dong Lin)

> KafkaProducer raises Batch Expired exception 
> -
>
> Key: KAFKA-4089
> URL: https://issues.apache.org/jira/browse/KAFKA-4089
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: Sumant Tambe
>Assignee: Sumant Tambe
>
> The basic idea of batch expiration is that we don't expire batches while the 
> producer thinks it can make progress. Currently the notion of "making 
> progress" involves only in-flight requests (muted partitions). That's not 
> sufficient. The other half of "making progress" is metadata freshness: if we 
> have stale metadata, we cannot trust it, and therefore cannot conclude that no 
> progress can be made. So we should not expire batches when metadata is stale. 
> This also implies we don't want to expire batches that can still make 
> progress, even if they remain in the queue longer than the batch expiration 
> time. 
> The current condition in {{abortExpiredBatches}} that bypasses muted 
> partitions is necessary but not sufficient. It should additionally skip 
> expiration when metadata is stale. 
> Conversely, it should expire a batch only when all of the following are true 
> (see the sketch after this list):
> # the partition is not muted, AND
> # metadata is fresh, AND
> # the batch has remained in the queue longer than the request timeout.
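
A hedged sketch of that combined condition in Java (the helper names are 
illustrative only, not the actual Sender/RecordAccumulator internals):

{code}
// Sketch only: the three-part expiration condition described above.
final class BatchExpiryCheck {
    static boolean shouldExpire(final boolean partitionMuted,
                                final boolean metadataFresh,
                                final long timeInQueueMs,
                                final long requestTimeoutMs) {
        return !partitionMuted                       // 1. not muted
                && metadataFresh                     // 2. metadata is fresh
                && timeInQueueMs > requestTimeoutMs; // 3. waited past the request timeout
    }
}
{code}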



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Jaikiran Pai

Thank you for explaining that; it makes sense.

-Jaikiran
On Friday 26 August 2016 09:02 PM, Rajini Sivaram wrote:

When a port is configured for SSL, the broker expects SSL handshake messages
before any Kafka requests are sent. But since the client is using PLAINTEXT
on an SSL port, the client is interpreting SSL handshake protocol messages
as Kafka responses. Hence the size (300MB) being allocated doesn't really
correspond to a real size field. Limiting the maximum buffer size would avoid
OOM in this case.

On Fri, Aug 26, 2016 at 4:13 PM, Jaikiran Pai 
wrote:


Hi Rajini,

Just filed a JIRA as suggested https://issues.apache.org/jira
/browse/KAFKA-4090. More comments inline.

On Friday 26 August 2016 07:53 PM, Rajini Sivaram wrote:


Jaikiran,

At the moment there is no client-side configuration parameter to restrict
the maximum request size on clients. On the broker side,
"socket.request.max.bytes" restricts the maximum buffer size allocated,
protecting the broker from badly behaved or misconfigured clients. A KIP
would be required to add a similar configuration parameter on the
client-side. It will definitely be useful for clients to catch this issue
and report a meaningful error,


I'm wondering what's causing the current code to request around 300MB of
allocation on the client side? Is it just that the number being sent is
incorrect (a corrupt protocol message, maybe), or does it really need that
many bytes?

I don't have good knowledge of the Kafka code, so I'm not really sure what kind
of data the broker and client exchange. But wouldn't the broker know beforehand
that a particular port is configured for a certain protocol (SSL in this
case)? If a client (producer/consumer) tries to connect to it with an
unsupported protocol (plaintext in this case), the broker could just throw back
an error code which is then understood/parsed by the client side, failing only
that specific consumer/producer. That way the whole JVM need not be impacted.
As I said, I don't have any good knowledge of the Kafka code, so I might be
suggesting something that isn't feasible in terms of design/implementation in
the Kafka library.

-Jaikiran



but I am not sure whether clients would

typically be able to recover from this scenario without restarting the
JVM.


On Fri, Aug 26, 2016 at 3:18 PM, Ismael Juma  wrote:

Yes, please file a JIRA.

On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai 
wrote:

We have been using Kafka 0.9.0.1 (server and Java client libraries). So

far we had been using it with plaintext transport but recently have been
considering upgrading to using SSL. It mostly works except that a
mis-configured producer (and even consumer) causes a hard to relate
OutOfMemory exception and thus causing the JVM in which the client is
running, to go into a bad state. We can consistently reproduce that OOM
very easily. We decided to check if this is something that is fixed in
0.10.0.1 so upgraded one of our test systems to that version (both
server
and client libraries) but still see the same issue. Here's how it can be
easily reproduced


1. Enable SSL listener on the broker via server.properties, as per the
Kafka documentation

listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=
ssl.keystore.password=pass
ssl.key.password=pass
ssl.truststore.location=
ssl.truststore.password=pass


2. Start zookeeper and kafka server

3. Create a "oom-test" topic (which will be used for these tests):

kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
--partitions 1 --replication-factor 1

4. Create a simple producer which sends a single message to the topic
via
Java (new producer) APIs:

public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // NOTE: Intentionally use a SSL port without specifying
        // security.protocol as SSL
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a message to the topic
            final Future<RecordMetadata> recordMetadataFuture =
                    producer.send(new ProducerRecord<>(topicName, message));
            final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
            System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
        }
        System.out.println("Tests complete");
    }
}

Notice that the server URL is 

Re: Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Rajini Sivaram
When a port is configured for SSL, the broker expects SSL handshake messages
before any Kafka requests are sent. But since the client is using PLAINTEXT
on an SSL port, the client is interpreting SSL handshake protocol messages
as Kafka responses. Hence the size (300MB) being allocated doesn't really
correspond to a real size field. Limiting the maximum buffer size would avoid
OOM in this case.
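
For illustration, a minimal sketch of such a bound, loosely modeled on the
broker-side socket.request.max.bytes check (a client-side maxReceiveSize
setting is hypothetical; no such configuration exists today):

import java.nio.ByteBuffer;

final class BoundedReceive {
    // Sketch only: validate the 4-byte size prefix before allocating, instead
    // of trusting whatever bytes happened to arrive on the socket.
    static ByteBuffer allocateBounded(final int receiveSize, final int maxReceiveSize) {
        if (receiveSize < 0 || receiveSize > maxReceiveSize) {
            throw new IllegalStateException("Invalid receive size " + receiveSize
                    + " (maximum allowed " + maxReceiveSize + ")");
        }
        return ByteBuffer.allocate(receiveSize);
    }
}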

On Fri, Aug 26, 2016 at 4:13 PM, Jaikiran Pai 
wrote:

>
> Hi Rajini,
>
> Just filed a JIRA as suggested https://issues.apache.org/jira
> /browse/KAFKA-4090. More comments inline.
>
> On Friday 26 August 2016 07:53 PM, Rajini Sivaram wrote:
>
>> Jaikiran,
>>
>> At the moment there is no client-side configuration parameter to restrict
>> the maximum request size on clients. On the broker side,
>> "socket.request.max.bytes" restricts the maximum buffer size allocated,
>> protecting the broker from badly behaved or misconfigured clients. A KIP
>> would be required to add a similar configuration parameter on the
>> client-side. It will definitely be useful for clients to catch this issue
>> and report a meaningful error,
>>
>
> I'm wondering what's causing the current code to request around 300MB of
> allocation on the client side? Is it just that the number being sent is
> incorrect (a corrupt protocol message, maybe), or does it really need that
> many bytes?
>
> I don't have good knowledge of the Kafka code, so I'm not really sure what kind
> of data the broker and client exchange. But wouldn't the broker know beforehand
> that a particular port is configured for a certain protocol (SSL in this
> case)? If a client (producer/consumer) tries to connect to it with an
> unsupported protocol (plaintext in this case), the broker could just throw back
> an error code which is then understood/parsed by the client side, failing only
> that specific consumer/producer. That way the whole JVM need not be impacted.
> As I said, I don't have any good knowledge of the Kafka code, so I might be
> suggesting something that isn't feasible in terms of design/implementation in
> the Kafka library.
>
> -Jaikiran
>
>
>
> but I am not sure whether clients would
>> typically be able to recover from this scenario without restarting the
>> JVM.
>>
>>
>> On Fri, Aug 26, 2016 at 3:18 PM, Ismael Juma  wrote:
>>
>> Yes, please file a JIRA.
>>>
>>> On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai 
>>> wrote:
>>>
>>> We have been using Kafka 0.9.0.1 (server and Java client libraries). So
 far we had been using it with plaintext transport but recently have been
 considering upgrading to using SSL. It mostly works except that a
 mis-configured producer (and even consumer) causes a hard to relate
 OutOfMemory exception and thus causing the JVM in which the client is
 running, to go into a bad state. We can consistently reproduce that OOM
 very easily. We decided to check if this is something that is fixed in
 0.10.0.1 so upgraded one of our test systems to that version (both
 server
 and client libraries) but still see the same issue. Here's how it can be
 easily reproduced


 1. Enable SSL listener on the broker via server.properties, as per the
 Kafka documentation

 listeners=PLAINTEXT://:9092,SSL://:9093
 ssl.keystore.location=
 ssl.keystore.password=pass
 ssl.key.password=pass
 ssl.truststore.location=
 ssl.truststore.password=pass


 2. Start zookeeper and kafka server

 3. Create a "oom-test" topic (which will be used for these tests):

kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
--partitions 1 --replication-factor 1

 4. Create a simple producer which sends a single message to the topic
 via
 Java (new producer) APIs:

public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // NOTE: Intentionally use a SSL port without specifying
        // security.protocol as SSL
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a message to the topic
            final Future<RecordMetadata> recordMetadataFuture =
                    producer.send(new ProducerRecord<>(topicName, message));
  

Queryable state client read guarantees

2016-08-26 Thread Mikael Högqvist
Hi,

I've tried to understand the implementation and APIs from KIP-67 and would
like to know the possible semantics for read requests from a client
perspective. As a developer of a queryable state client, the access
semantics I would like to have (I think...) is one where subsequent reads
always return the value from the last read or a newer value (if the state
store is available). This should be independent of the current system
configuration, e.g. re-balancing, failures, etc.

A client-side get(k) can be implemented by starting with a lookup for the
instances that store k followed by a retrieve of the value associated with
k from the instances returned by the lookup. In the worst case we can
always do scatter+gather over all instances.
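
As a sketch of what I mean, with a purely hypothetical lookup interface (none
of these names exist in the API today):

interface StateLookup {
    java.util.List<String> instancesFor(String key); // lookup step
    java.util.List<String> allInstances();           // scatter+gather fallback
    Long getFrom(String instance, String key);       // retrieve step; null if absent
}

final class ClientGet {
    static Long get(final StateLookup lookup, final String key) {
        // First try the instances the lookup says hold k.
        for (final String instance : lookup.instancesFor(key)) {
            final Long value = lookup.getFrom(instance, key);
            if (value != null)
                return value;
        }
        // Worst case: scatter+gather over all instances.
        for (final String instance : lookup.allInstances()) {
            final Long value = lookup.getFrom(instance, key);
            if (value != null)
                return value;
        }
        return null;
    }
}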

We can start by considering a get(k) under two failure-free cases: a)
single instance and b) a system going from one instance to two instances. In
case a) the lookup will always return the same instance and the following
get will read from a local store. The semantics in this case depends on the
local store.

For case b) the lookup returns instance A, but in between the lookup and
the get, a new instance B is introduced to which k is transferred. Does the
state store on A become invalid when the state is re-assigned? Is there
another way for the client to detect the change?

Best Regards,
Mikael


[jira] [Commented] (KAFKA-4081) Consumer API consumer new interface commitSyn does not verify the validity of offset

2016-08-26 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439258#comment-15439258
 ] 

Mickael Maison commented on KAFKA-4081:
---

The same happens with 0.10.0.1. Looking at the offset commit logic, the value 
of the offset is not checked at all.

It seems some code (ConsumerCoordinator:730) attributes a special meaning to 
negative offsets. 
One inconsistency I can find is that in the client that commits a negative 
offset, calls to committed() will return the negative value, whereas in clients 
that just fetch the committed offset it returns nothing. This is because 
committing updates the local SubscriptionState. This can be seen with 
ConsumerGroupCommand: it shows "unknown" for topics that have a negative offset 
committed.
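
For illustration, a minimal repro sketch (broker address, topic, and group id 
are hypothetical):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Sketch only: commitSync accepts a negative offset without validation.
public class NegativeOffsetCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "negative-offset-test");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("test", 0);
            consumer.assign(Collections.singletonList(tp));
            // Succeeds even though -1 is not a valid offset:
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(-1L)));
            // The committing client then sees the negative value locally:
            System.out.println("committed: " + consumer.committed(tp));
        }
    }
}
{code}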

Are there use cases we'd break by forcing offsets to be valid when committed ? 
[~ijuma]



> Consumer API consumer new interface commitSyn does not verify the validity of 
> offset
> 
>
> Key: KAFKA-4081
> URL: https://issues.apache.org/jira/browse/KAFKA-4081
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: lifeng
>
> The new consumer API's commitSync does not verify the validity of the offset: 
> committing an illegal offset (offset < 0 or offset > HW) returns successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4090) JVM runs into OOM if (Java) client uses a SSL port without setting the security protocol

2016-08-26 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15439247#comment-15439247
 ] 

jaikiran pai commented on KAFKA-4090:
-

Mailing list discussion is here 
https://www.mail-archive.com/dev@kafka.apache.org/msg55658.html


> JVM runs into OOM if (Java) client uses a SSL port without setting the 
> security protocol
> 
>
> Key: KAFKA-4090
> URL: https://issues.apache.org/jira/browse/KAFKA-4090
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: jaikiran pai
>
> Quoting from the mail thread that was sent to Kafka mailing list:
> {quote}
> We have been using Kafka 0.9.0.1 (server and Java client libraries). So far 
> we had been using it with plaintext transport but recently have been 
> considering upgrading to using SSL. It mostly works except that a 
> mis-configured producer (and even consumer) causes a hard to relate 
> OutOfMemory exception and thus causing the JVM in which the client is 
> running, to go into a bad state. We can consistently reproduce that OOM very 
> easily. We decided to check if this is something that is fixed in 0.10.0.1 so 
> upgraded one of our test systems to that version (both server and client 
> libraries) but still see the same issue. Here's how it can be easily 
> reproduced
> 1. Enable SSL listener on the broker via server.properties, as per the Kafka 
> documentation
> {code}
> listeners=PLAINTEXT://:9092,SSL://:9093
> ssl.keystore.location=
> ssl.keystore.password=pass
> ssl.key.password=pass
> ssl.truststore.location=
> ssl.truststore.password=pass
> {code}
> 2. Start zookeeper and kafka server
> 3. Create a "oom-test" topic (which will be used for these tests):
> {code}
> kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test  
> --partitions 1 --replication-factor 1
> {code}
> 4. Create a simple producer which sends a single message to the topic via 
> Java (new producer) APIs:
> {code}
> public class OOMTest {
>
>     public static void main(final String[] args) throws Exception {
>         final Properties kafkaProducerConfigs = new Properties();
>         // NOTE: Intentionally use a SSL port without specifying
>         // security.protocol as SSL
>         kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
>                 "localhost:9093");
>         kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
>                 StringSerializer.class.getName());
>         kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
>                 StringSerializer.class.getName());
>         try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
>             System.out.println("Created Kafka producer");
>             final String topicName = "oom-test";
>             final String message = "Hello OOM!";
>             // send a message to the topic
>             final Future<RecordMetadata> recordMetadataFuture =
>                     producer.send(new ProducerRecord<>(topicName, message));
>             final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
>             System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
>         }
>         System.out.println("Tests complete");
>     }
> }
> {code}
> Notice that the server URL is using a SSL endpoint localhost:9093 but isn't 
> specifying any of the other necessary SSL configs like security.protocol.
> 5. For the sake of easily reproducing this issue run this class with a max 
> heap size of 256MB (-Xmx256M). Running this code throws up the following 
> OutOfMemoryError in one of the Sender threads:
> {code}
> 18:33:25,770 ERROR [KafkaThread] - Uncaught exception in 
> kafka-producer-network-thread | producer-1:
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
> at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Note that I set it to 256MB as heap size to easily reproduce it but this 
> isn't specific to that size. 

Re: Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Jaikiran Pai


Hi Rajini,

Just filed a JIRA as suggested 
https://issues.apache.org/jira/browse/KAFKA-4090. More comments inline.


On Friday 26 August 2016 07:53 PM, Rajini Sivaram wrote:

Jaikiran,

At the moment there is no client-side configuration parameter to restrict
the maximum request size on clients. On the broker side,
"socket.request.max.bytes" restricts the maximum buffer size allocated,
protecting the broker from badly behaved or misconfigured clients. A KIP
would be required to add a similar configuration parameter on the
client-side. It will definitely be useful for clients to catch this issue
and report a meaningful error,


I'm wondering what's causing the current code to request around 300MB of 
allocation on the client side? Is it just that the number being sent is 
incorrect (a corrupt protocol message, maybe), or does it really need that 
many bytes?


I don't have good knowledge of the Kafka code, so I'm not really sure what 
kind of data the broker and client exchange. But wouldn't the broker know 
beforehand that a particular port is configured for a certain protocol 
(SSL in this case)? If a client (producer/consumer) tries to connect to it 
with an unsupported protocol (plaintext in this case), the broker could just 
throw back an error code which is then understood/parsed by the client side, 
failing only that specific consumer/producer. That way the whole JVM need not 
be impacted. As I said, I don't have any good knowledge of the Kafka code, so 
I might be suggesting something that isn't feasible in terms of 
design/implementation in the Kafka library.


-Jaikiran



but I am not sure whether clients would
typically be able to recover from this scenario without restarting the JVM.


On Fri, Aug 26, 2016 at 3:18 PM, Ismael Juma  wrote:


Yes, please file a JIRA.

On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai 
wrote:


We have been using Kafka 0.9.0.1 (server and Java client libraries). So
far we had been using it with plaintext transport but recently have been
considering upgrading to using SSL. It mostly works except that a
mis-configured producer (and even consumer) causes a hard to relate
OutOfMemory exception and thus causing the JVM in which the client is
running, to go into a bad state. We can consistently reproduce that OOM
very easily. We decided to check if this is something that is fixed in
0.10.0.1 so upgraded one of our test systems to that version (both server
and client libraries) but still see the same issue. Here's how it can be
easily reproduced


1. Enable SSL listener on the broker via server.properties, as per the
Kafka documentation

listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=
ssl.keystore.password=pass
ssl.key.password=pass
ssl.truststore.location=
ssl.truststore.password=pass


2. Start zookeeper and kafka server

3. Create a "oom-test" topic (which will be used for these tests):

kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
--partitions 1 --replication-factor 1

4. Create a simple producer which sends a single message to the topic via
Java (new producer) APIs:

public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // NOTE: Intentionally use a SSL port without specifying
        // security.protocol as SSL
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a message to the topic
            final Future<RecordMetadata> recordMetadataFuture =
                    producer.send(new ProducerRecord<>(topicName, message));
            final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
            System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
        }
        System.out.println("Tests complete");
    }
}

Notice that the server URL is using a SSL endpoint localhost:9093 but
isn't specifying any of the other necessary SSL configs like
security.protocol.

5. For the sake of easily reproducing this issue run this class with a

max

heap size of 256MB (-Xmx256M). Running this code throws up the following
OutOfMemoryError in one of the Sender threads:

18:33:25,770 ERROR [KafkaThread] - Uncaught exception in kafka-producer-network-thread | producer-1:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
 

[jira] [Created] (KAFKA-4090) JVM runs into OOM if (Java) client uses a SSL port without setting the security protocol

2016-08-26 Thread jaikiran pai (JIRA)
jaikiran pai created KAFKA-4090:
---

 Summary: JVM runs into OOM if (Java) client uses a SSL port 
without setting the security protocol
 Key: KAFKA-4090
 URL: https://issues.apache.org/jira/browse/KAFKA-4090
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.0.1, 0.9.0.1
Reporter: jaikiran pai


Quoting from the mail thread that was sent to Kafka mailing list:

{quote}
We have been using Kafka 0.9.0.1 (server and Java client libraries). So far we 
had been using it with plaintext transport but recently have been considering 
upgrading to using SSL. It mostly works except that a mis-configured producer 
(and even consumer) causes a hard to relate OutOfMemory exception and thus 
causing the JVM in which the client is running, to go into a bad state. We can 
consistently reproduce that OOM very easily. We decided to check if this is 
something that is fixed in 0.10.0.1 so upgraded one of our test systems to that 
version (both server and client libraries) but still see the same issue. Here's 
how it can be easily reproduced


1. Enable SSL listener on the broker via server.properties, as per the Kafka 
documentation

{code}
listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=
ssl.keystore.password=pass
ssl.key.password=pass
ssl.truststore.location=
ssl.truststore.password=pass
{code}

2. Start zookeeper and kafka server

3. Create a "oom-test" topic (which will be used for these tests):

{code}
kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test  
--partitions 1 --replication-factor 1
{code}

4. Create a simple producer which sends a single message to the topic via Java 
(new producer) APIs:

{code}
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // NOTE: Intentionally use a SSL port without specifying
        // security.protocol as SSL
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a message to the topic
            final Future<RecordMetadata> recordMetadataFuture =
                    producer.send(new ProducerRecord<>(topicName, message));
            final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
            System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
        }
        System.out.println("Tests complete");
    }
}

{code}

Notice that the server URL is using a SSL endpoint localhost:9093 but isn't 
specifying any of the other necessary SSL configs like security.protocol.

5. For the sake of easily reproducing this issue run this class with a max heap 
size of 256MB (-Xmx256M). Running this code throws up the following 
OutOfMemoryError in one of the Sender threads:

{code}
18:33:25,770 ERROR [KafkaThread] - Uncaught exception in 
kafka-producer-network-thread | producer-1:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at 
org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)

{code}

Note that I set it to 256MB as heap size to easily reproduce it but this isn't 
specific to that size. We have been able to reproduce it at even 516MB and 
higher too.

This even happens with the consumer and in fact can be reproduced out of the 
box with the kafka-consumer-group.sh script. All you have to do is run that 
tool as follows:

{code}
./kafka-consumer-groups.sh --list --bootstrap-server localhost:9093 
--new-consumer
{code}

{code}
Error while executing consumer group command Java heap space
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
   

Re: Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Rajini Sivaram
Jaikiran,

At the moment there is no client-side configuration parameter to restrict
the maximum request size on clients. On the broker side,
"socket.request.max.bytes" restricts the maximum buffer size allocated,
protecting the broker from badly behaved or misconfigured clients. A KIP
would be required to add a similar configuration parameter on the
client-side. It will definitely be useful for clients to catch this issue
and report a meaningful error, but I am not sure whether clients would
typically be able to recover from this scenario without restarting the JVM.


On Fri, Aug 26, 2016 at 3:18 PM, Ismael Juma  wrote:

> Yes, please file a JIRA.
>
> On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai 
> wrote:
>
> > We have been using Kafka 0.9.0.1 (server and Java client libraries). So
> > far we had been using it with plaintext transport but recently have been
> > considering upgrading to using SSL. It mostly works except that a
> > mis-configured producer (and even consumer) causes a hard to relate
> > OutOfMemory exception and thus causing the JVM in which the client is
> > running, to go into a bad state. We can consistently reproduce that OOM
> > very easily. We decided to check if this is something that is fixed in
> > 0.10.0.1 so upgraded one of our test systems to that version (both server
> > and client libraries) but still see the same issue. Here's how it can be
> > easily reproduced
> >
> >
> > 1. Enable SSL listener on the broker via server.properties, as per the
> > Kafka documentation
> >
> > listeners=PLAINTEXT://:9092,SSL://:9093
> > ssl.keystore.location=
> > ssl.keystore.password=pass
> > ssl.key.password=pass
> > ssl.truststore.location=
> > ssl.truststore.password=pass
> >
> >
> > 2. Start zookeeper and kafka server
> >
> > 3. Create a "oom-test" topic (which will be used for these tests):
> >
> > kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
> > --partitions 1 --replication-factor 1
> >
> > 4. Create a simple producer which sends a single message to the topic via
> > Java (new producer) APIs:
> >
> > public class OOMTest {
> >
> >     public static void main(final String[] args) throws Exception {
> >         final Properties kafkaProducerConfigs = new Properties();
> >         // NOTE: Intentionally use a SSL port without specifying
> >         // security.protocol as SSL
> >         kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
> >                 "localhost:9093");
> >         kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
> >                 StringSerializer.class.getName());
> >         kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
> >                 StringSerializer.class.getName());
> >         try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
> >             System.out.println("Created Kafka producer");
> >             final String topicName = "oom-test";
> >             final String message = "Hello OOM!";
> >             // send a message to the topic
> >             final Future<RecordMetadata> recordMetadataFuture =
> >                     producer.send(new ProducerRecord<>(topicName, message));
> >             final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
> >             System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
> >         }
> >         System.out.println("Tests complete");
> >     }
> > }
> >
> > Notice that the server URL is using a SSL endpoint localhost:9093 but
> > isn't specifying any of the other necessary SSL configs like
> > security.protocol.
> >
> > 5. For the sake of easily reproducing this issue run this class with a
> max
> > heap size of 256MB (-Xmx256M). Running this code throws up the following
> > OutOfMemoryError in one of the Sender threads:
> >
> > 18:33:25,770 ERROR [KafkaThread] - Uncaught exception in
> > kafka-producer-network-thread | producer-1:
> > java.lang.OutOfMemoryError: Java heap space
> > at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> > at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> > at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> > at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> > at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
> > at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
> > at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
> > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> > at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> > at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Note that I set it to 256MB as heap size to easily reproduce it but this
> > 

[GitHub] kafka pull request #1793: MINOR: Include request header in exception when co...

2016-08-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1793

MINOR: Include request header in exception when correlation of 
request/response fails



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
include-request-header-if-request-correlation-fails

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1793


commit e5f70cd6f4761ab8b6f7f024b49b2a19094e829f
Author: Ismael Juma 
Date:   2016-08-26T14:21:10Z

MINOR: Include request header in exception when correlation of 
request/response fails




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Ismael Juma
Yes, please file a JIRA.

On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai 
wrote:

> We have been using Kafka 0.9.0.1 (server and Java client libraries). So
> far we had been using it with plaintext transport but recently have been
> considering upgrading to using SSL. It mostly works except that a
> mis-configured producer (and even consumer) causes a hard to relate
> OutOfMemory exception and thus causing the JVM in which the client is
> running, to go into a bad state. We can consistently reproduce that OOM
> very easily. We decided to check if this is something that is fixed in
> 0.10.0.1 so upgraded one of our test systems to that version (both server
> and client libraries) but still see the same issue. Here's how it can be
> easily reproduced
>
>
> 1. Enable SSL listener on the broker via server.properties, as per the
> Kafka documentation
>
> listeners=PLAINTEXT://:9092,SSL://:9093
> ssl.keystore.location=
> ssl.keystore.password=pass
> ssl.key.password=pass
> ssl.truststore.location=
> ssl.truststore.password=pass
>
>
> 2. Start zookeeper and kafka server
>
> 3. Create a "oom-test" topic (which will be used for these tests):
>
> kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
> --partitions 1 --replication-factor 1
>
> 4. Create a simple producer which sends a single message to the topic via
> Java (new producer) APIs:
>
> public class OOMTest {
>
>     public static void main(final String[] args) throws Exception {
>         final Properties kafkaProducerConfigs = new Properties();
>         // NOTE: Intentionally use a SSL port without specifying
>         // security.protocol as SSL
>         kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
>                 "localhost:9093");
>         kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
>                 StringSerializer.class.getName());
>         kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
>                 StringSerializer.class.getName());
>         try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
>             System.out.println("Created Kafka producer");
>             final String topicName = "oom-test";
>             final String message = "Hello OOM!";
>             // send a message to the topic
>             final Future<RecordMetadata> recordMetadataFuture =
>                     producer.send(new ProducerRecord<>(topicName, message));
>             final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
>             System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
>         }
>         System.out.println("Tests complete");
>     }
> }
>
> Notice that the server URL is using a SSL endpoint localhost:9093 but
> isn't specifying any of the other necessary SSL configs like
> security.protocol.
>
> 5. For the sake of easily reproducing this issue run this class with a max
> heap size of 256MB (-Xmx256M). Running this code throws up the following
> OutOfMemoryError in one of the Sender threads:
>
> 18:33:25,770 ERROR [KafkaThread] - Uncaught exception in
> kafka-producer-network-thread | producer-1:
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
> at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> at java.lang.Thread.run(Thread.java:745)
>
>
> Note that I set it to 256MB as heap size to easily reproduce it but this
> isn't specific to that size. We have been able to reproduce it at even
> 516MB and higher too.
>
> This even happens with the consumer and in fact can be reproduced out of
> the box with the kafka-consumer-group.sh script. All you have to do is run
> that tool as follows:
>
>
> ./kafka-consumer-groups.sh --list --bootstrap-server localhost:9093
> --new-consumer
>
> Error while executing consumer group command Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaCh

Batch Expired

2016-08-26 Thread Ghosh, Achintya (Contractor)
Hi there,

What are the recommended producer settings? I see a lot of Batch Expired 
exceptions even though I set request.timeout.ms=6.

Producer settings:
acks=1
retries=3
batch.size=16384
linger.ms=5
buffer.memory=33554432
request.timeout.ms=6
timeout.ms=6

Thanks
Achintya


Using SSL port without specifying security protocol causes OOM on 0.9.0.1 and 0.10.0.1

2016-08-26 Thread Jaikiran Pai
We have been using Kafka 0.9.0.1 (server and Java client libraries). So 
far we had been using it with plaintext transport, but recently we have been 
considering upgrading to SSL. It mostly works, except that a 
mis-configured producer (and even consumer) causes a hard-to-relate 
OutOfMemory exception, causing the JVM in which the client is 
running to go into a bad state. We can consistently reproduce that OOM 
very easily. We decided to check whether this is something that is fixed in 
0.10.0.1, so we upgraded one of our test systems to that version (both 
server and client libraries), but we still see the same issue. Here's how it 
can be easily reproduced:



1. Enable SSL listener on the broker via server.properties, as per the 
Kafka documentation


listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=
ssl.keystore.password=pass
ssl.key.password=pass
ssl.truststore.location=
ssl.truststore.password=pass


2. Start zookeeper and kafka server

3. Create a "oom-test" topic (which will be used for these tests):

kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test
--partitions 1 --replication-factor 1


4. Create a simple producer which sends a single message to the topic 
via Java (new producer) APIs:


public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // NOTE: Intentionally use a SSL port without specifying
        // security.protocol as SSL
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a message to the topic
            final Future<RecordMetadata> recordMetadataFuture =
                    producer.send(new ProducerRecord<>(topicName, message));
            final RecordMetadata sentRecordMetadata = recordMetadataFuture.get();
            System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
        }
        System.out.println("Tests complete");
    }
}

Notice that the server URL is using a SSL endpoint localhost:9093 but 
isn't specifying any of the other necessary SSL configs like 
security.protocol.


5. For the sake of easily reproducing this issue run this class with a 
max heap size of 256MB (-Xmx256M). Running this code throws up the 
following OutOfMemoryError in one of the Sender threads:


18:33:25,770 ERROR [KafkaThread] - Uncaught exception in kafka-producer-network-thread | producer-1:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)


Note that I set the heap size to 256MB to reproduce this easily, but the 
issue isn't specific to that size. We have been able to reproduce it at 
516MB and higher too.
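
Reading the stack trace, what appears to happen (a hedged interpretation, 
not a statement of Kafka internals beyond what the trace shows) is that the 
plaintext client treats the first four bytes the broker's SSL listener sends 
back as a response-size prefix and tries to allocate a buffer of that size. 
A self-contained illustration with made-up byte values:

import java.nio.ByteBuffer;

public class SizePrefixMisread {
    public static void main(String[] args) {
        // Illustrative first bytes of a TLS handshake record
        // (0x16 = handshake content type, 0x03 0x01 = protocol version)
        byte[] tlsBytes = {0x16, 0x03, 0x01, 0x00};
        // Read the way a length-prefixed protocol reads its 4-byte size field
        int assumedResponseSize = ByteBuffer.wrap(tlsBytes).getInt();
        // ~369 million bytes (~352 MiB) -- far more than a 256MB heap allows
        System.out.println("Client would try to allocate " + assumedResponseSize + " bytes");
    }
}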


This even happens with the consumer; in fact it can be reproduced out of 
the box with the kafka-consumer-groups.sh script. All you have to do is 
run that tool as follows:



./kafka-consumer-groups.sh --list --bootstrap-server localhost:9093 
--new-consumer


Error while executing consumer group command Java heap space
java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
    at 

Re: [DISCUSS] KIP-76: Improve Kafka Streams Join Semantics

2016-08-26 Thread Matthias J. Sax
Thanks Bill.

We will need to start a dedicated voting thread, though.

-Matthias

On 08/26/2016 03:10 PM, Bill Bejeck wrote:
> +1
> 
> On Fri, Aug 26, 2016 at 8:12 AM, Matthias J. Sax 
> wrote:
> 
>> If there is no further feedback, I would like to start the voting process.
>>
>> -Matthias
>>
>> On 08/19/2016 12:18 PM, Matthias J. Sax wrote:
>>> Hi,
>>>
>>> we have created KIP-76: Improve Kafka Streams Join Semantics
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 76%3A+Improve+Kafka+Streams+Join+Semantics
>>>
>>> Please give feedback. Thanks.
>>>
>>>
>>> -Matthias
>>>
>>
>>
> 





Re: [DISCUSS] KIP-76: Improve Kafka Streams Join Semantics

2016-08-26 Thread Bill Bejeck
+1

On Fri, Aug 26, 2016 at 8:12 AM, Matthias J. Sax 
wrote:

> If there is no further feedback, I would like to start the voting process.
>
> -Matthias
>
> On 08/19/2016 12:18 PM, Matthias J. Sax wrote:
> > Hi,
> >
> > we have created KIP-76: Improve Kafka Streams Join Semantics
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 76%3A+Improve+Kafka+Streams+Join+Semantics
> >
> > Please give feedback. Thanks.
> >
> >
> > -Matthias
> >
>
>


Re: [VOTE] KIP-63: Unify store and downstream caching in streams

2016-08-26 Thread Bill Bejeck
+1

On Thu, Aug 25, 2016 at 11:45 AM, Matthias J. Sax 
wrote:

> +1
>
> On 08/25/2016 04:22 PM, Damian Guy wrote:
> > +1
> >
> > On Thu, 25 Aug 2016 at 11:57 Eno Thereska 
> wrote:
> >
> >> Hi folks,
> >>
> >> We'd like to start the vote for KIP-63. At this point the Wiki addresses
> >> all previous questions and we believe the PoC is feature-complete.
> >>
> >> Thanks
> >> Eno
> >>
> >
>
>


Re: [DISCUSS] KIP-76: Improve Kafka Streams Join Semantics

2016-08-26 Thread Matthias J. Sax
If there is no further feedback, I would like to start the voting process.

-Matthias

On 08/19/2016 12:18 PM, Matthias J. Sax wrote:
> Hi,
> 
> we have created KIP-76: Improve Kafka Streams Join Semantics
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-76%3A+Improve+Kafka+Streams+Join+Semantics
> 
> Please give feedback. Thanks.
> 
> 
> -Matthias
> 





[jira] [Commented] (KAFKA-2170) 10 LogTest cases failed for file.renameTo failed under windows

2016-08-26 Thread Harald Kirsch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438668#comment-15438668
 ] 

Harald Kirsch commented on KAFKA-2170:
--

Tried to add the patches. Since I had 1757 in already, I only added 
1716 (a one-liner). 1718 was in my code already too (preventing null 
DirectBuffer use).

The test results are exactly as in my last comment, namely that the 
.index.deleted files are not really deleted (with no error message). The next 
time the cleaner runs, it complains about not being able to rename to the 
already existing file (no surprise).

So the remaining problem seems to be that even with a low 
log.segment.delete.delay.ms the files are not deleted. Their counterparts 
without the .deleted suffix, despite being empty, are not deleted either.
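
For context, a hedged sketch (not Kafka source) of the rename-then-delete 
pattern that asyncDeleteSegment in the quoted stack trace goes through; on 
Windows either step can fail while another handle (e.g. a memory-mapped 
index) is still open on the file, leaving the .deleted file behind so that 
the next rename collides with it:

import java.io.File;

public class AsyncDeleteSketch {
    public static void main(String[] args) throws Exception {
        File segment = File.createTempFile("00000000000000000000", ".log");
        File marked = new File(segment.getPath() + ".deleted");
        // Step 1: mark the segment for deletion by renaming it
        if (!segment.renameTo(marked)) {
            throw new IllegalStateException("Failed to change the log file suffix");
        }
        // Step 2: after log.segment.delete.delay.ms, physically delete it
        if (!marked.delete()) {
            System.err.println("Delete failed; a later rename to "
                    + marked.getName() + " would collide");
        }
    }
}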

> 10 LogTest cases failed for  file.renameTo failed under windows
> ---
>
> Key: KAFKA-2170
> URL: https://issues.apache.org/jira/browse/KAFKA-2170
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.1.0
> Environment: Windows
>Reporter: Honghai Chen
>Assignee: Jay Kreps
>
> get latest code from trunk, then run test 
> gradlew  -i core:test --tests kafka.log.LogTest
> Got 10 cases failed for same reason:
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 0
>   at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:259)
>   at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:756)
>   at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:747)
>   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
>   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
>   at scala.collection.immutable.List.foreach(List.scala:318)
>   at kafka.log.Log.deleteOldSegments(Log.scala:514)
>   at kafka.log.LogTest.testAsyncDelete(LogTest.scala:633)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
>   at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at $Proxy2.processTestClass(Unknown Source)
>   at 
> 

[GitHub] kafka pull request #1792: KAFKA-3595: window stores use compact,delete confi...

2016-08-26 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1792

KAFKA-3595: window stores use compact,delete config for changelogs

Changelogs of window stores now configure cleanup.policy=compact,delete, 
with retention.ms set to the window's maintainMs plus 
StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG. 
StoreChangeLogger produces messages with context.timestamp().
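
For illustration, a hedged sketch of the per-topic settings this describes; 
the two retention values below are hypothetical, and the PR computes them 
rather than taking them from the user:

import java.util.Properties;

public class WindowChangelogConfigSketch {
    public static void main(String[] args) {
        long windowMaintainMs = 60_000L;           // hypothetical window retention
        long additionalRetentionMs = 86_400_000L;  // hypothetical additional retention
        Properties changelogConfig = new Properties();
        // Window-store changelogs get both compaction and time-based deletion
        changelogConfig.put("cleanup.policy", "compact,delete");
        changelogConfig.put("retention.ms",
                String.valueOf(windowMaintainMs + additionalRetentionMs));
        System.out.println(changelogConfig);
    }
}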

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-3595

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1792


commit c7a13b09efc3695e1d60b8375fa16993c9f9a06e
Author: Damian Guy 
Date:   2016-08-12T14:42:09Z

window stores use compact,delete config for changelogs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438666#comment-15438666
 ] 

ASF GitHub Bot commented on KAFKA-3595:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1792

KAFKA-3595: window stores use compact,delete config for changelogs

Changelogs of window stores now configure cleanup.policy=compact,delete, 
with retention.ms set to the window's maintainMs plus 
StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG. 
StoreChangeLogger produces messages with context.timestamp().

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-3595

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1792


commit c7a13b09efc3695e1d60b8375fa16993c9f9a06e
Author: Damian Guy 
Date:   2016-08-12T14:42:09Z

window stores use compact,delete config for changelogs




> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Damian Guy
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently, state store replication always goes through a compacted Kafka 
> topic. For some state stores, e.g. JoinWindow, there are no duplicates in 
> the store, so there is not much benefit in using a compacted topic.
> The problem with a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id, which is incrementing 
> all the time and is not bounded, so I am worried the disk space on the 
> broker for that topic will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or the ability to specify a different compaction option for state 
> store replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #846

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Update Kafka configuration documentation to use kafka-configs.…

[cshapi] KAFKA-4052: Allow passing properties file to ProducerPerformance

[cshapi] KAFKA-4070: implement Connect Struct.toString()

[cshapi] MINOR: Improve log message in `ReplicaManager.becomeLeaderOrFollower`

[cshapi] KAFKA-3742: (FIX) Can't run bin/connect-*.sh with -daemon flag

[cshapi] MINOR: doc changes for QueueTimeMs JMX metrics.

[cshapi] MINOR: Update MirrorMaker docs to remove multiple --consumer.config

--
[...truncated 7223 lines...]

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException STARTED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > 
shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete STARTED

kafka.log.LogTest > 
shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete PASSED

kafka.log.LogTest > testReadAtLogGap STARTED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll STARTED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog STARTED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck STARTED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation STARTED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints STARTED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset STARTED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testDeleteOldSegmentsMethod STARTED

kafka.log.LogTest > testDeleteOldSegmentsMethod PASSED

kafka.log.LogTest > shouldDeleteSizeBasedSegments STARTED

kafka.log.LogTest > shouldDeleteSizeBasedSegments PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull STARTED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted STARTED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages STARTED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
STARTED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets STARTED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #1502

2016-08-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3742: (FIX) Can't run bin/connect-*.sh with -daemon flag

[cshapi] MINOR: doc changes for QueueTimeMs JMX metrics.

[cshapi] MINOR: Update MirrorMaker docs to remove multiple --consumer.config

--
[...truncated 4793 lines...]

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue STARTED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.JsonTest > testJsonEncoding STARTED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.consumer.TopicFilterTest > testWhitelists STARTED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson STARTED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists STARTED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED