Jenkins build is back to normal : kafka-trunk-jdk7 #2187

2017-05-12 Thread Apache Jenkins Server
See 




[jira] [Updated] (KAFKA-5167) streams task gets stuck after re-balance due to LockException

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5167:
-
   Resolution: Fixed
Fix Version/s: 0.11.0.0
   0.10.2.2
   Status: Resolved  (was: Patch Available)

> streams task gets stuck after re-balance due to LockException
> -
>
> Key: KAFKA-5167
> URL: https://issues.apache.org/jira/browse/KAFKA-5167
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Narendra Kumar
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.2, 0.11.0.0
>
> Attachments: BugTest.java, DebugTransformer.java, logs.txt
>
>
> During rebalance, the processor node's close() method gets called twice: once 
> from StreamThread.suspendTasksAndState() and once from 
> StreamThread.closeNonAssignedSuspendedTasks(). I have an instance field 
> which I close in the processor's close() method. This instance's close method 
> throws an exception if close is called more than once. Because of this 
> exception, Kafka Streams does not attempt to close the state manager, i.e., 
> task.closeStateManager(true) is never called. When a task moves from one 
> thread to another within the same machine, the task blocks trying to get the lock on 
> the state directory, which is still held by the unclosed state manager, and keeps 
> throwing the warning message below:
> 2017-04-30 12:34:17 WARN  StreamThread:1214 - Could not create task 0_1. Will 
> retry.
> org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock the 
> state directory for task 0_1
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5167) streams task gets stuck after re-balance due to LockException

2017-05-12 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009155#comment-16009155
 ] 

Guozhang Wang commented on KAFKA-5167:
--

Merged https://github.com/apache/kafka/pull/3001 to the 0.10.2 branch. If we 
release a 0.10.2.2 bug-fix version, it will be included.

I think it is OK to let `processor#close()` be idempotent.
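
As an illustration, here is a minimal sketch (not taken from the ticket) of what an 
idempotent close() can look like in a 0.10.2-era Processor; `SomeResource` is a 
hypothetical stand-in for the field whose close() throws when called twice:

{code}
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class IdempotentCloseProcessor implements Processor<String, String> {

    // Hypothetical resource whose close() throws if invoked a second time.
    static final class SomeResource {
        private boolean open = true;
        void use(String key, String value) { /* do work */ }
        void close() {
            if (!open) throw new IllegalStateException("already closed");
            open = false;
        }
    }

    private SomeResource resource;
    private boolean closed = false;

    @Override
    public void init(ProcessorContext context) {
        resource = new SomeResource();
    }

    @Override
    public void process(String key, String value) {
        resource.use(key, value);
    }

    @Override
    public void punctuate(long timestamp) { }

    @Override
    public void close() {
        if (closed) {
            return;            // second call from the rebalance path becomes a no-op
        }
        closed = true;
        resource.close();
    }
}
{code}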

> streams task gets stuck after re-balance due to LockException
> -
>
> Key: KAFKA-5167
> URL: https://issues.apache.org/jira/browse/KAFKA-5167
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Narendra Kumar
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0, 0.10.2.2
>
> Attachments: BugTest.java, DebugTransformer.java, logs.txt
>
>
> During rebalance, the processor node's close() method gets called twice: once 
> from StreamThread.suspendTasksAndState() and once from 
> StreamThread.closeNonAssignedSuspendedTasks(). I have an instance field 
> which I close in the processor's close() method. This instance's close method 
> throws an exception if close is called more than once. Because of this 
> exception, Kafka Streams does not attempt to close the state manager, i.e., 
> task.closeStateManager(true) is never called. When a task moves from one 
> thread to another within the same machine, the task blocks trying to get the lock on 
> the state directory, which is still held by the unclosed state manager, and keeps 
> throwing the warning message below:
> 2017-04-30 12:34:17 WARN  StreamThread:1214 - Could not create task 0_1. Will 
> retry.
> org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock the 
> state directory for task 0_1
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3040: MINOR: Some minor improvements to TxnOffsetCommit ...

2017-05-12 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3040

MINOR: Some minor improvements to TxnOffsetCommit handling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka txn-offset-commit-cleanups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3040.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3040


commit 01a02cbfbbea991cca233e1951a5d7f8677da208
Author: Jason Gustafson 
Date:   2017-05-13T02:58:41Z

MINOR: Some minor cleanups from TxnOffsetCommit patch




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #2186

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Rename InitPidRequest/InitPidResponse to

--
[...truncated 1.66 MB...]
org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsInvalid STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsInvalid PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource 

[jira] [Commented] (KAFKA-4144) Allow per stream/table timestamp extractor

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009132#comment-16009132
 ] 

ASF GitHub Bot commented on KAFKA-4144:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2466


> Allow per stream/table timestamp extractor
> --
>
> Key: KAFKA-4144
> URL: https://issues.apache.org/jira/browse/KAFKA-4144
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Jeyhun Karimov
>  Labels: api, kip
> Fix For: 0.11.0.0
>
>
> At the moment the timestamp extractor is configured via a {{StreamsConfig}} 
> value passed to {{KafkaStreams}}. That means you can only have a single timestamp 
> extractor per app, even though you may be joining multiple streams/tables 
> that require different timestamp extraction methods.
> You should be able to specify a timestamp extractor via 
> {{KStreamBuilder.stream()/table()}}, just like you can specify key and value 
> serdes that override the StreamsConfig defaults.
> KIP: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=68714788
> Specifying a per-stream extractor should only be possible for sources, but 
> not for intermediate topics. For the PAPI we cannot enforce this, but for the DSL, 
> {{through()}} should not allow the user to set a custom extractor. In 
> contrast, with regard to KAFKA-4785, it must internally set an extractor that 
> returns the record's metadata timestamp in order to overwrite the global 
> extractor from {{StreamsConfig}} (i.e., set 
> {{FailOnInvalidTimestampExtractor}}). This change should be done in 
> KAFKA-4785, though.
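
For illustration, a minimal sketch of a custom extractor against the 0.10.x 
{{TimestampExtractor}} interface; the per-source overload shown in the trailing 
comment is hypothetical (the exact signatures added by this KIP may differ):

{code}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record) {
        // Assumption for this sketch: the value itself carries the event time as a Long;
        // otherwise fall back to the record's metadata timestamp.
        if (record.value() instanceof Long) {
            return (Long) record.value();
        }
        return record.timestamp();
    }
}

// Hypothetical usage once a per-source overload exists (signature assumed, not confirmed):
// builder.stream(new PayloadTimestampExtractor(), keySerde, valueSerde, "input-topic");
{code}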



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2466: KAFKA-4144: Allow per stream/table timestamp extra...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2466


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4144) Allow per stream/table timestamp extractor

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4144:
-
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2466
[https://github.com/apache/kafka/pull/2466]

> Allow per stream/table timestamp extractor
> --
>
> Key: KAFKA-4144
> URL: https://issues.apache.org/jira/browse/KAFKA-4144
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Jeyhun Karimov
>  Labels: api, kip
> Fix For: 0.11.0.0
>
>
> At the moment the timestamp extractor is configured via a {{StreamsConfig}} 
> value passed to {{KafkaStreams}}. That means you can only have a single timestamp 
> extractor per app, even though you may be joining multiple streams/tables 
> that require different timestamp extraction methods.
> You should be able to specify a timestamp extractor via 
> {{KStreamBuilder.stream()/table()}}, just like you can specify key and value 
> serdes that override the StreamsConfig defaults.
> KIP: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=68714788
> Specifying a per-stream extractor should only be possible for sources, but 
> not for intermediate topics. For the PAPI we cannot enforce this, but for the DSL, 
> {{through()}} should not allow the user to set a custom extractor. In 
> contrast, with regard to KAFKA-4785, it must internally set an extractor that 
> returns the record's metadata timestamp in order to overwrite the global 
> extractor from {{StreamsConfig}} (i.e., set 
> {{FailOnInvalidTimestampExtractor}}). This change should be done in 
> KAFKA-4785, though.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5231) TransactionCoordinator does not bump epoch when aborting open transactions

2017-05-12 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5231:
---

 Summary: TransactionCoordinator does not bump epoch when aborting 
open transactions
 Key: KAFKA-5231
 URL: https://issues.apache.org/jira/browse/KAFKA-5231
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.11.0.0


When the TransactionCoordinator receives an InitPidRequest while there is an 
open transaction for the transactional id, it should first bump the epoch and 
then abort the open transaction.

Currently, it aborts the open transaction with the existing epoch, so the 
old producer is never fenced.
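
A rough sketch of the intended ordering (hypothetical names, not the coordinator's 
actual code): bump the epoch first so the old producer is fenced, then abort the 
still-open transaction under the new epoch.

{code}
final class TransactionMetadataSketch {
    short producerEpoch;
    boolean hasOngoingTransaction;
}

final class InitPidHandlingSketch {
    void handleInitPid(TransactionMetadataSketch txn) {
        // Fence the old producer before touching the open transaction.
        txn.producerEpoch += 1;
        if (txn.hasOngoingTransaction) {
            // The abort now runs with the already-bumped epoch.
            abortTransaction(txn);
        }
    }

    private void abortTransaction(TransactionMetadataSketch txn) {
        // Write abort markers and transition the transaction to an aborted state.
    }
}
{code}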



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk8 #1516

2017-05-12 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009080#comment-16009080
 ] 

Matthias J. Sax commented on KAFKA-5226:


I see your point; however, it's unclear to me how {{rawRecord}} could be 
{{null}}, as we access {{rawRecord}} successfully to generate the exception 
message that shows up complete in the stack trace. And {{sourceNode}} is only 
set once at startup -- thus, the error would occur for all records, as all 
records go through this code path to get deserialized.

The only scenario I can think of at the moment that would explain this would be if the 
error occurs directly after a rebalance, fails for the first record, and 
another rebalance happens (and after the second rebalance we do initialize 
everything correctly). Can you see from the logs if this scenario could be the 
case? Could you maybe call {{e.printStackTrace()}} so we can get the complete 
stack trace in case it happens again?

Just to be sure, can you also double check that your serde handles {{null}} 
without problems? Do you know if you have records with a {{null}} key, and can you 
confirm whether those could cause the problem? That would help a lot to limit the 
number of possible root causes. Thanks a lot.
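
As a minimal sketch of the {{null}} check being asked about (a stand-in for whatever 
serde the application really uses), a value deserializer that tolerates null payloads 
could look like this:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class NullTolerantStringDeserializer implements Deserializer<String> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public String deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;      // a null key/value (e.g. a tombstone) is legal input
        }
        return new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() { }
}
{code}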

> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 16:21:27,489 

[GitHub] kafka pull request #2997: MINOR: Rename InitPidRequest/InitPidResponse to In...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2997


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5166) Add option "dry run" to Streams application reset tool

2017-05-12 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated KAFKA-5166:
--
Status: Patch Available  (was: In Progress)

> Add option "dry run" to Streams application reset tool
> --
>
> Key: KAFKA-5166
> URL: https://issues.apache.org/jira/browse/KAFKA-5166
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: needs-kip
>
> We want to add an option to the Streams application reset tool that allows for a 
> "dry run", i.e., it only prints which topics would get modified/deleted without 
> actually applying any actions.
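
For illustration, a hedged sketch (hypothetical helper, not the actual StreamsResetter 
code) of how such a dry-run flag typically works: collect the planned actions and 
either print them or apply them.

{code}
import java.util.List;

final class ResetPlanSketch {

    void run(List<String> internalTopics, boolean dryRun) {
        for (String topic : internalTopics) {
            if (dryRun) {
                // Dry run: report what would happen, change nothing.
                System.out.println("Would delete internal topic: " + topic);
            } else {
                deleteTopic(topic);
            }
        }
    }

    private void deleteTopic(String topic) {
        // In the real tool this would call the topic-deletion path.
    }
}
{code}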



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3038: KAFKA-5180: fix transient failure in ControllerInt...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3038


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread Joseph Aliase (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009056#comment-16009056
 ] 

Joseph Aliase commented on KAFKA-5007:
--

[~cuiyang] I have replicated this issue in all versions except 0.8.

> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: Centos 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running on version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, a network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a max partition count of 120, and the number of replica 
> fetcher threads is set to 8.
> Using an internal monitoring tool, we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5180) Transient failure: ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

2017-05-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5180:
---
   Resolution: Fixed
Fix Version/s: 0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3038
[https://github.com/apache/kafka/pull/3038]

> Transient failure: 
> ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch
> 
>
> Key: KAFKA-5180
> URL: https://issues.apache.org/jira/browse/KAFKA-5180
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Onur Karaman
> Fix For: 0.11.0.0
>
>
> {code}
> Stacktrace:
> org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /controller_epoch
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)
>   at kafka.utils.ZkUtils.readData(ZkUtils.scala:652)
>   at kafka.controller.ControllerIntegrationTest.$anonfun$testControllerMoveIncrementsControllerEpoch$2(ControllerIntegrationTest.scala:68)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:808)
>   at kafka.controller.ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch(ControllerIntegrationTest.scala:68)
> {code}
> cc [~onurkaraman]
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/1487/tests



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5180) Transient failure: ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009057#comment-16009057
 ] 

ASF GitHub Bot commented on KAFKA-5180:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3038


> Transient failure: 
> ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch
> 
>
> Key: KAFKA-5180
> URL: https://issues.apache.org/jira/browse/KAFKA-5180
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Onur Karaman
> Fix For: 0.11.0.0
>
>
> {code}
> Stacktrace:
> org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /controller_epoch
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)
>   at kafka.utils.ZkUtils.readData(ZkUtils.scala:652)
>   at kafka.controller.ControllerIntegrationTest.$anonfun$testControllerMoveIncrementsControllerEpoch$2(ControllerIntegrationTest.scala:68)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:808)
>   at kafka.controller.ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch(ControllerIntegrationTest.scala:68)
> {code}
> cc [~onurkaraman]
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/1487/tests



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)

2017-05-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4982:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3004
[https://github.com/apache/kafka/pull/3004]

> Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
> Fix For: 0.11.0.0
>
>
> Metrics in socket-server-metrics, like connection-count, connection-close-rate, 
> etc., are tagged with a networkProcessor id,
> where the id of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL, etc.), the id 
> just keeps incrementing, and when looking at the metrics it is hard to match 
> the metric tag to a listener. 
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it much 
> easier to group the metrics in a tool like Grafana.
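
As a hedged illustration of what the listener tag buys operationally, a small JMX 
lookup is sketched below; the exact ObjectName layout (type=socket-server-metrics with 
both listener and networkProcessor tags) is an assumption based on this description, 
not confirmed by the thread.

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListenerMetricLookup {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Assumed MBean name once the listener tag is added (hypothetical layout).
        ObjectName name = new ObjectName(
                "kafka.server:type=socket-server-metrics,listener=PLAINTEXT,networkProcessor=0");
        Object connectionCount = server.getAttribute(name, "connection-count");
        System.out.println("connection-count = " + connectionCount);
    }
}
{code}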



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3039: MINOR: Close ZooKeeper clients in tests

2017-05-12 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3039

MINOR: Close ZooKeeper clients in tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-close-zkclient

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3039.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3039


commit e62cecfde31093a245ebcf30f1c1578f83925bfb
Author: Rajini Sivaram 
Date:   2017-05-13T01:51:43Z

MINOR: Close ZooKeeper clients in tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread Joseph Aliase (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009052#comment-16009052
 ] 

Joseph Aliase commented on KAFKA-5007:
--

[~cuiyang] You are referring to open file descriptors? It's:
lsof -p 
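
For a JVM-side alternative to periodically running lsof -p <pid>, here is a small 
sketch that tracks the broker's open file descriptor count; it assumes a HotSpot/OpenJDK 
JVM where the platform OperatingSystemMXBean is a com.sun.management.UnixOperatingSystemMXBean.

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCountMonitor {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            System.err.println("Open FD count not available on this JVM/OS");
            return;
        }
        UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
        while (true) {
            // Log the current and maximum file descriptor counts every 10 seconds.
            System.out.printf("open fds: %d / max fds: %d%n",
                    unixOs.getOpenFileDescriptorCount(),
                    unixOs.getMaxFileDescriptorCount());
            Thread.sleep(10_000);
        }
    }
}
{code}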

> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: Centos 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running on version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, a network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a max partition count of 120, and the number of replica 
> fetcher threads is set to 8.
> Using an internal monitoring tool, we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-0.10.2-jdk7 #157

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-4965: set internal.leave.group.on.close to false in 
StreamsConfig

[wangguoz] KAFKA-5198: Synchronize on RocksDbStore#openIterators since it is

--
[...truncated 616.78 KB...]

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetadataFetchOnStaleMetadata PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testInvalidRecords 
STARTED

org.apache.kafka.clients.producer.ProducerRecordTest > testInvalidRecords PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
STARTED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
PASSED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname STARTED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses STARTED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort STARTED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionCalculationNoKnownVersions STARTED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionCalculationNoKnownVersions PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testVersionsToString STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testVersionsToString PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUnsupportedVersionsToString 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUnsupportedVersionsToString 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUnknownApiVersionsToString 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUnknownApiVersionsToString 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionCalculation 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionCalculation 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse STARTED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse PASSED

org.apache.kafka.clients.NetworkClientTest > 
testDisconnectDuringUserMetadataRequest STARTED

org.apache.kafka.clients.NetworkClientTest > 
testDisconnectDuringUserMetadataRequest PASSED

org.apache.kafka.clients.NetworkClientTest > testClose STARTED

org.apache.kafka.clients.NetworkClientTest > testClose PASSED

org.apache.kafka.clients.NetworkClientTest > testLeastLoadedNode STARTED

org.apache.kafka.clients.NetworkClientTest > testLeastLoadedNode PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayConnected 
STARTED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayConnected PASSED

org.apache.kafka.clients.NetworkClientTest > testRequestTimeout STARTED

org.apache.kafka.clients.NetworkClientTest > testRequestTimeout PASSED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithNoBrokerDiscovery STARTED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithNoBrokerDiscovery PASSED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithStaticNodes STARTED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithStaticNodes PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelay STARTED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelay PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode STARTED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayDisconnected 
STARTED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayDisconnected 
PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009040#comment-16009040
 ] 

cuiyang commented on KAFKA-5007:


@Joseph Aliase Could you share the command you used to collect that log?

> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: Centos 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running on version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, a network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a max partition count of 120, and the number of replica 
> fetcher threads is set to 8.
> Using an internal monitoring tool, we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009039#comment-16009039
 ] 

cuiyang commented on KAFKA-5007:


[~joseph.alias...@gmail.com] [~junrao] 
We also encountered this issue 3 times in the last 3 weeks in our prod environment.
Our Kafka cluster versions are 0.9.0.1 and 0.10.1.2; I am sure both versions 
0.9.0.1 and 0.10.1.2 have this issue.
We set the fd limit to 10 with "ulimit -c 10", but it does not work.
When the issue happened, we monitored the fds on the broker, but the count was not high:
2017-05-12-22:39:56 FD_total_num:8153 FD_pair_num:6205 FD_ads_num:1459
2017-05-12-22:40:07 FD_total_num:8157 FD_pair_num:6206 FD_ads_num:1459
2017-05-12-22:40:18 FD_total_num:8155 FD_pair_num:6207 FD_ads_num:1459
2017-05-12-22:40:29 FD_total_num:8158 FD_pair_num:6208 FD_ads_num:1460
2017-05-12-22:40:40 FD_total_num:8160 FD_pair_num:6211 FD_ads_num:1461
2017-05-12-22:40:51 FD_total_num:8162 FD_pair_num:6213 FD_ads_num:1461
2017-05-12-22:41:02 FD_total_num:8172 FD_pair_num:6214 FD_ads_num:1462
2017-05-12-22:41:13 FD_total_num:8167 FD_pair_num:6214 FD_ads_num:1462
2017-05-12-22:41:24 FD_total_num:8172 FD_pair_num:6215 FD_ads_num:1462
2017-05-12-22:41:36 FD_total_num:8172 FD_pair_num:6216 FD_ads_num:1462
2017-05-12-22:41:47 FD_total_num:8169 FD_pair_num:6216 FD_ads_num:1462
2017-05-12-22:41:58 FD_total_num:8193 FD_pair_num:6216 FD_ads_num:1462
2017-05-12-22:42:08 FD_total_num:0 FD_pair_num:0 FD_ads_num:0
2017-05-12-22:42:19 FD_total_num:0 FD_pair_num:0 FD_ads_num:0
2017-05-12-22:42:29 FD_total_num:0 FD_pair_num:0 FD_ads_num:0
On 2017-05-12-22:42:08, FD is 0, because the broker is down.


> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: Centos 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running on version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, a network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a max partition count of 120, and the number of replica 
> fetcher threads is set to 8.
> Using an internal monitoring tool, we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009035#comment-16009035
 ] 

ASF GitHub Bot commented on KAFKA-4982:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3004


> Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
> Fix For: 0.11.0.0
>
>
> Metrics in socket-server-metrics, like connection-count, connection-close-rate, 
> etc., are tagged with a networkProcessor id,
> where the id of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL, etc.), the id 
> just keeps incrementing, and when looking at the metrics it is hard to match 
> the metric tag to a listener. 
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it much 
> easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk7 #2182

2017-05-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #1515

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5216: Fix peekNextKey in cached window/session store iterators

[jason] KAFKA-5160; KIP-98 Broker side support for TxnOffsetCommitRequest

[wangguoz] KAFKA-5198: Synchronize on RocksDbStore#openIterators

--
[...truncated 860.57 KB...]
kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString STARTED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > 

[jira] [Commented] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008998#comment-16008998
 ] 

ASF GitHub Bot commented on KAFKA-5227:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3037


> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[GitHub] kafka pull request #3037: KAFKA-5227: Remove unsafe assertion, make jaas con...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3037


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5227.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3037
[https://github.com/apache/kafka/pull/3037]

> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

Build failed in Jenkins: kafka-0.10.2-jdk7 #156

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5216: Fix peekNextKey in cached window/session store iterators

--
[...truncated 158.85 KB...]
kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault STARTED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled STARTED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault STARTED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration STARTED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol PASSED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade STARTED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle PASSED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments STARTED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle PASSED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK STARTED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot STARTED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerStartupWithSamePort STARTED

kafka.server.ServerStartupTest > testConflictBrokerStartupWithSamePort PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration STARTED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.ServerStartupTest > testBrokerSelfAware STARTED

kafka.server.ServerStartupTest > testBrokerSelfAware PASSED

kafka.server.ProduceRequestTest > testSimpleProduceRequest STARTED

kafka.server.ProduceRequestTest > testSimpleProduceRequest PASSED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest STARTED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping STARTED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > 
testFetchBeyondHighWatermarkReturnEmptyResponse STARTED

kafka.server.ReplicaManagerTest > 
testFetchBeyondHighWatermarkReturnEmptyResponse PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks STARTED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.ReplicaManagerTest > testClearPurgatoryOnBecomingFollower STARTED

kafka.server.ReplicaManagerTest > testClearPurgatoryOnBecomingFollower PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
STARTED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent STARTED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets STARTED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload STARTED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #2181

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5160; KIP-98 Broker side support for TxnOffsetCommitRequest

--
[...truncated 858.84 KB...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest 

[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread Joseph Aliase (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008959#comment-16008959
 ] 

Joseph Aliase commented on KAFKA-5007:
--

[~cmccabe] PID 24739 is the Kafka process. Let me give you some background. Initially, 
when this issue occurred in Prod, the Kafka process died, so I was not able to 
collect open file descriptors.

We knew the issue was triggered by the NIC, so to replicate it in DEV I brought the 
NIC down on the server and saw the descriptors growing. That is the unix 
socket you are referring to.

The issue reoccurred in prod, but this time I was able to collect stats such as lsof 
output; those are the attached files.

I don't see this as an issue between ZooKeeper and Kafka.

It's easy to reproduce the issue on a MacBook. Just start a Kafka cluster on a 
remote system with one broker on your MacBook and ingest some data into a test 
topic. After some time, bring down the internet connection; you will start seeing 
replica fetcher thread error messages and the open file descriptor count rising.
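For anyone who wants to watch the leak from inside the JVM rather than via lsof, a 
minimal sketch is below. It assumes a HotSpot JVM on a Unix platform (the 
com.sun.management API is not a standard JDK interface), and the FdWatcher class is 
purely illustrative, not part of any Kafka patch:

{code}
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

// Runnable that periodically logs the owning process's open file descriptor
// count, so the growth can be correlated with the NIC going down. Start it on
// a daemon thread inside the process being observed.
public class FdWatcher implements Runnable {
    @Override
    public void run() {
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            System.out.printf("open fds: %d / max %d%n",
                    os.getOpenFileDescriptorCount(), os.getMaxFileDescriptorCount());
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
{code}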



> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: CentOS 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, the network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a maximum partition count of 120, and the number of 
> replica fetcher threads is set to 8.
> Using an internal monitoring tool we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.





[jira] [Resolved] (KAFKA-5198) RocksDbStore#openIterators should be synchronized, since it is accessed from multiple threads

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5198.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3000
[https://github.com/apache/kafka/pull/3000]

> RocksDbStore#openIterators should be synchronized, since it is accessed from 
> multiple threads
> -
>
> Key: KAFKA-5198
> URL: https://issues.apache.org/jira/browse/KAFKA-5198
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> Currently, {{RocksDBIterator#close}} accesses {{RocksDbStore#openIterators}} 
> without taking the {{RocksDbStore}} lock.  The only lock 
> {{RocksDBIterator#close}} holds is a lock on the iterator object, which does 
> not help here.  So {{RocksDbStore#openIterators}} should be made 
> synchronized.  Otherwise there is undefined behavior, including 
> {{ConcurrentModificationExceptions}}.
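As a purely illustrative aside, the pattern being asked for here is that every access 
to the shared set of open iterators happens under the store's own lock. A minimal 
sketch of that pattern, using hypothetical SketchStore/SketchIterator names rather 
than the real RocksDbStore internals:

{code}
import java.util.HashSet;
import java.util.Set;

// All access to openIterators is guarded by the store's monitor, so an
// iterator's close() cannot race with the store adding or iterating entries.
public class SketchStore {
    private final Set<SketchIterator> openIterators = new HashSet<>();

    public synchronized SketchIterator newIterator() {
        SketchIterator iter = new SketchIterator(this);
        openIterators.add(iter);
        return iter;
    }

    synchronized void onIteratorClosed(SketchIterator iter) {
        openIterators.remove(iter);  // same lock as newIterator(): no ConcurrentModificationException
    }

    public static final class SketchIterator {
        private final SketchStore owner;

        SketchIterator(SketchStore owner) {
            this.owner = owner;
        }

        public void close() {
            owner.onIteratorClosed(this);  // delegate so the store's lock, not the iterator's, is taken
        }
    }
}
{code}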





[jira] [Commented] (KAFKA-5198) RocksDbStore#openIterators should be synchronized, since it is accessed from multiple threads

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008942#comment-16008942
 ] 

ASF GitHub Bot commented on KAFKA-5198:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3000


> RocksDbStore#openIterators should be synchronized, since it is accessed from 
> multiple threads
> -
>
> Key: KAFKA-5198
> URL: https://issues.apache.org/jira/browse/KAFKA-5198
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Currently, {{RocksDBIterator#close}} accesses {{RocksDbStore#openIterators}} 
> without taking the {{RocksDbStore}} lock.  The only lock 
> {{RocksDBIterator#close}} holds is a lock on the iterator object, which does 
> not help here.  So {{RocksDbStore#openIterators}} should be made 
> synchronized.  Otherwise there is undefined behavior, including 
> {{ConcurrentModificationExceptions}}.





[jira] [Updated] (KAFKA-5180) Transient failure: ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

2017-05-12 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-5180:

Status: Patch Available  (was: In Progress)

> Transient failure: 
> ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch
> 
>
> Key: KAFKA-5180
> URL: https://issues.apache.org/jira/browse/KAFKA-5180
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Onur Karaman
>
> {code}
> Stacktrace org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /controller_epoch  at 
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)   
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)at 
> org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)at 
> kafka.utils.ZkUtils.readData(ZkUtils.scala:652)  at 
> kafka.controller.ControllerIntegrationTest.$anonfun$testControllerMoveIncrementsControllerEpoch$2(ControllerIntegrationTest.scala:68)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:808)at 
> kafka.controller.ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch(ControllerIntegrationTest.scala:68)
> {code}
> cc [~onurkaraman]
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/1487/tests





[jira] [Work started] (KAFKA-5180) Transient failure: ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

2017-05-12 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5180 started by Onur Karaman.
---
> Transient failure: 
> ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch
> 
>
> Key: KAFKA-5180
> URL: https://issues.apache.org/jira/browse/KAFKA-5180
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Onur Karaman
>
> {code}
> Stacktrace org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /controller_epoch  at 
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)   
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)at 
> org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)at 
> kafka.utils.ZkUtils.readData(ZkUtils.scala:652)  at 
> kafka.controller.ControllerIntegrationTest.$anonfun$testControllerMoveIncrementsControllerEpoch$2(ControllerIntegrationTest.scala:68)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:808)at 
> kafka.controller.ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch(ControllerIntegrationTest.scala:68)
> {code}
> cc [~onurkaraman]
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/1487/tests





[GitHub] kafka pull request #3000: KAFKA-5198. RocksDbStore#openIterators should be s...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3000




[jira] [Commented] (KAFKA-5180) Transient failure: ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008940#comment-16008940
 ] 

ASF GitHub Bot commented on KAFKA-5180:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/3038

KAFKA-5180: fix transient failure in 
ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

The tests previously ignored the fact that the controller does not 
atomically create the /controller znode and create/increment the 
/controller_epoch znode.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5180

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3038.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3038


commit 83b471f9b1bd4e4532f280cc83cb3aef67ce51e8
Author: Onur Karaman 
Date:   2017-05-12T23:51:41Z

KAFKA-5180: fix transient failure in 
ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

The tests previously ignored the fact that the controller does not 
atomically create the /controller znode and create/increment the 
/controller_epoch znode.
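To illustrate the race being fixed (this is an editorial sketch, not the actual 
patch): because /controller and /controller_epoch are created in separate steps, a 
test that reads /controller_epoch as soon as a controller exists can hit 
ZkNoNodeException. One way to make such a read robust is to poll for the znode 
before reading it; WaitForZNode below is a hypothetical helper built on the zkclient 
API that appears in the stack trace:

{code}
import org.I0Itec.zkclient.ZkClient;

// Poll until the znode exists, then read it, instead of assuming it was
// created atomically with /controller.
public class WaitForZNode {
    public static String readWhenPresent(ZkClient zk, String path, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (zk.exists(path)) {
                return zk.readData(path);  // safe now: the znode has been created
            }
            Thread.sleep(100);
        }
        throw new AssertionError("znode " + path + " was not created within " + timeoutMs + " ms");
    }
}
{code}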




> Transient failure: 
> ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch
> 
>
> Key: KAFKA-5180
> URL: https://issues.apache.org/jira/browse/KAFKA-5180
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Onur Karaman
>
> {code}
> Stacktrace org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /controller_epoch  at 
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)   
>   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)at 
> org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)at 
> kafka.utils.ZkUtils.readData(ZkUtils.scala:652)  at 
> kafka.controller.ControllerIntegrationTest.$anonfun$testControllerMoveIncrementsControllerEpoch$2(ControllerIntegrationTest.scala:68)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:808)at 
> kafka.controller.ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch(ControllerIntegrationTest.scala:68)
> {code}
> cc [~onurkaraman]
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/1487/tests





[GitHub] kafka pull request #3038: KAFKA-5180: fix transient failure in ControllerInt...

2017-05-12 Thread onurkaraman
GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/3038

KAFKA-5180: fix transient failure in 
ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

The tests previously ignored the fact that the controller does not 
atomically create the /controller znode and create/increment the 
/controller_epoch znode.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5180

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3038.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3038


commit 83b471f9b1bd4e4532f280cc83cb3aef67ce51e8
Author: Onur Karaman 
Date:   2017-05-12T23:51:41Z

KAFKA-5180: fix transient failure in 
ControllerIntegrationTest.testControllerMoveIncrementsControllerEpoch

The tests previously ignored the fact that the controller does not 
atomically create the /controller znode and create/increment the 
/controller_epoch znode.






[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008930#comment-16008930
 ] 

Colin P. McCabe commented on KAFKA-5007:


I tried running a Kafka instance on my laptop that could not communicate with 
ZooKeeper, and did not observe any growth in the number of open UNIX domain 
sockets (or other sockets).

I also noticed that the initial lsof file attached to the ticket does not show 
UNIX domain sockets. Did the issue change?

> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: CentOS 7
> Java 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka is running out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, the network interface went down. The outage continued for 12 hours; 
> eventually all the brokers crashed with a java.io.IOException: Too many open 
> files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a maximum partition count of 120, and the number of 
> replica fetcher threads is set to 8.
> Using an internal monitoring tool we observed that the open socket descriptor count 
> for the broker pid continued to increase while the NIC was down, leading to the 
> open file descriptor error.





[GitHub] kafka pull request #2970: Kafka-5160; KIP-98 Broker side support for TxnOffs...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2970




[jira] [Resolved] (KAFKA-5160) KIP-98 : broker side handling for the TxnOffsetCommitRequest

2017-05-12 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5160.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2970
[https://github.com/apache/kafka/pull/2970]

> KIP-98 : broker side handling for the TxnOffsetCommitRequest
> 
>
> Key: KAFKA-5160
> URL: https://issues.apache.org/jira/browse/KAFKA-5160
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 0.11.0.0
>
>






[jira] [Commented] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Ian Springer (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008885#comment-16008885
 ] 

Ian Springer commented on KAFKA-5226:
-

That's not how I read the stack trace. If my code were causing the NPE, the line 
immediately after "Caused by: java.lang.NullPointerException: null" (i.e. the 
first line of the root-cause stack trace) would be a line from one of my 
classes. I read it as either sourceNode or rawRecord being null. I suppose it's 
also possible that something is truncating the stack trace. My uncaught exception 
handler simply logs the error using Logback.

{code}
@Override
public void uncaughtException(Thread t, Throwable e) {
log.error("Uncaught exception in thread [{}]", t.getName(), e);
}
{code}

I don't think the JDK is trimming the stack trace at all, since the exception 
only occurred twice.
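For completeness, here is how a handler like the one quoted above is typically wired 
into Kafka Streams on the 0.10.2.x API. This is a sketch only, not the poster's 
actual code; the StreamsHandlerExample class and its start method are illustrative 
names:

{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StreamsHandlerExample {
    private static final Logger log = LoggerFactory.getLogger(StreamsHandlerExample.class);

    public static KafkaStreams start(KStreamBuilder builder, Properties props) {
        KafkaStreams streams = new KafkaStreams(builder, props);
        // Register the handler before start(); it is invoked when a stream
        // thread terminates due to an unexpected exception.
        streams.setUncaughtExceptionHandler((t, e) ->
                log.error("Uncaught exception in thread [{}]", t.getName(), e));
        streams.start();
        return streams;
    }
}
{code}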


> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 19
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
> stream-client 

Re: [DISCUSS] KIP-136: Add Listener name and Security Protocol name to SelectorMetrics tags

2017-05-12 Thread Edoardo Comar
Thanks Ismael, following the discussion in the PR thread, I've updated the 
implementation to only use the listener tag, and I've updated the KIP to match 
the implementation and to describe the compatibility choice of not tagging the 
Yammer metric.
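For readers of the archive, a hedged sketch of what "only use the listener tag" 
means in practice. The tag names and the SelectorMetricTags helper below are 
illustrative assumptions, not the exact strings or classes in the merged patch:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Build the tag map attached to broker-side selector metrics: the listener
// name identifies the endpoint, so no separate security-protocol tag is added.
public class SelectorMetricTags {
    public static Map<String, String> forListener(String listenerName, String networkProcessorId) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("listener", listenerName);              // e.g. "INTERNAL", "REPLICATION", "PLAINTEXT"
        tags.put("networkProcessor", networkProcessorId);
        return tags;
    }
}
{code}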

--
Edoardo Comar
IBM MessageHub
eco...@uk.ibm.com
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Edoardo Comar/UK/IBM@IBMGB
To: dev@kafka.apache.org
Date:   10/05/2017 17:25
Subject:Re: [DISCUSS] KIP-136: Add Listener name and Security 
Protocol name to SelectorMetrics tags



Great feedback, Ismael, will update KIP and PR
thanks
--
Edoardo Comar
IBM MessageHub
eco...@uk.ibm.com
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Ismael Juma 
To: dev@kafka.apache.org
Date:   10/05/2017 11:01
Subject:Re: [DISCUSS] KIP-136: Add Listener name and Security 
Protocol name to SelectorMetrics tags
Sent by:isma...@gmail.com



Hi Edoardo,

I would go with what's in the KIP as it's more general. Listener names and
security protocols are different concepts (even if the default value is 
the
same for compatibility reasons) and it's a bit odd to expect generic tools
that process Kafka metrics to have to fallback to the security protocol if
listener name is missing.

Ismael

On Wed, May 10, 2017 at 2:31 PM, Edoardo Comar  wrote:

> While coding I thought of a slight deviation from the KIP screenshots.
> That is, adding the listener tag only if it differs from the protocol,
> so in most common cases we won't see PLAINTEXT-PLAINTEXT, SSL-SSL, etc.
>
> I should have offered this option while discussing the KIP.
> Any late opinions?
>
> Edo
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
>
> From:   Edoardo Comar/UK/IBM@IBMGB
> To: dev@kafka.apache.org
> Date:   03/05/2017 03:00
> Subject:Re: [DISCUSS] KIP-136: Add Listener name and Security
> Protocol name to SelectorMetrics tags
>
>
>
> Thanks Ismael,
> I meant for this change to only apply to brokers, not clients, I'm going
> to update the KIP
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
>
> From:   Ismael Juma 
> To: dev@kafka.apache.org
> Date:   02/05/2017 20:20
> Subject:Re: [DISCUSS] KIP-136: Add Listener name and Security
> Protocol name to SelectorMetrics tags
> Sent by:isma...@gmail.com
>
>
>
> Edoardo,
>
> Are you planning to do this only in the broker? Listener names only 
exist
> at the broker level. Also, clients only have a single security protocol,
> so
> maybe we should do this for brokers only. In that case, the 
compatibility
> impact is also lower because more users rely on the Yammer metrics
> instead.
>
> I would suggest you start the voting thread after clarifying if this
> change
> only applies at the broker level.
>
> Ismael
>
> On Tue, Apr 25, 2017 at 1:19 PM, Ismael Juma  wrote:
>
> > Thanks for the KIP. I think it makes sense to have those tags. My only
> > question is regarding the compatibility impact. We don't have a good
> > compatibility story when it comes to adding tags to existing metrics
> since
> > the JmxReporter adds the tags to the object name.
> >
> > Ismael
> >
> > On Thu, Mar 30, 2017 at 4:51 PM, Edoardo Comar 
> wrote:
> >
> >> Hi all,
> >>
> >> We created KIP-136: Add Listener name and Security Protocol name to
> >> SelectorMetrics tags
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-136%
> >> 
3A+Add+Listener+name+and+Security+Protocol+name+to+SelectorMetrics+tags
> >>
> >> Please help review the KIP. Your feedback is appreciated!
> >>
> >> cheers,
> >> Edo
> >> --
> >> Edoardo Comar
> >> IBM MessageHub
> >> eco...@uk.ibm.com
> >> IBM UK Ltd, Hursley Park, SO21 2JN
> >>
> >>
> >
> >
>
>
>
>
>
>

[jira] [Created] (KAFKA-5230) Recommended values for Connect transformations contain the wrong class name

2017-05-12 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5230:


 Summary: Recommended values for Connect transformations contain 
the wrong class name
 Key: KAFKA-5230
 URL: https://issues.apache.org/jira/browse/KAFKA-5230
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.2.1, 0.10.2.0
Reporter: Ewen Cheslack-Postava


If you try to validate a connector config with a transformation, it includes 
suggested values for that config:

{code}
curl 
'http://localhost:8083/connector-plugins/org.apache.kafka.connect.file.FileStreamSourceConnector/config/validate'
 -X PUT -H 'Content-Type: application/json' -H 'Accept: */*' --data-binary 
'{"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector","name":"blah-blah","transforms":"something","file":"/tmp/blah.what","topic":"test-topic","tasks.max":1}'
{code}

However, some of those recommendations do not contain the correct value:

{code}
{
  "definition": {
"name": "transforms.something.type",
"type": "CLASS",
"required": true,
"default_value": null,
"importance": "HIGH",
"documentation": "Class for the 'something' transformation.",
"group": "Transforms: something",
"width": "LONG",
"display_name": "Transformation type for something",
"dependents": [],
"order": 0
  },
  "value": {
"name": "transforms.something.type",
"value": null,
"recommended_values": [
  "org.apache.kafka.connect.transforms.ExtractField.Key",
  "org.apache.kafka.connect.transforms.ExtractField.Value",
  "org.apache.kafka.connect.transforms.HoistField.Key",
  "org.apache.kafka.connect.transforms.HoistField.Value",
  "org.apache.kafka.connect.transforms.InsertField.Key",
  "org.apache.kafka.connect.transforms.InsertField.Value",
  "org.apache.kafka.connect.transforms.MaskField.Key",
  "org.apache.kafka.connect.transforms.MaskField.Value",
  "org.apache.kafka.connect.transforms.RegexRouter",
  "org.apache.kafka.connect.transforms.ReplaceField.Key",
  "org.apache.kafka.connect.transforms.ReplaceField.Value",
  "org.apache.kafka.connect.transforms.SetSchemaMetadata.Key",
  "org.apache.kafka.connect.transforms.SetSchemaMetadata.Value",
  "org.apache.kafka.connect.transforms.TimestampRouter",
  "org.apache.kafka.connect.transforms.ValueToKey"
],
"errors": [
  "Missing required configuration \"transforms.something.type\" which 
has no default value.",
  "Invalid value null for configuration transforms.something.type: Not 
a Transformation"
],
"visible": true
  }
{code}

In particular, nested classes for Key and Value transformations are being 
returned as, e.g.

org.apache.kafka.connect.transforms.ReplaceField.Key

instead of

org.apache.kafka.connect.transforms.ReplaceField$Key

It seems this is the difference between the canonical name and the binary (runtime) class name.
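A small, self-contained illustration of that difference (NestedNameDemo is a 
hypothetical class, not Connect code): for a nested class the canonical name uses a 
dot, while the loadable binary name uses a dollar sign.

{code}
public class NestedNameDemo {
    public static class Key {}

    public static void main(String[] args) throws Exception {
        System.out.println(Key.class.getCanonicalName()); // NestedNameDemo.Key  (what the REST response shows)
        System.out.println(Key.class.getName());          // NestedNameDemo$Key  (what Class.forName needs)

        Class.forName(Key.class.getName());               // loads fine
        try {
            Class.forName(Key.class.getCanonicalName());  // fails for nested classes
        } catch (ClassNotFoundException expected) {
            System.out.println("dotted form is not loadable: " + expected.getMessage());
        }
    }
}
{code}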





[jira] [Commented] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008850#comment-16008850
 ] 

ASF GitHub Bot commented on KAFKA-5227:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3037

KAFKA-5227: Remove unsafe assertion, make jaas config safer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3037.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3037


commit 054d9006b2fb0893c30ebdc21d36f934110cef91
Author: Rajini Sivaram 
Date:   2017-05-12T22:23:03Z

KAFKA-5227: Remove unsafe assertion, make jaas config safer




> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> 

[jira] [Assigned] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-5227:
-

Assignee: Rajini Sivaram

> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[GitHub] kafka pull request #3016: KAFKA-5216 fix error on peekNextKey in cached wind...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3016




[jira] [Updated] (KAFKA-5216) Cached Session/Window store may return error on iterator.peekNextKey()

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5216:
-
   Resolution: Fixed
Fix Version/s: 0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3016
[https://github.com/apache/kafka/pull/3016]

> Cached Session/Window store may return error on iterator.peekNextKey()
> --
>
> Key: KAFKA-5216
> URL: https://issues.apache.org/jira/browse/KAFKA-5216
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> {{AbstractMergedSortedCacheStoreIterator}} uses the wrong cache key 
> deserializer in {{peekNextKey}}. This may result in errors or incorrect keys 
> returned from {{peekNextKey}} on a {{WindowStoreIterator}} or 
> {{SessionStoreIterator}} for a cached window or session store.
> CachingKeyValueStore does not seem to be affected.





[jira] [Work started] (KAFKA-5215) Small JavaDoc fix for AdminClient#describeTopics

2017-05-12 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5215 started by Colin P. McCabe.
--
> Small JavaDoc fix for AdminClient#describeTopics
> 
>
> Key: KAFKA-5215
> URL: https://issues.apache.org/jira/browse/KAFKA-5215
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Small JavaDoc fix for AdminClient#describeTopics





[jira] [Work started] (KAFKA-5198) RocksDbStore#openIterators should be synchronized, since it is accessed from multiple threads

2017-05-12 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5198 started by Colin P. McCabe.
--
> RocksDbStore#openIterators should be synchronized, since it is accessed from 
> multiple threads
> -
>
> Key: KAFKA-5198
> URL: https://issues.apache.org/jira/browse/KAFKA-5198
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Currently, {{RocksDBIterator#close}} accesses {{RocksDbStore#openIterators}} 
> without taking the {{RocksDbStore}} lock.  The only lock 
> {{RocksDBIterator#close}} holds is a lock on the iterator object, which does 
> not help here.  So {{RocksDbStore#openIterators}} should be made 
> synchronized.  Otherwise there is undefined behavior, including 
> {{ConcurrentModificationExceptions}}.





[jira] [Work started] (KAFKA-5214) KafkaAdminClient#apiVersions should return a public class

2017-05-12 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5214 started by Colin P. McCabe.
--
> KafkaAdminClient#apiVersions should return a public class
> -
>
> Key: KAFKA-5214
> URL: https://issues.apache.org/jira/browse/KAFKA-5214
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> KafkaAdminClient#apiVersions should not refer to internal classes like 
> ApiKeys, NodeApiVersions, etc.  Instead, we should have stable public classes 
> to represent these things in the API.





[jira] [Created] (KAFKA-5229) Reflections logs excessive warnings when scanning classpaths

2017-05-12 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5229:


 Summary: Reflections logs excessive warnings when scanning 
classpaths
 Key: KAFKA-5229
 URL: https://issues.apache.org/jira/browse/KAFKA-5229
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.2.1, 0.10.2.0, 0.10.1.1, 0.10.1.0, 0.10.0.1, 0.10.0.0
Reporter: Ewen Cheslack-Postava
Priority: Minor


We use Reflections to scan the classpath for available plugins (connectors, 
converters, transformations), but when doing so Reflections tends to generate a 
lot of log noise like this:

{code}
[2017-05-12 14:59:48,224] WARN could not get type for name 
org.jboss.netty.channel.SimpleChannelHandler from any class loader 
(org.reflections.Reflections:396)
org.reflections.ReflectionsException: could not get type for name 
org.jboss.netty.channel.SimpleChannelHandler
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
at org.reflections.Reflections.<init>(Reflections.java:126)
at 
org.apache.kafka.connect.runtime.PluginDiscovery.scanClasspathForPlugins(PluginDiscovery.java:68)
at 
org.apache.kafka.connect.runtime.AbstractHerder$1.run(AbstractHerder.java:391)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: 
org.jboss.netty.channel.SimpleChannelHandler
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
... 5 more
{code}

Despite being benign, these warnings worry users, especially first-time users.

We should either a) see if we can get Reflections to turn off these specific 
warnings via some config, or b) make Reflections log only at levels above WARN 
by default in our log4j config. Option (b) is probably safe since we should only 
be seeing these at startup, and I don't think I've seen any actual issue logged 
at WARN.
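
For option (b), the change would be roughly a one-line addition to the Connect log4j configuration (the logger name is taken from the log output above; the exact file and level are assumptions to be verified):

{code}
# Sketch for option (b): raise the threshold for the Reflections logger so its
# startup warnings are suppressed while real errors still appear.
log4j.logger.org.reflections=ERROR
{code}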



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5130) Change InterBrokerSendThread to use a Queue per broker

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008783#comment-16008783
 ] 

ASF GitHub Bot commented on KAFKA-5130:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2964


> Change InterBrokerSendThread to use a Queue per broker
> --
>
> Key: KAFKA-5130
> URL: https://issues.apache.org/jira/browse/KAFKA-5130
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> Change the {{InterBrokerSendThread}} to use a queue per broker and only 
> attempt to send to brokers that are ready
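
Roughly, the idea is something like the following sketch (types and names are assumed for illustration, not the actual {{InterBrokerSendThread}} code):

{code}
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of the "queue per broker" idea: buffer requests per destination and
// drain only those destinations that are currently ready.
class PerBrokerQueues<N, R> {
    private final Map<N, Queue<R>> queues = new HashMap<>();

    void enqueue(N broker, R request) {
        queues.computeIfAbsent(broker, b -> new ArrayDeque<>()).add(request);
    }

    // Return the next buffered request for a broker only if that broker is ready.
    R pollIfReady(N broker, boolean brokerReady) {
        Queue<R> q = queues.get(broker);
        return (brokerReady && q != null) ? q.poll() : null;
    }
}
{code}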



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5130) Change InterBrokerSendThread to use a Queue per broker

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-5130:


Assignee: Guozhang Wang

> Change InterBrokerSendThread to use a Queue per broker
> --
>
> Key: KAFKA-5130
> URL: https://issues.apache.org/jira/browse/KAFKA-5130
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> Change the {{InterBrokerSendThread}} to use a queue per broker and only 
> attempt to send to brokers that are ready



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2964: KAFKA-5130: Refactor TC In-memory Cache

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2964


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5130) Change InterBrokerSendThread to use a Queue per broker

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5130.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2964
[https://github.com/apache/kafka/pull/2964]

> Change InterBrokerSendThread to use a Queue per broker
> --
>
> Key: KAFKA-5130
> URL: https://issues.apache.org/jira/browse/KAFKA-5130
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
> Fix For: 0.11.0.0
>
>
> Change the {{InterBrokerSendThread}} to use a queue per broker and only 
> attempt to send to brokers that are ready



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008704#comment-16008704
 ] 

Matthias J. Sax commented on KAFKA-5226:


Thanks for sharing. The code raising the exception is as follows:
{noformat}
try {
    key = sourceNode.deserializeKey(rawRecord.topic(), rawRecord.key());
} catch (Exception e) {
    throw new StreamsException(format("Failed to deserialize key for record. topic=%s, partition=%d, offset=%d",
                                      rawRecord.topic(), rawRecord.partition(), rawRecord.offset()), e);
}
{noformat}

Since we see a proper exception message, we know that {{rawRecord}} is not 
{{null}}; thus, the NPE can only occur if {{sourceNode}} is {{null}} or the call 
to {{deserializeKey()}} throws. If {{sourceNode}} were {{null}} it would fail for 
every input record, so I want to rule that out. Furthermore, {{deserializeKey()}} 
is a one-liner: {{return keyDeserializer.deserialize(topic, data);}} Thus, I 
assume the NPE comes from your code within {{deserialize}}. I am not familiar 
with how your code works in detail, but can it handle {{null}} properly? Do you 
have {{null}} keys in your input topic for some (very rare) messages that could 
trigger an NPE?
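
For reference, a deserializer that has to tolerate {{null}} input usually guards for it explicitly; a minimal sketch of the general pattern (not taken from the actual application code):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

// Kafka hands raw bytes through unchanged, so a null key/value arrives here as
// null and should be returned as null rather than dereferenced.
class NullSafeStringDeserializer implements Deserializer<String> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public String deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;  // avoid an NPE on null keys / tombstones
        }
        return new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() { }
}
{code}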

> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> 

[jira] [Commented] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-05-12 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008699#comment-16008699
 ] 

Colin P. McCabe commented on KAFKA-5007:


{code}
 java 24739 cyclone 1216u unix 0x880035b35000 0t0 42713601 socket
{code}

If I'm reading this right, this is a UNIX domain socket that is being leaked.  
Can you run {{netstat --unix}} and see if the sockets show up in that output?  
Also, I assume that 24739 was the Kafka process ID?

A UNIX domain socket leak in Java code could indicate a problem with how the 
code uses NIO.

> Kafka Replica Fetcher Thread- Resource Leak
> ---
>
> Key: KAFKA-5007
> URL: https://issues.apache.org/jira/browse/KAFKA-5007
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.10.0.0, 0.10.1.1, 0.10.2.0
> Environment: Centos 7
> Jave 8
>Reporter: Joseph Aliase
>Priority: Critical
>  Labels: reliability
> Attachments: jstack-kafka.out, jstack-zoo.out, lsofkafka.txt, 
> lsofzookeeper.txt
>
>
> Kafka runs out of open file descriptors when the system network interface is 
> down.
> Issue description:
> We have a Kafka cluster of 5 nodes running version 0.10.1.1. The open file 
> descriptor limit for the account running Kafka is set to 10.
> During an upgrade, the network interface went down. The outage continued for 12 
> hours; eventually all the brokers crashed with a java.io.IOException: Too many 
> open files error.
> We repeated the test in a lower environment and observed that the open socket 
> count keeps increasing while the NIC is down.
> We have around 13 topics with a max partition count of 120, and the number of 
> replica fetcher threads is set to 8.
> Using an internal monitoring tool, we observed that the open socket descriptor 
> count for the broker pid continued to increase while the NIC was down, leading 
> to the open file descriptor error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5228) Revisit Streams DSL JavaDocs

2017-05-12 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5228:
--

 Summary: Revisit Streams DSL JavaDocs
 Key: KAFKA-5228
 URL: https://issues.apache.org/jira/browse/KAFKA-5228
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.2.1
Reporter: Matthias J. Sax
Priority: Trivial


We got some user feedback that it is sometimes not clear from the JavaDocs 
whether the provided {{Serdes}} are for input or output records.

For example:
{noformat}
...
 * @param keySerde key serdes for materializing this stream.
 * If not specified the default serdes defined in the 
configs will be used
 * @param valSerde value serdes for materializing this stream,
 * if not specified the default serdes defined in the 
configs will be used
...
 KStream<K, VR> join(final KTable<K, VT> table,
 final ValueJoiner<? super V, ? super VT, ? extends VR> joiner,
 final Serde<K> keySerde,
 final Serde<V> valSerde);
{noformat}

The phrase "for this stream" means the input stream. But it is rather subtle. 
We should revisit the complete JavaDocs and rephrase the Serde parameter 
description if required. We should also rename the parameter names (in the 
example about, maybe from {{keySerde}} to {{inputKStreamKeySerde}})
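
One possible direction, purely as an illustration of the proposed renaming (the final names and wording would of course need discussion):

{noformat}
...
 * @param inputKStreamKeySerde   key serde used to materialize *this* (input) stream;
 *                               if not specified the default key serde from the configs is used
 * @param inputKStreamValueSerde value serde used to materialize *this* (input) stream;
 *                               if not specified the default value serde from the configs is used
...
{noformat}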



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: synchronous request response using kafka

2017-05-12 Thread Colin McCabe
Hi Sanjay,

Can you be a little clearer what you are trying to achieve?  If you want
to build an RPC system where one entity makes a remote procedure call to
another, you might consider using something like CORBA, Apache Thrift,
gRPC, etc.

best,
Colin


On Fri, May 12, 2017, at 07:55, Banerjee, Sanjay wrote:
> Can someone please share some thoughts on whether we can do a synchronous call
>  (request/response) using Kafka, similar to JMS?
> 
> Thanks
> Sanjay
> 913-221-9164
> 


[jira] [Commented] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008641#comment-16008641
 ] 

Xavier Léauté commented on KAFKA-5227:
--

[~rsivaram] this happened here 
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/3831/, so less than two 
hours ago. My branch was not rebased on the latest, but Jenkins would have 
merged it into trunk before running the tests.

> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Work started] (KAFKA-5186) Avoid expensive initialization of producer state when upgrading

2017-05-12 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5186 started by Jason Gustafson.
--
> Avoid expensive initialization of producer state when upgrading
> ---
>
> Key: KAFKA-5186
> URL: https://issues.apache.org/jira/browse/KAFKA-5186
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> Currently the producer state is always loaded upon broker initialization. If 
> we don't find a snapshot file to load from, then we scan the log segments 
> from the beginning to rebuild the state. Of course, when users upgrade to the 
> new version, there will be no snapshot file, so the upgrade could be quite 
> intensive. It would be nice to avoid this by assuming instead that the 
> absence of a snapshot file means that the producer state should start clean 
> and we can avoid the expensive scanning.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008635#comment-16008635
 ] 

Rajini Sivaram commented on KAFKA-5227:
---

[~xvrl] Was this failure on a PR that was at the latest level of Kafka? There 
was a fix for this last night; did this failure occur after a rebase today?

> SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> -
>
> Key: KAFKA-5227
> URL: https://issues.apache.org/jira/browse/KAFKA-5227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>
> Error Message
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
> {code}
> Stacktrace
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
>   at 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[jira] [Work started] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

2017-05-12 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5211 started by Jiangjie Qin.
---
> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -
>
> Key: KAFKA-5211
> URL: https://issues.apache.org/jira/browse/KAFKA-5211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>  Labels: clients, consumer
> Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw 
> an exception and block on that corrupted record. In the latest trunk this 
> behavior has changed to skip the corrupted record (which is the old consumer 
> behavior). With KIP-98, skipping corrupted messages would be a little 
> dangerous as the message could be a control message for a transaction. We 
> should fix the issue to let the KafkaConsumer block on the corrupted messages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3995) Split the ProducerBatch and resend when received RecordTooLargeException

2017-05-12 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008634#comment-16008634
 ] 

Jiangjie Qin commented on KAFKA-3995:
-

[~ijuma] done.

> Split the ProducerBatch and resend when received RecordTooLargeException
> 
>
> Key: KAFKA-3995
> URL: https://issues.apache.org/jira/browse/KAFKA-3995
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>
> We recently saw a few cases where RecordTooLargeException was thrown because 
> the compressed message sent by KafkaProducer exceeded the max message size.
> The root cause of this issue is that the compressor estimates the batch size 
> using an estimated compression ratio based on heuristic compression ratio 
> statistics. This does not quite work for traffic with highly variable 
> compression ratios. 
> For example, suppose the batch size is set to 1MB and the max message size is 
> 1MB. Initially the producer is sending messages (each message is 1MB) to 
> topic_1, whose data can be compressed to 1/10 of the original size. After a 
> while the estimated compression ratio in the compressor will be trained to 
> 1/10 and the producer would put 10 messages into one batch. Now the producer 
> starts to send messages (each message is also 1MB) to topic_2, whose messages 
> can only be compressed to 1/5 of the original size. The producer would still 
> use 1/10 as the estimated compression ratio and put 10 messages into a batch. 
> That batch would be 2 MB after compression, which exceeds the maximum message 
> size. In this case the user does not have many options other than resending 
> everything or closing the producer if they care about ordering.
> This is especially an issue for services like MirrorMaker, whose producer is 
> shared by many different topics.
> To solve this issue, we can probably add a configuration 
> "enable.compression.ratio.estimation" to the producer. When this configuration 
> is set to false, we stop estimating the compressed size and instead close the 
> batch once the uncompressed bytes in the batch reach the batch size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk7 #2177

2017-05-12 Thread Apache Jenkins Server
See 




KafkaStreams reports RUNNING even though all StreamThreads has crashed

2017-05-12 Thread Andreas Gabrielsson
Hi All,

We recently implemented a health check for a Kafka Streams based application. 
The health check is simply checking the state of Kafka Streams by calling 
KafkaStreams.state(). It reports healthy if it’s not in PENDING_SHUTDOWN or 
NOT_RUNNING states. 
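
For reference, the check boils down to a few lines around KafkaStreams.state(); a simplified sketch (class and method names here are illustrative, not the exact code):

{code}
import org.apache.kafka.streams.KafkaStreams;

// Simplified sketch of the health check described above: report healthy unless
// Kafka Streams is shutting down or stopped.
class StreamsHealthCheck {
    private final KafkaStreams streams;

    StreamsHealthCheck(KafkaStreams streams) {
        this.streams = streams;
    }

    boolean isHealthy() {
        KafkaStreams.State state = streams.state();
        return state != KafkaStreams.State.PENDING_SHUTDOWN
                && state != KafkaStreams.State.NOT_RUNNING;
    }
}
{code}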

We truly appreciate being able to easily check the state of Kafka Streams, but 
to our surprise we noticed that KafkaStreams.state() returns RUNNING even though 
all StreamThreads have crashed and reached the NOT_RUNNING state. Is this 
intended behaviour or is it a bug? Semantically it seems weird to me that 
KafkaStreams would say it's RUNNING when it is in fact not consuming anything, 
since all underlying worker threads have crashed. 

If this is intended behaviour I would appreciate an explanation of why that is 
the case. Also, in that case, how could I determine whether consumption from 
Kafka has crashed? 

If this is not intended behaviour, how fast could I expect it to be fixed? I 
wouldn't mind fixing it myself, but I'm not sure whether this is considered 
trivial or big enough to require a JIRA. Also, if I were to implement a fix I'd 
like your input on what would be a reasonable solution. Just from inspecting the 
code I have an idea, but I'm not sure I understand all the implications, so I'd 
be happy to hear your thoughts first. 

Thanks in advance,
Andreas Gabrielsson



[jira] [Created] (KAFKA-5227) SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2017-05-12 Thread JIRA
Xavier Léauté created KAFKA-5227:


 Summary: 
SaslScramSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
 Key: KAFKA-5227
 URL: https://issues.apache.org/jira/browse/KAFKA-5227
 Project: Kafka
  Issue Type: Sub-task
Reporter: Xavier Léauté


Error Message

{code}
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of 
the JAAS login file failed.
{code}

Stacktrace

{code}
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of 
the JAAS login file failed.
at kafka.server.KafkaServer.initZk(KafkaServer.scala:322)
at kafka.server.KafkaServer.startup(KafkaServer.scala:190)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:91)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:64)
at 
kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
at 
kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
at 
kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 

[jira] [Commented] (KAFKA-5196) LogCleaner should be transaction-aware

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008602#comment-16008602
 ] 

ASF GitHub Bot commented on KAFKA-5196:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3008


> LogCleaner should be transaction-aware
> --
>
> Key: KAFKA-5196
> URL: https://issues.apache.org/jira/browse/KAFKA-5196
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> In accordance with the KIP-98 design, the log cleaner should be aware of 
> transaction markers. This means the following:
> 1. Cleaning should be restricted below the last stable offset (which ensures 
> that only decided data is considered when cleaning).
> 2. Records from aborted transactions are removed immediately. 
> 3. Removal of the COMMIT/ABORT markers must be delayed to ensure that they 
> are not missed by concurrent readers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3008: KAFKA-5196: Make LogCleaner transaction-aware

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3008


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5196) LogCleaner should be transaction-aware

2017-05-12 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5196.

Resolution: Fixed

Issue resolved by pull request 3008
[https://github.com/apache/kafka/pull/3008]

> LogCleaner should be transaction-aware
> --
>
> Key: KAFKA-5196
> URL: https://issues.apache.org/jira/browse/KAFKA-5196
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> In accordance with the KIP-98 design, the log cleaner should be aware of 
> transaction markers. This means the following:
> 1. Cleaning should be restricted below the last stable offset (which ensures 
> that only decided data is considered when cleaning).
> 2. Records from aborted transactions are removed immediately. 
> 3. Removal of the COMMIT/ABORT markers must be delayed to ensure that they 
> are not missed by concurrent readers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Ian Springer (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008543#comment-16008543
 ] 

Ian Springer commented on KAFKA-5226:
-

The key data type is a String (UUID), and the value data type is a POJO 
serialized to SMILE via Jackson. Here's how we configure the serdes for the 
streams:


{code}
Serde<MyPojo> myPojoSerde = new SmileSerializer<>(MyPojo.class).toSerde();
KStreamBuilder builder = new KStreamBuilder();
KStream<String, MyPojo> myPojoStream = builder.stream(Serdes.String(), 
myPojoSerde, TOPIC_NAME_PATTERN);
{code}

And here is the SmileSerializer class:


{code}
package foo.serialization;

import foo.util.ObjectMapperHolder;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

import java.io.Closeable;
import java.io.IOException;
import java.util.Map;

public class SmileSerializer<T>
implements Closeable, Serializer<T>, Deserializer<T> {

private static final ObjectMapper MAPPER = 
ObjectMapperHolder.getInstance().getSmileObjectMapper();

private Class<T> type;

@SuppressWarnings("unused")
public SmileSerializer() {
}

public SmileSerializer(Class<T> type) {
this.type = type;
}

@Override
public void configure(Map<String, ?> config, boolean isKey) {
return;
}

@Override
public byte[] serialize(String topicId, T data) {
try {
return MAPPER.writeValueAsBytes(data);
} catch (JsonProcessingException e) {
throw new IllegalArgumentException(e);
}
}

@Override
public T deserialize(String s, byte[] bytes) {
try {
return MAPPER.readValue(bytes, type);
} catch (IOException e) {
throw new IllegalArgumentException(e);
}
}

@Override
public void close() {
return;
}

public Serde<T> toSerde() {
return Serdes.serdeFrom(this, this);
}

}
{code}


> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> 

[jira] [Commented] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)

2017-05-12 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008535#comment-16008535
 ] 

Ismael Juma commented on KAFKA-4982:


Thanks! Can you please send an update to the KIP thread as well?

> Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
> Fix For: 0.11.0.0
>
>
> Metrics in socket-server-metrics, like connection-count, connection-close-rate, 
> etc., are tagged with networkProcessor:<id>,
> where the id of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL, etc.), the id 
> just keeps incrementing, and when looking at the metrics it is hard to match 
> the metric tag to a listener. 
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it much 
> easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk7 #2176

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5132: abort long running transactions

--
[...truncated 850.79 KB...]

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate STARTED

kafka.controller.ControllerFailoverTest > testMetadataUpdate SKIPPED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED


[jira] [Commented] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)

2017-05-12 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008499#comment-16008499
 ] 

Edoardo Comar commented on KAFKA-4982:
--

Thanks [~ijuma]. Following the discussion in the PR thread, I've updated the 
implementation to only use the listener tag,
and I've updated the KIP to match the implementation and to describe the 
compatibility choice of not tagging the Yammer metric.



> Add listener tag to socket-server-metrics.connection-... metrics (KIP-136)
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
> Fix For: 0.11.0.0
>
>
> Metrics in socket-server-metrics, like connection-count, connection-close-rate, 
> etc., are tagged with networkProcessor:<id>,
> where the id of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL, etc.), the id 
> just keeps incrementing, and when looking at the metrics it is hard to match 
> the metric tag to a listener. 
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it much 
> easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk8 #1511

2017-05-12 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #2236: KAFKA-4517: Remove deprecated shell script

2017-05-12 Thread jeffwidman
Github user jeffwidman closed the pull request at:

https://github.com/apache/kafka/pull/2236


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Work started] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-05-12 Thread Mickael Maison (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3356 started by Mickael Maison.
-
> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008463#comment-16008463
 ] 

ASF GitHub Bot commented on KAFKA-3356:
---

GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3036

KAFKA-3356: Remove ConsumerOffsetChecker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-3356

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3036


commit c1ec0258e0d1cc76672eece84f2ed0a5aca2d67a
Author: Mickael Maison 
Date:   2017-05-12T17:45:03Z

KAFKA-3356: Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

Deleted both the script and tool




> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3036: KAFKA-3356: Remove ConsumerOffsetChecker

2017-05-12 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3036

KAFKA-3356: Remove ConsumerOffsetChecker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-3356

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3036


commit c1ec0258e0d1cc76672eece84f2ed0a5aca2d67a
Author: Mickael Maison 
Date:   2017-05-12T17:45:03Z

KAFKA-3356: Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

Deleted both the script and tool




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3514) Stream timestamp computation needs some further thoughts

2017-05-12 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008459#comment-16008459
 ] 

Matthias J. Sax commented on KAFKA-3514:


This JIRA also relates to this user question: 
http://search-hadoop.com/m/uyzND1iKZJN1yz0E5=Order+of+punctuate+and+process+in+a+stream+processor

We should consider scheduled punctuations when advancing "stream time" -- not 
just advance "stream time" coarse-grained by record timestamps only.

> Stream timestamp computation needs some further thoughts
> 
>
> Key: KAFKA-3514
> URL: https://issues.apache.org/jira/browse/KAFKA-3514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.11.0.0
>
>
> Our current stream task's timestamp is used for punctuate function as well as 
> selecting which stream to process next (i.e. best effort stream 
> synchronization). And it is defined as the smallest timestamp over all 
> partitions in the task's partition group. This results in two unintuitive 
> corner cases:
> 1) observing a late-arriving record would keep that stream's timestamp low for 
> a period of time, and hence that stream keeps being processed until the late 
> record is reached. For example, take two partitions within the same task, 
> annotated by their timestamps:
> {code}
> Stream A: 5, 6, 7, 8, 9, 1, 10
> {code}
> {code}
> Stream B: 2, 3, 4, 5
> {code}
> The late-arriving record with timestamp "1" will cause stream A to be selected 
> continuously in the thread loop, i.e. the messages with timestamps 5, 6, 7, 8, 
> 9, until the record itself is dequeued and processed; then stream B will be 
> selected, starting with timestamp 2.
> 2) an empty buffered partition will cause its timestamp not to be advanced, and 
> hence the task timestamp as well, since it is the smallest among all 
> partitions. This may not be as severe a problem as 1) above, though.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2937: MINOR: Fix consumer and producer to actually suppo...

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2937


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2957: KAFKA-5132: abort long running transactions

2017-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2957


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5132) Abort long running transactions

2017-05-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008442#comment-16008442
 ] 

ASF GitHub Bot commented on KAFKA-5132:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2957


> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.11.0.0
>
>
> We need to abort any transactions that have been running longer than the txn 
> timeout
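
A minimal sketch of the expiration check described above (an assumed, simplified 
stand-in; the actual transaction coordinator code differs): any ongoing transaction 
whose elapsed time exceeds its timeout is selected for abort.

{code}
import java.util.*;
import java.util.stream.Collectors;

public class AbortExpiredTxnsSketch {

    // hypothetical holder for the per-transaction state needed by the check
    static class TxnMetadata {
        final String transactionalId;
        final long startTimeMs;
        final long timeoutMs;

        TxnMetadata(String transactionalId, long startTimeMs, long timeoutMs) {
            this.transactionalId = transactionalId;
            this.startTimeMs = startTimeMs;
            this.timeoutMs = timeoutMs;
        }
    }

    // transactions that have been running longer than their timeout
    static List<TxnMetadata> expired(Collection<TxnMetadata> ongoing, long nowMs) {
        return ongoing.stream()
                .filter(t -> nowMs - t.startTimeMs > t.timeoutMs)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        List<TxnMetadata> ongoing = Arrays.asList(
                new TxnMetadata("txn-1", now - 120_000, 60_000), // exceeded -> abort
                new TxnMetadata("txn-2", now - 10_000, 60_000)); // still within timeout
        expired(ongoing, now).forEach(t -> System.out.println("abort " + t.transactionalId));
    }
}
{code}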



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5132) Abort long running transactions

2017-05-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5132:
-
   Resolution: Fixed
Fix Version/s: 0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2957
[https://github.com/apache/kafka/pull/2957]

> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.11.0.0
>
>
> We need to abort any transactions that have been running longer than the txn 
> timeout



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008427#comment-16008427
 ] 

Matthias J. Sax commented on KAFKA-5226:


[~ian.springer] Thanks for reporting this. It's hard to say from the 
stack trace. What are your data types for key and value, and what serdes do you use?
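
A minimal sketch of the configuration the question above is aiming at (assuming 
String keys and values with the built-in String serde; the reporter's actual types 
and serdes are unknown). In 0.10.2 the serdes are set on StreamsConfig and must match 
the data on the topic, otherwise deserialization in the source node can fail.

{code}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class SerdeConfigSketch {
    static Properties streamsConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "evergage-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // default serdes used by source nodes unless overridden per operator
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        return props;
    }
}
{code}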

> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 19
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 16:21:27,489 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 16:21:27,493 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> INFO  2017-05-10 16:21:30,584 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions 

Build failed in Jenkins: kafka-trunk-jdk7 #2175

2017-05-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Set min isr to avoid race condition in ReplicationBytesIn

--
[...truncated 1.79 MB...]
org.apache.kafka.connect.runtime.WorkerConnectorTest > testFailureIsFinalState 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testInitializeFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testInitializeFailure 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndPause 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndPause 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndShutdown 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndShutdown 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > 
testTransitionStartedToStarted STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > 
testTransitionStartedToStarted PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testShutdownFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testShutdownFailure 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED


[jira] [Updated] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

2017-05-12 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-5211:

Fix Version/s: 0.11.0.0

> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -
>
> Key: KAFKA-5211
> URL: https://issues.apache.org/jira/browse/KAFKA-5211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>  Labels: clients, consumer
> Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw 
> an exception and block on that corrupted record. In the latest trunk this 
> behavior has changed to skip the corrupted record (which is the old consumer 
> behavior). With KIP-98, skipping corrupted messages would be a little 
> dangerous as the message could be a control message for a transaction. We 
> should fix the issue to let the KafkaConsumer block on the corrupted messages.
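
A minimal sketch of what the blocking behaviour means for an application 
(illustration only, under the assumption that poll() surfaces the corruption as a 
KafkaException): the consumer stays positioned on the corrupted record, so the caller 
must decide explicitly what to do instead of the record being skipped silently.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;

public class CorruptRecordHandlingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "corrupt-record-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            while (true) {
                try {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                    records.forEach(r -> System.out.println("offset " + r.offset()));
                } catch (KafkaException e) {
                    // With the blocking behaviour the position does not advance, so the
                    // same record would fail again on the next poll(); stop (or seek past
                    // it deliberately) rather than lose it silently.
                    System.err.println("corrupted record encountered: " + e.getMessage());
                    break;
                }
            }
        }
    }
}
{code}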



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5204) Connect needs to validate Connector type during instantiation

2017-05-12 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008389#comment-16008389
 ] 

Ewen Cheslack-Postava commented on KAFKA-5204:
--

Note that if we enforce this check more aggressively, we will break some 
connectors that happen to work today, because various validations happen to 
succeed even when inheriting directly from Connector. We might want to consider 
detecting this case, logging a warning, and only enforcing the check strictly in 
a future version so folks have a chance to clean up their connectors.

> Connect needs to validate Connector type during instantiation
> -
>
> Key: KAFKA-5204
> URL: https://issues.apache.org/jira/browse/KAFKA-5204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
> Fix For: 0.11.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently Connect will accept and instantiate connectors that extend the 
> {{Connector}} abstract class directly rather than one of its subclasses, 
> {{SourceConnector}} or {{SinkConnector}}. 
> However, in distributed mode as well as in REST, Connect assumes in a few 
> places that there are only two types of connectors, sinks or sources. Based 
> on this assumption it checks the type dynamically, and if it is not a sink it 
> treats it as a source (by constructing the corresponding configs). 
> A connector that implements only the {{Connector}} abstract class does not 
> fit into this classification. Therefore a validation needs to take place 
> early, during the instantiation of the {{Connector}} object. 
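
A minimal sketch of the validation proposed above (an illustrative helper, not the 
actual Connect code path; per the comment above, one could also log a warning first 
and only fail in a later version): a Connector instance that is neither a 
SourceConnector nor a SinkConnector is rejected at instantiation time instead of 
being treated as a source later on.

{code}
import org.apache.kafka.connect.connector.Connector;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.source.SourceConnector;

public class ConnectorTypeValidationSketch {

    static Connector instantiate(Class<? extends Connector> connectorClass) throws Exception {
        Connector connector = connectorClass.getDeclaredConstructor().newInstance();
        // validate early: Connect later assumes every connector is either a sink or a source
        if (!(connector instanceof SourceConnector) && !(connector instanceof SinkConnector)) {
            throw new ConnectException("Connector " + connectorClass.getName()
                    + " must extend SourceConnector or SinkConnector, not Connector directly");
        }
        return connector;
    }
}
{code}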



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3995) Split the ProducerBatch and resend when received RecordTooLargeException

2017-05-12 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3995:

Summary: Split the ProducerBatch and resend when received 
RecordTooLargeException  (was: Add a new configuration 
"enable.compression.ratio.estimation" to the producer config)

> Split the ProducerBatch and resend when received RecordTooLargeException
> 
>
> Key: KAFKA-3995
> URL: https://issues.apache.org/jira/browse/KAFKA-3995
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>
> We recently see a few cases where RecordTooLargeException is thrown because 
> the compressed message sent by KafkaProducer exceeded the max message size.
> The root cause of this issue is because the compressor is estimating the 
> batch size using an estimated compression ratio based on heuristic 
> compression ratio statistics. This does not quite work for the traffic with 
> highly variable compression ratios. 
> For example, suppose the batch size is set to 1MB and the max message size is 
> 1MB. Initially the producer is sending messages (each message is 1MB) to topic_1 
> whose data can be compressed to 1/10 of the original size. After a while the 
> estimated compression ratio in the compressor will be trained to 1/10 and the 
> producer would put 10 messages into one batch. Now the producer starts to 
> send messages (each message is also 1MB) to topic_2 whose message can only be 
> compress to 1/5 of the original size. The producer would still use 1/10 as 
> the estimated compression ratio and put 10 messages into a batch. That batch 
> would be 2 MB after compression which exceeds the maximum message size. In 
> this case the user does not have many options other than resending everything 
> or closing the producer if they care about ordering.
> This is especially an issue for services like MirrorMaker whose producer is 
> shared by many different topics.
> To solve this issue, we can probably add a configuration 
> "enable.compression.ratio.estimation" to the producer. So when this 
> configuration is set to false, we stop estimating the compressed size but 
> will close the batch once the uncompressed bytes in the batch reaches the 
> batch size.
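
The arithmetic from the example above as a minimal sketch (plain numbers only, not 
the producer's actual estimator): with an estimated ratio of 1/10 the producer packs 
ten 1MB records into a batch, and if the new topic only compresses to 1/5 the 
compressed batch comes out at 2MB and exceeds the 1MB limit.

{code}
public class CompressionEstimateSketch {
    public static void main(String[] args) {
        long maxMessageBytes = 1_000_000L;   // broker-side limit in the example
        long messageBytes = 1_000_000L;      // uncompressed size of each record
        double estimatedRatio = 0.1;         // ratio learned from topic_1 traffic
        double actualRatio = 0.2;            // real ratio for topic_2 traffic

        // number of records the producer believes will fit under the limit
        int recordsPerBatch = (int) (maxMessageBytes / (messageBytes * estimatedRatio)); // 10

        long compressedBatchBytes = (long) (recordsPerBatch * messageBytes * actualRatio); // 2,000,000
        System.out.println("records per batch (estimated): " + recordsPerBatch);
        System.out.println("compressed batch bytes (actual): " + compressedBatchBytes
                + ", exceeds limit: " + (compressedBatchBytes > maxMessageBytes));
    }
}
{code}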



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties

2017-05-12 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008338#comment-16008338
 ] 

Bharat Viswanadham edited comment on KAFKA-5225 at 5/12/17 4:39 PM:


Matthias, I will contribute to this Jira. 


was (Author: bharatviswa):
Matthias will contribute to this Jira. 

> StreamsResetter doesn't allow custom Consumer properties
> 
>
> Key: KAFKA-5225
> URL: https://issues.apache.org/jira/browse/KAFKA-5225
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Assignee: Bharat Viswanadham
>
> The StreamsResetter doesn't let the user pass in any configurations to the 
> embedded consumer. This is a problem in secured environments because you 
> can't configure the embedded consumer to talk to the cluster. The tool should 
> take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a 
> config file to be passed in the command line for such operations.
> cc [~mjsax]
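
A minimal sketch of the suggested approach (assuming a hypothetical config-file 
option in the style of ConsumerGroupCommand; this is not the current StreamsResetter 
code): properties loaded from a file are handed to the embedded consumer, so security 
settings can reach the cluster.

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ResetterConfigSketch {

    // load e.g. security.protocol, sasl.jaas.config, ssl.* from a user-supplied file
    static Properties loadCommandConfig(String path, String bootstrapServers) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        // values given on the command line still take precedence
        props.put("bootstrap.servers", bootstrapServers);
        return props;
    }
}
{code}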



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3035: DOC: clarifications on message committed to the lo...

2017-05-12 Thread edoardocomar
GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/3035

DOC: clarifications on message committed to the log

based on conversations with @vahidhashemian @rajinisivaram @apurvam 

The docs didn't make clear that what gets committed and what does not may 
depend on the producer acks.
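
A minimal sketch illustrating the point of the clarification (example values only): 
with acks=all a record is only acknowledged once the full in-sync replica set has it, 
whereas acks=1 returns after the leader alone has written it, so what counts as 
"committed" in the docs depends on this setting.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksExampleSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for the full ISR; "1" waits for the leader only

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // block on the ack so the stronger or weaker guarantee is observable here
            producer.send(new ProducerRecord<>("some-topic", "key", "value")).get();
        }
    }
}
{code}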

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka 
DOC-clarification-on-committed

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3035.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3035


commit f95d01e26837b789b7791cf69790be9def5599be
Author: Edoardo Comar 
Date:   2017-05-12T11:12:37Z

DOC: clarifications on message committed to the log

based on conversations with @vahidhashemian @rajinisivaram @apurvam




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Ian Springer (JIRA)
Ian Springer created KAFKA-5226:
---

 Summary: NullPointerException (NPE) in 
SourceNodeRecordDeserializer.deserialize
 Key: KAFKA-5226
 URL: https://issues.apache.org/jira/browse/KAFKA-5226
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.1
 Environment: 64-bit Amazon Linux, JDK8
Reporter: Ian Springer


I saw the following NPE in our Kafka Streams app, which has 3 nodes running on 
3 separate machines. Out of hundreds of messages processed, the NPE only 
occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
I'm hoping the Kafka Streams team can guess the cause based on the stack trace. 
If I can provide any additional details about our app, please let me know.
 

{code}
INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
Kafka version : 0.10.2.1
INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
Kafka commitId : e89bffd6b2eff799
INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
Starting beans in phase 0
INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from CREATED to RUNNING.
INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started Kafka 
Stream process
INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
null) for group evergage-app.
INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [] for group evergage-app
INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 18
INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to REBALANCING.
INFO  2017-05-10 02:58:27,729 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing state stores
INFO  2017-05-10 02:58:27,731 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing processor nodes of the topology
INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to RUNNING.

[14 hours pass...]

INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [us.app.Trigger-0] for group 
evergage-app
INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 19
INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to REBALANCING.
INFO  2017-05-10 16:21:27,489 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing processor nodes of the topology
INFO  2017-05-10 16:21:27,493 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to RUNNING.
INFO  2017-05-10 16:21:30,584 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [us.app.Trigger-0] for group 
evergage-app
INFO  2017-05-10 16:21:30,584 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 16:21:30,588 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 16:21:30,593 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 20
INFO  2017-05-10 16:21:30,594 

[jira] [Assigned] (KAFKA-5220) Application Reset Tool does not work with SASL

2017-05-12 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-5220:
-

Assignee: (was: Bharat Viswanadham)

> Application Reset Tool does not work with SASL
> --
>
> Key: KAFKA-5220
> URL: https://issues.apache.org/jira/browse/KAFKA-5220
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, tools
>Reporter: Matthias J. Sax
>
> Resetting an application with SASL enabled fails with
> {noformat}
> ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: 
> -1 rack: null))
> {noformat}
> We would need to allow additional configuration options that get picked up by 
> the internally used ZK client and KafkaConsumer.
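
A minimal sketch of the client-side settings the tool would need to accept (a 
SASL/PLAIN example with placeholder credentials; the actual mechanism and JAAS entry 
depend on the cluster). These are standard consumer configs, not options the reset 
tool exposes today.

{code}
import java.util.Properties;

public class SaslClientConfigSketch {
    static Properties saslConsumerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";");
        return props;
    }
}
{code}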



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-12 Thread Ian Springer (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Springer updated KAFKA-5226:

Description: 
I saw the following NPE in our Kafka Streams app, which has 3 nodes running on 
3 separate machines. Out of hundreds of messages processed, the NPE only 
occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
I'm hoping the Kafka Streams team can guess the cause based on the stack trace. 
If I can provide any additional details about our app, please let me know.
 

{code}
INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
Kafka version : 0.10.2.1
INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
Kafka commitId : e89bffd6b2eff799
INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
Starting beans in phase 0
INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from CREATED to RUNNING.
INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started Kafka 
Stream process
INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
null) for group evergage-app.
INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [] for group evergage-app
INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 18
INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to REBALANCING.
INFO  2017-05-10 02:58:27,729 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing state stores
INFO  2017-05-10 02:58:27,731 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing processor nodes of the topology
INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to RUNNING.

[14 hours pass...]

INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [us.app.Trigger-0] for group 
evergage-app
INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 19
INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to REBALANCING.
INFO  2017-05-10 16:21:27,489 o.a.kafka.streams.processor.internals.StreamTask  
task [0_0] Initializing processor nodes of the topology
INFO  2017-05-10 16:21:27,493 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from REBALANCING to RUNNING.
INFO  2017-05-10 16:21:30,584 o.a.k.c.consumer.internals.ConsumerCoordinator  
Revoking previously assigned partitions [us.app.Trigger-0] for group 
evergage-app
INFO  2017-05-10 16:21:30,584 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
transition from RUNNING to REBALANCING.
INFO  2017-05-10 16:21:30,588 o.a.k.c.consumer.internals.AbstractCoordinator  
(Re-)joining group evergage-app
INFO  2017-05-10 16:21:30,593 o.a.k.c.consumer.internals.AbstractCoordinator  
Successfully joined group evergage-app with generation 20
INFO  2017-05-10 16:21:30,594 o.a.k.c.consumer.internals.ConsumerCoordinator  
Setting newly assigned partitions [demo.retail.Trigger-0, us.app.Trigger-0] for 
group evergage-app
INFO  2017-05-10 16:21:30,594 org.apache.kafka.streams.KafkaStreams  
stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] 

[jira] [Commented] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties

2017-05-12 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16008338#comment-16008338
 ] 

Bharat Viswanadham commented on KAFKA-5225:
---

Matthias will contribute to this Jira. 

> StreamsResetter doesn't allow custom Consumer properties
> 
>
> Key: KAFKA-5225
> URL: https://issues.apache.org/jira/browse/KAFKA-5225
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Assignee: Bharat Viswanadham
>
> The StreamsResetter doesn't let the user pass in any configurations to the 
> embedded consumer. This is a problem in secured environments because you 
> can't configure the embedded consumer to talk to the cluster. The tool should 
> take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a 
> config file to be passed in the command line for such operations.
> cc [~mjsax]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties

2017-05-12 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-5225:
-

Assignee: Bharat Viswanadham

> StreamsResetter doesn't allow custom Consumer properties
> 
>
> Key: KAFKA-5225
> URL: https://issues.apache.org/jira/browse/KAFKA-5225
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Assignee: Bharat Viswanadham
>
> The StreamsResetter doesn't let the user pass in any configurations to the 
> embedded consumer. This is a problem in secured environments because you 
> can't configure the embedded consumer to talk to the cluster. The tool should 
> take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a 
> config file to be passed in the command line for such operations.
> cc [~mjsax]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5220) Application Reset Tool does not work with SASL

2017-05-12 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-5220:
-

Assignee: Bharat Viswanadham

> Application Reset Tool does not work with SASL
> --
>
> Key: KAFKA-5220
> URL: https://issues.apache.org/jira/browse/KAFKA-5220
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, tools
>Reporter: Matthias J. Sax
>Assignee: Bharat Viswanadham
>
> Resetting an application with SASL enabled fails with
> {noformat}
> ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: 
> -1 rack: null))
> {noformat}
> We would need to allow additional configuration options that get picked up by 
> the internally used ZK client and KafkaConsumer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5220) Application Reset Tool does not work with SASL

2017-05-12 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5220.

Resolution: Duplicate

> Application Reset Tool does not work with SASL
> --
>
> Key: KAFKA-5220
> URL: https://issues.apache.org/jira/browse/KAFKA-5220
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, tools
>Reporter: Matthias J. Sax
>
> Resetting an application with SASL enabled fails with
> {noformat}
> ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: 
> -1 rack: null))
> {noformat}
> We would need to allow additional configuration options that get picked up by 
> the internally used ZK client and KafkaConsumer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

