Build failed in Jenkins: kafka-0.10.0-jdk7 #166

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3185: [Streams] Added Kafka Streams Application Reset Tool

[ismael] KAFKA-3851; Automate release notes and include links to upgrade notes

--
[...truncated 843 lines...]
kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ProduceRequestTest > testSimpleProduceRequest PASSED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SaslSslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslSslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslSslConsumerTest > testListTopics PASSED

kafka.api.SaslSslConsumerTest > testAutoCommitOnRebalance PASSED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install JDK_1_7U51_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
 

[jira] [Updated] (KAFKA-3987) Allow configuration of the hash algorithm used by the LogCleaner's offset map

2016-07-27 Thread Luciano Afranllie (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luciano Afranllie updated KAFKA-3987:
-
Reviewer: Shikhar Bhushan
  Status: Patch Available  (was: Open)

> Allow configuration of the hash algorithm used by the LogCleaner's offset map
> -
>
> Key: KAFKA-3987
> URL: https://issues.apache.org/jira/browse/KAFKA-3987
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Luciano Afranllie
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> In order to be able to do deployments of Kafka that are FIPS 140-2 
> (https://en.wikipedia.org/wiki/FIPS_140-2) compliant, one of the requirements 
> is not to use MD5.
> Kafka currently uses MD5 to hash message keys in the offset map (SkimpyOffsetMap) 
> used by the log cleaner.
> The idea is to make this hash algorithm configurable to something allowed 
> by FIPS via a new configuration property.
> The property could be named "log.cleaner.hash.algorithm" with a default value 
> of "MD5", and it would be used in the constructor of CleanerConfig.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1445

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3851; Automate release notes and include links to upgrade notes

--
[...truncated 11689 lines...]

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
shouldOnlyIterateOpenSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
shouldOnlyIterateOpenSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter PASSED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization STARTED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnWindowStoreWhenIsKVStore STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnWindowStoreWhenIsKVStore PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testAddRemove 
STARTED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testAddRemove 
PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldNotGetValuesFromOtherStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldNotGetValuesFromOtherStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldReturnEmptyIteratorIfNoData STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldReturnEmptyIteratorIfNoData PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFindValueForKeyWhenMultiStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFindValueForKeyWhenMultiStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFetchValuesFromWindowStore STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFetchValuesFromWindowStore PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnValueIfExists STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnValueIfExists PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldNotGetValuesFromOtherStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldNotGetValuesFromOtherStores PASSED


[jira] [Updated] (KAFKA-3190) KafkaProducer should not invoke callback in send()

2016-07-27 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3190:
-
Fix Version/s: (was: 0,10.0.2)
   0.10.0.2

> KafkaProducer should not invoke callback in send()
> --
>
> Key: KAFKA-3190
> URL: https://issues.apache.org/jira/browse/KAFKA-3190
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.0.2
>
>
> Currently KafkaProducer will invoke callback.onCompletion() if it receives an 
> ApiException during send(). This breaks the guarantee that callbacks will be 
> invoked in order. It seems the ApiException in send() only comes from metadata 
> refresh. If so, we can probably simply throw it instead of invoking the 
> callback.
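For context, a minimal producer sketch (the topic name and bootstrap address are made up) showing the callback path this ticket is about: with the current behaviour, an ApiException raised inside send() can trigger onCompletion() immediately, ahead of callbacks for records sent earlier.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackOrderingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 3; i++) {
                final int seq = i;
                producer.send(new ProducerRecord<>("demo-topic", "key-" + seq, "value-" + seq),
                    new Callback() {
                        @Override
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            // Users generally expect these callbacks in send order; an
                            // ApiException surfaced inside send() currently invokes this
                            // callback right away, breaking that expectation.
                            System.out.println("completed #" + seq
                                + (exception == null ? "" : " with error: " + exception));
                        }
                    });
            }
        }
    }
}
{code}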



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1444

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3185: Allow users to cleanup internal Kafka Streams data

--
[...truncated 2629 lines...]
kafka.api.PlaintextConsumerTest > testCommitMetadata FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback STARTED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets STARTED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets FAILED
java.lang.OutOfMemoryError: unable to create new native thread

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at 
kafka.server.KafkaRequestHandlerPool$$anonfun$1.apply$mcVI$sp(KafkaRequestHandler.scala:84)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
kafka.server.KafkaRequestHandlerPool.(KafkaRequestHandler.scala:81)
at kafka.server.KafkaServer.startup(KafkaServer.scala:221)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:120)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:83)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:83)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:83)
at 
kafka.api.ConsumerBounceTest.kafka$api$IntegrationTestHarness$$super$setUp(ConsumerBounceTest.scala:34)
at 
kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:58)
at kafka.api.ConsumerBounceTest.setUp(ConsumerBounceTest.scala:63)

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer STARTED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > 

[jira] [Created] (KAFKA-4002) task.open() should be invoked even when 0 partitions are assigned to the task.

2016-07-27 Thread Liquan Pei (JIRA)
Liquan Pei created KAFKA-4002:
-

 Summary: task.open() should be invoked even when 0 partitions 
are assigned to the task.
 Key: KAFKA-4002
 URL: https://issues.apache.org/jira/browse/KAFKA-4002
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.0
Reporter: Liquan Pei
Assignee: Liquan Pei
 Fix For: 0.11.0.0


 When 0 partitions are assigned to a task, the open() call on the task is 
not invoked, but put() is still called later. The put() call with empty data 
hands control back to the task so that it can continue working on the buffered 
data.

If task.open() is not invoked when 0 partitions are assigned, connector 
developers need to do some special handling for this case, i.e. not call any 
method on the writer, to avoid null pointer exceptions. To make connector 
developers' lives easier, it is probably better to change the behavior so the 
call is made even when 0 partitions are assigned.
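A hedged sketch of the special handling described above; GuardedSinkTask and its StringBuilder "writer" are made-up stand-ins for a real connector, assuming the standard SinkTask API (open/put/flush).

{code}
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Hypothetical sink task illustrating the defensive check connector developers
// currently need: if open() is never called (0 partitions assigned), the writer
// stays null, yet put() may still be invoked with an empty batch.
public class GuardedSinkTask extends SinkTask {
    private StringBuilder writer;  // stands in for a real external writer

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        // configuration only; the writer is created in open()
    }

    @Override
    public void open(Collection<TopicPartition> partitions) {
        writer = new StringBuilder();
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        if (writer == null) {
            // open() was never invoked because no partitions were assigned;
            // without this guard the calls below would throw a NullPointerException.
            return;
        }
        for (SinkRecord record : records)
            writer.append(record.value()).append('\n');
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // flush the external writer here
    }

    @Override
    public void stop() {
        writer = null;
    }
}
{code}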



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3185) Allow users to cleanup internal data

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396622#comment-15396622
 ] 

ASF GitHub Bot commented on KAFKA-3185:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/1671


> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.0.1
>
>
> Currently the internal data is managed completely by the Kafka Streams framework 
> and users cannot clean it up actively. This results in a bad out-of-the-box 
> user experience, especially when running demo programs, since it leaves 
> internal data (changelog topics, RocksDB files, etc.) that needs to be cleaned 
> up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> call to clean up this internal data programmatically.
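A minimal usage sketch, assuming the cleanup call lands roughly as proposed (the method that shipped in 0.10.0.1 is spelled cleanUp()); the topic names, addresses, and timing below are made up.

{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class CleanupSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cleanup-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");

        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder, props);

        // Wipe this instance's local state (e.g. RocksDB files under state.dir)
        // before starting, so a demo run always begins from a clean slate.
        streams.cleanUp();
        streams.start();

        Thread.sleep(60_000L);
        streams.close();
    }
}
{code}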



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1671: KAFKA-3185: [Streams] Added Kafka Streams Applicat...

2016-07-27 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/1671


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3851) Add references to important installation/upgrade notes to release notes

2016-07-27 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3851:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1670
[https://github.com/apache/kafka/pull/1670]

> Add references to important installation/upgrade notes to release notes 
> 
>
> Key: KAFKA-3851
> URL: https://issues.apache.org/jira/browse/KAFKA-3851
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.0.1
>
>
> Today we use the release notes exactly as exported from JIRA (see "Prepare 
> release notes" on 
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process) and then 
> rely on users to dig through our documentation (none of which starts with 
> release-specific notes) to find upgrade and installation notes.
> Especially for folks who aren't intimately familiar with our docs, the 
> information is *very* easy to miss. Ideally we could automate the release 
> notes process a bit and then have them automatically modified to *at least* 
> include links at the very top (it'd be nice if we had some other header 
> material added as well since the automatically generated release notes are 
> very barebones...). Even better would be if we could pull in the 
> version-specific installation/upgrade notes directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3851) Add references to important installation/upgrade notes to release notes

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396599#comment-15396599
 ] 

ASF GitHub Bot commented on KAFKA-3851:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1670


> Add references to important installation/upgrade notes to release notes 
> 
>
> Key: KAFKA-3851
> URL: https://issues.apache.org/jira/browse/KAFKA-3851
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.0.1
>
>
> Today we use the release notes exactly as exported from JIRA (see "Prepare 
> release notes" on 
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process) and then 
> rely on users to dig through our documentation (none of which starts with 
> release-specific notes) to find upgrade and installation notes.
> Especially for folks who aren't intimately familiar with our docs, the 
> information is *very* easy to miss. Ideally we could automate the release 
> notes process a bit and then have them automatically modified to *at least* 
> include links at the very top (it'd be nice if we had some other header 
> material added as well since the automatically generated release notes are 
> very barebones...). Even better would be if we could pull in the 
> version-specific installation/upgrade notes directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1670: KAFKA-3851: Automate release notes and include lin...

2016-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1670


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Synchronized block in StreamTask

2016-07-27 Thread Guozhang Wang
Hello Pierre,

Thanks for pointing this out. Good question actually, I think it is safe to
remove the synchronization block.

Mind filing a one-liner PR?

Guozhang

On Wed, Jul 27, 2016 at 2:48 AM, Pierre Coquentin <
pierre.coquen...@gmail.com> wrote:

> Hi,
>
> I have a simple technical question about Kafka Streams and I don't know if I
> should use us...@kafka.apache.org or this mailing list.
> In the class org.apache.kafka.streams.processor.internals.StreamTask, the
> method "process" uses a synchronized block, but I don't see why: the method
> doesn't seem to be called in a multithreaded environment, as the task is created
> and owned by a specific thread,
> org.apache.kafka.streams.processor.internals.StreamThread.
> Am I missing something? Or, since the API is unstable, is this class meant to
> be used by several threads in the future?
>
> Regards,
>
> Pierre
>



-- 
-- Guozhang
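For illustration only, a before/after sketch of the kind of one-line change being discussed; StreamTaskSketch and its counter are hypothetical stand-ins, not the real StreamTask.process() implementation.

```
// Hypothetical stand-in for StreamTask; not the actual Kafka Streams class.
public class StreamTaskSketch {
    private int processed = 0;

    // Current shape: the work is wrapped in a synchronized block even though the
    // task is created and driven by a single StreamThread.
    public void processWithLock() {
        synchronized (this) {
            processed++;
        }
    }

    // Proposed shape: drop the synchronization, since only the owning
    // StreamThread ever calls process().
    public void process() {
        processed++;
    }
}
```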


Re: Kafka Streams for Remote Server

2016-07-27 Thread Guozhang Wang
Misha,

Did you pre-create the sink topic before starting your application, or are you
relying on the broker-side auto-create for that topic?

If you are relying on auto-create, then there is a transient period where
the topic has been created but the metadata has not yet been propagated to the
brokers, so they do not know they are the leaders of the created topic
partitions yet. I'd recommend not relying on auto-create, since it is really
meant for debugging environments only.

Guozhang


On Wed, Jul 27, 2016 at 5:45 AM, mishadoff  wrote:

> Hello,
>
> I have the simplest ever Kafka Streams application, which just reads from one
> Kafka topic A and writes to another topic B.
>
> When I run it in my local environment (local ZK, local Kafka broker, local
> Kafka Streams app) everything works fine: topic B is created and filled with
> messages from A.
> If I run it against an existing Kafka cluster (remote ZK, remote Kafka, LOCAL
> Kafka Streams) my app is not working anymore.
>
> It successfully reads the remote topic A, successfully processes the messages
> and generates producer records, and creates topic B in the remote Kafka, but
> during send I get an error
>
> ```
> 15:36:47.242 [kafka-producer-network-thread |
> example-message-counter3-1-StreamThread-1-producer] ERROR
> o.a.k.s.p.internals.RecordCollector - Error sending record: null
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server
> is not the leader for that topic-partition
> ```
>
> Could you point me in the right direction for where to start debugging, or
> what problems might cause this behaviour?
>
> Thanks,
> — Misha




-- 
-- Guozhang


[jira] [Commented] (KAFKA-3185) Allow users to cleanup internal data

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396538#comment-15396538
 ] 

ASF GitHub Bot commented on KAFKA-3185:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1671

KAFKA-3185: [Streams] Added Kafka Streams Application Reset Tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka resetTool-0.10.0.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1671


commit 75e01c87a5de8a64b9d4b3baf2e0dd061e138b25
Author: Matthias J. Sax 
Date:   2016-06-11T12:12:42Z

KAFKA-3185: [Streams] Added Kafka Streams Application Reset Tool

cherry-picked from trunk

Conflicts:
checkstyle/import-control.xml
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java

streams/src/test/java/org/apache/kafka/streams/integration/utils/EmbeddedSingleNodeKafkaCluster.java

commit 1478653f08e3cfefe7f080572a41c38b3941775c
Author: Matthias J. Sax 
Date:   2016-07-27T22:28:28Z

fixed cherry-pick issues




> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.0.1
>
>
> Currently the internal data is managed completely by the Kafka Streams framework 
> and users cannot clean it up actively. This results in a bad out-of-the-box 
> user experience, especially when running demo programs, since it leaves 
> internal data (changelog topics, RocksDB files, etc.) that needs to be cleaned 
> up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> call to clean up this internal data programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1671: KAFKA-3185: [Streams] Added Kafka Streams Applicat...

2016-07-27 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1671

KAFKA-3185: [Streams] Added Kafka Streams Application Reset Tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka resetTool-0.10.0.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1671


commit 75e01c87a5de8a64b9d4b3baf2e0dd061e138b25
Author: Matthias J. Sax 
Date:   2016-06-11T12:12:42Z

KAFKA-3185: [Streams] Added Kafka Streams Application Reset Tool

cherry-picked from trunk

Conflicts:
checkstyle/import-control.xml
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java

streams/src/test/java/org/apache/kafka/streams/integration/utils/EmbeddedSingleNodeKafkaCluster.java

commit 1478653f08e3cfefe7f080572a41c38b3941775c
Author: Matthias J. Sax 
Date:   2016-07-27T22:28:28Z

fixed cherry-pick issues




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #780

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3185: Allow users to cleanup internal Kafka Streams data

--
[...truncated 9062 lines...]

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild STARTED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls STARTED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck STARTED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete STARTED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange STARTED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException STARTED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap STARTED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll STARTED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog STARTED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck STARTED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation STARTED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints STARTED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset STARTED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testDeleteOldSegmentsMethod STARTED

kafka.log.LogTest > testDeleteOldSegmentsMethod PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull STARTED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testBuildOffsetMap STARTED


[jira] [Commented] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396508#comment-15396508
 ] 

Guozhang Wang commented on KAFKA-3705:
--

Yeah, currently we do not have a windowed stream-table join, and hence only the 
table is materialized, not the stream. We could add this join type in the next 
release, though, if we feel it is a common request.

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> get the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3994) Deadlock between consumer heartbeat expiration and offset commit.

2016-07-27 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-3994:
--

Assignee: Jason Gustafson

> Deadlock between consumer heartbeat expiration and offset commit.
> -
>
> Key: KAFKA-3994
> URL: https://issues.apache.org/jira/browse/KAFKA-3994
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
> Fix For: 0.10.0.2
>
>
> I got the following stacktraces from ConsumerBounceTest
> {code}
> ...
> "Test worker" #12 prio=5 os_prio=0 tid=0x7fbb28b7f000 nid=0x427c runnable 
> [0x7fbb06445000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0003d48bcbc0> (a sun.nio.ch.Util$2)
> - locked <0x0003d48bcbb0> (a 
> java.util.Collections$UnmodifiableSet)
> - locked <0x0003d48bbd28> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(Selector.java:454)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:411)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1054)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:103)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:70)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
> at 
> 

[jira] [Commented] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Jan Filipiak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396498#comment-15396498
 ] 

Jan Filipiak commented on KAFKA-3705:
-

The change doesn't seem to be in all the places. I thought about going 
Ktable,T> but also settled with Ktable; the other thing would 
just be generic overkill, even though one can hide it from the user by having a 
PairSerde or something.

Regarding your idea, I kind of fail to see how an update to table1 or table2 
would be reflected in the output with regards to republishing. Isn't the only 
option there to have the stream materialized based on a window? I need to take a 
deeper look though.

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> get the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396491#comment-15396491
 ] 

Guozhang Wang edited comment on KAFKA-3705 at 7/27/16 10:03 PM:


We are fixing this issue right now on the consumer layer with KIP-62: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread

Expecting to have it in the next minor release.


was (Author: guozhang):
We are fixing this issue right now on the consumer layer with KIP-62: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> get the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396491#comment-15396491
 ] 

Guozhang Wang commented on KAFKA-3705:
--

We are fixing this issue right now on the consumer layer with KIP-62: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> get the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-07-27 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3290.
--
Resolution: Fixed

> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-07-27 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396402#comment-15396402
 ] 

Ewen Cheslack-Postava commented on KAFKA-3290:
--

Going to close this as it hasn't reappeared in a very long time now, so it 
doesn't seem to be affecting the build anymore. We can reopen it if it reappears.

> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-07-27 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-3290:


Assignee: Ewen Cheslack-Postava  (was: Jason Gustafson)

> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3185) Allow users to cleanup internal data

2016-07-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3185:
-
   Resolution: Fixed
Fix Version/s: (was: 0.10.1.0)
   0.10.0.1
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1636
[https://github.com/apache/kafka/pull/1636]

> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.0.1
>
>
> Currently the internal data is managed completely by the Kafka Streams framework 
> and users cannot clean it up actively. This results in a bad out-of-the-box 
> user experience, especially when running demo programs, since it leaves 
> internal data (changelog topics, RocksDB files, etc.) that needs to be cleaned 
> up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> call to clean up this internal data programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3185) Allow users to cleanup internal data

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396384#comment-15396384
 ] 

ASF GitHub Bot commented on KAFKA-3185:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1636


> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.0.1
>
>
> Currently the internal data is managed completely by the Kafka Streams framework 
> and users cannot clean it up actively. This results in a bad out-of-the-box 
> user experience, especially when running demo programs, since it leaves 
> internal data (changelog topics, RocksDB files, etc.) that needs to be cleaned 
> up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> call to clean up this internal data programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #779

2016-07-27 Thread Apache Jenkins Server
See 



[jira] [Assigned] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-07-27 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote reassigned KAFKA-3590:
--

Assignee: Dustin Cote

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Dustin Cote
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required." Isn't this message about the minimum number of 
> ISRs when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-07-27 Thread Dustin Cote (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Cote updated KAFKA-3590:
---
Component/s: (was: clients)
 consumer

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-07-27 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15396213#comment-15396213
 ] 

Dustin Cote commented on KAFKA-3590:


[~salaev] it looks like this is occurring in a SyncGroup call, so it's probably 
because the new consumer needs to write to the __consumer_offsets topic and 
can't, since __consumer_offsets isn't meeting the min ISR requirements for the 
cluster. The error message isn't very clear, so maybe we can start by improving 
that. I can take a shot at the pull request in the next couple of days.
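
For context, a minimal sketch of where this surfaces to an application (broker, 
group, and topic names below are placeholders, not from the report): poll() drives 
the JoinGroup/SyncGroup round, so the error quoted in the stack trace arrives as a 
KafkaException from poll(), and until the message is improved a caller can only 
catch it and retry while the __consumer_offsets ISR recovers.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;

public class PollRetrySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // placeholder address
        props.put("group.id", "example-group");          // hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("example-topic"));

        while (true) {
            try {
                // poll() drives the JoinGroup/SyncGroup round, so the error quoted
                // in the stack trace above surfaces here as a KafkaException.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                records.forEach(record -> System.out.println(record.value()));
            } catch (KafkaException e) {
                // Back off and retry so the group can rejoin once __consumer_offsets
                // meets the cluster's min ISR requirement again.
                System.err.println("poll failed: " + e.getMessage());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}
{code}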

> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Dustin Cote
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4001) Improving join semantics in Kafka Streams

2016-07-27 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4001:
--

 Summary: Improving join semantics in Kafka Streams
 Key: KAFKA-4001
 URL: https://issues.apache.org/jira/browse/KAFKA-4001
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax
 Fix For: 0.10.1.0


Kafka Streams supports three types of joins:
* KStream-KStream
* KStream-KTable
* KTable-KTable

Furthermore, Kafka Streams supports the following join variants:
* inner join
* left join
* outer join

Not all combinations of "type" and "variant" are supported.

*The problem is that the different joins use different semantics (and are thus 
inconsistent).*

With this ticket, we want to
* introduce uniform semantics across all joins
* improve handling of "null"
* add the missing inner KStream-KTable join (see the sketch below)
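
For illustration, a rough sketch of one of the affected combinations, assuming the 
0.10-era Streams DSL (topic names are hypothetical): today only leftJoin exists for 
KStream-KTable, which is the gap the last bullet refers to.

{code}
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class JoinVariantsSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // Hypothetical topics, used only to illustrate the join combinations.
        KStream<String, String> orders = builder.stream("orders");
        KTable<String, String> customers = builder.table("customers");

        // KStream-KTable left join: each order record is combined with the
        // current table value for its key, or with null if there is none.
        KStream<String, String> enriched =
                orders.leftJoin(customers, (order, customer) -> order + " / " + customer);

        enriched.to("enriched-orders");

        // There is currently no inner join for the KStream-KTable combination;
        // adding it is one of the gaps this ticket proposes to close.
    }
}
{code}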




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-0.10.0-jdk7 #165

2016-07-27 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #778

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3996; ByteBufferMessageSet.writeTo() should be non-blocking

--
[...truncated 1727 lines...]
kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.SaslPlainPlaintextConsumerTest > 
testPauseStateNotPreservedByRebalance STARTED

kafka.api.SaslPlainPlaintextConsumerTest > 
testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testListTopics STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testPartitionReassignmentCallback 
STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testPartitionReassignmentCallback 
PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCommitSpecifiedOffsets STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED


Re: [DISCUSS] KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-07-27 Thread Ismael Juma
Sounds good Vahid, thanks for doing this. :)

Ismael

On Wed, Jul 27, 2016 at 6:29 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Ismael,
>
> I sent a message to user mailing lists of Spark and Storm a couple of days
> ago and have received one response so far.
> Thread on Spark mailing list:
> https://www.mail-archive.com/user@spark.apache.org/msg54309.html
> Thread on Storm mailing list:
> https://www.mail-archive.com/user@storm.apache.org/msg06850.html
>
> Regards,
> --Vahid
>
>
>
> From:   Ismael Juma 
> To: dev@kafka.apache.org
> Date:   07/22/2016 01:44 AM
> Subject:Re: [DISCUSS] KIP-70: Revise Partition Assignment
> Semantics on New Consumer's Subscription Change
> Sent by:isma...@gmail.com
>
>
>
> Thanks for the KIP Vahid. The change makes sense. On the compatibility
> front, could we check some of the advanced Kafka users like Storm and
> Spark
> in order to verify if they would be affected?
>
> Ismael
>
> On Wed, Jul 20, 2016 at 1:55 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi all,
> >
> > We have started a new KIP under
> >
> >
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-70%3A+Revise+Partition+Assignment+Semantics+on+New+Consumer%27s+Subscription+Change
>
> >
> > Your feedback is much appreciated.
> >
> > Regards,
> > Vahid Hashemian
> >
> >
>
>
>
>
>


Re: [DISCUSS] KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-07-27 Thread Vahid S Hashemian
Ismael,

I sent a message to user mailing lists of Spark and Storm a couple of days 
ago and have received one response so far.
Thread on Spark mailing list: 
https://www.mail-archive.com/user@spark.apache.org/msg54309.html
Thread on Storm mailing list: 
https://www.mail-archive.com/user@storm.apache.org/msg06850.html
 
Regards,
--Vahid



From:   Ismael Juma 
To: dev@kafka.apache.org
Date:   07/22/2016 01:44 AM
Subject:Re: [DISCUSS] KIP-70: Revise Partition Assignment 
Semantics on New Consumer's Subscription Change
Sent by:isma...@gmail.com



Thanks for the KIP Vahid. The change makes sense. On the compatibility
front, could we check some of the advanced Kafka users like Storm and 
Spark
in order to verify if they would be affected?

Ismael

On Wed, Jul 20, 2016 at 1:55 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi all,
>
> We have started a new KIP under
>
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-70%3A+Revise+Partition+Assignment+Semantics+on+New+Consumer%27s+Subscription+Change

>
> Your feedback is much appreciated.
>
> Regards,
> Vahid Hashemian
>
>






Build failed in Jenkins: kafka-trunk-jdk7 #1443

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3996; ByteBufferMessageSet.writeTo() should be non-blocking

[junrao] KAFKA-3924; Replacing halt with exit upon LEO mismatch to trigger 
gra…

--
[...truncated 12971 lines...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias STARTED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
STARTED


[jira] [Created] (KAFKA-4000) Consumer per-topic metrics do not aggregate partitions from the same topic

2016-07-27 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4000:
--

 Summary: Consumer per-topic metrics do not aggregate partitions 
from the same topic
 Key: KAFKA-4000
 URL: https://issues.apache.org/jira/browse/KAFKA-4000
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.0.0, 0.9.0.1
Reporter: Jason Gustafson
Priority: Minor


In the Consumer Fetcher code, we have per-topic fetch metrics, but they seem to 
be computed from each partition separately. It seems like we should aggregate 
them by topic.
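
Until that is fixed, a caller can aggregate the per-partition values itself from the 
consumer's metrics map; a rough sketch is below (the "topic" tag and the metric names 
are assumptions about the consumer's metric naming and may differ by version).

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class TopicMetricAggregation {
    // Sums a named fetch metric (e.g. "bytes-consumed-rate") across all metric
    // instances that share the same "topic" tag. The tag and metric names are
    // assumptions and may differ between client versions.
    static Map<String, Double> sumByTopic(KafkaConsumer<?, ?> consumer, String metricName) {
        Map<String, Double> perTopic = new HashMap<>();
        for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
            MetricName name = entry.getKey();
            String topic = name.tags().get("topic");
            if (topic != null && name.name().equals(metricName)) {
                perTopic.merge(topic, entry.getValue().value(), Double::sum);
            }
        }
        return perTopic;
    }
}
{code}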



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-07-27 Thread James Carnegie (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395915#comment-15395915
 ] 

James Carnegie commented on KAFKA-2729:
---

That's our experience, though the only other thing we've tried is leaving it 
for a while.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing, I was hoping you 
> could shed some light on this issue. Possibly if it's related to: 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #164

2016-07-27 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3996; ByteBufferMessageSet.writeTo() should be non-blocking

--
[...truncated 1700 lines...]

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV0 PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV1 PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testWriteFullyTo PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.message.ByteBufferMessageSetTest > testRelativeOffsetAssignment PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > 

[jira] [Created] (KAFKA-3998) reuse connection to join group lead to not aware other members

2016-07-27 Thread Jerome Xue (JIRA)
Jerome Xue created KAFKA-3998:
-

 Summary: reuse connection to join group lead to not aware other 
members
 Key: KAFKA-3998
 URL: https://issues.apache.org/jira/browse/KAFKA-3998
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Jerome Xue


I am writing a non-blocking Perl Kafka client.

I create a connection pool to reuse connections within the same process.

With one consumer already in the group, I add another consumer to the group; after 
the join request, the response says the new consumer is the only member in the 
group, and the server isn't aware of the other one. As a result, the other consumer 
receives a rebalance error, which is correct.

If I have two consumers in two processes, it works.

Each request has a unique correlation_id and the two consumers have two different 
client_ids.

Will it be fixed?
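
For comparison, a sketch (Java, with placeholder broker/topic/group names) of the 
setup the group protocol expects: each logical consumer keeps its own client 
instance, and therefore its own connection and member session, even when both live 
in the same process. Reusing one pooled connection for both members is likely why 
the coordinator only sees one of them.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TwoMembersOneProcess {
    static KafkaConsumer<String, String> newMember(String clientId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // placeholder address
        props.put("group.id", "example-group");          // same group for both members
        props.put("client.id", clientId);                // distinct client ids, as in the report
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    public static void main(String[] args) {
        // Two members of the same group in one process: each KafkaConsumer keeps
        // its own connection and its own JoinGroup/SyncGroup session, so the
        // coordinator sees two members and rebalances partitions between them.
        KafkaConsumer<String, String> a = newMember("member-a");
        KafkaConsumer<String, String> b = newMember("member-b");
        a.subscribe(Collections.singletonList("example-topic"));
        b.subscribe(Collections.singletonList("example-topic"));
        a.poll(100);  // drives the group join for member a
        b.poll(100);  // drives the group join for member b
    }
}
{code}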



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3924) Data loss due to halting when LEO is larger than leader's LEO

2016-07-27 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3924:
---
   Resolution: Fixed
Fix Version/s: 0.10.0.1
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1634
[https://github.com/apache/kafka/pull/1634]

> Data loss due to halting when LEO is larger than leader's LEO
> -
>
> Key: KAFKA-3924
> URL: https://issues.apache.org/jira/browse/KAFKA-3924
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Maysam Yabandeh
> Fix For: 0.10.0.1
>
>
> Currently the follower broker panics when its LEO is larger than its leader's 
> LEO,  and assuming that this is an impossible state to reach, halts the 
> process to prevent any further damage.
> {code}
> if (leaderEndOffset < replica.logEndOffset.messageOffset) {
>   // Prior to truncating the follower's log, ensure that doing so is not 
> disallowed by the configuration for unclean leader election.
>   // This situation could only happen if the unclean election 
> configuration for a topic changes while a replica is down. Otherwise,
>   // we should never encounter this situation since a non-ISR leader 
> cannot be elected if disallowed by the broker configuration.
>   if (!LogConfig.fromProps(brokerConfig.originals, 
> AdminUtils.fetchEntityConfig(replicaMgr.zkUtils,
> ConfigType.Topic, 
> topicAndPartition.topic)).uncleanLeaderElectionEnable) {
> // Log a fatal error and shutdown the broker to ensure that data loss 
> does not unexpectedly occur.
> fatal("...")
> Runtime.getRuntime.halt(1)
>   }
> {code}
> Firstly this assumption is invalid and there are legitimate cases (examples 
> below) that this case could actually occur. Secondly halt results into the 
> broker losing its un-flushed data, and if multiple brokers halt 
> simultaneously there is a chance that both leader and followers of a 
> partition are among the halted brokers, which would result into permanent 
> data loss.
> Given that this is a legit case, we suggest to replace it with a graceful 
> shutdown to avoid propagating data loss to the entire cluster.
> Details:
> One legit case that this could actually occur is when a troubled broker 
> shrinks its partitions right before crashing (KAFKA-3410 and KAFKA-3861). In 
> this case the broker has lost some data but the controller still cannot 
> elect the others as the leader. If the crashed broker comes back up, the 
> controller elects it as the leader, and as a result all other brokers who are 
> now following it halt since they have LEOs larger than that of shrunk topics 
> in the restarted broker.  We actually had a case that bringing up a crashed 
> broker simultaneously took down the entire cluster and as explained above 
> this could result into data loss.
> The other legit case is when multiple brokers ungracefully shutdown at the 
> same time. In this case both of the leader and the followers lose their 
> un-flushed data but one of them has HW larger than the other. Controller 
> elects the one who comes back up sooner as the leader and if its LEO is less 
> than its future follower, the follower will halt (and probably lose more 
> data). Simultaneous ungraceful shutdown could happen due to hardware issue 
> (e.g., rack power failure), operator errors, or software issue (e.g., the 
> case above that is further explained in KAFKA-3410 and KAFKA-3861 and causes 
> simultaneous halts in multiple brokers)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3924) Data loss due to halting when LEO is larger than leader's LEO

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395838#comment-15395838
 ] 

ASF GitHub Bot commented on KAFKA-3924:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1634


> Data loss due to halting when LEO is larger than leader's LEO
> -
>
> Key: KAFKA-3924
> URL: https://issues.apache.org/jira/browse/KAFKA-3924
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Maysam Yabandeh
> Fix For: 0.10.0.1
>
>
> Currently the follower broker panics when its LEO is larger than its leader's 
> LEO,  and assuming that this is an impossible state to reach, halts the 
> process to prevent any further damage.
> {code}
> if (leaderEndOffset < replica.logEndOffset.messageOffset) {
>   // Prior to truncating the follower's log, ensure that doing so is not 
> disallowed by the configuration for unclean leader election.
>   // This situation could only happen if the unclean election 
> configuration for a topic changes while a replica is down. Otherwise,
>   // we should never encounter this situation since a non-ISR leader 
> cannot be elected if disallowed by the broker configuration.
>   if (!LogConfig.fromProps(brokerConfig.originals, 
> AdminUtils.fetchEntityConfig(replicaMgr.zkUtils,
> ConfigType.Topic, 
> topicAndPartition.topic)).uncleanLeaderElectionEnable) {
> // Log a fatal error and shutdown the broker to ensure that data loss 
> does not unexpectedly occur.
> fatal("...")
> Runtime.getRuntime.halt(1)
>   }
> {code}
> Firstly this assumption is invalid and there are legitimate cases (examples 
> below) that this case could actually occur. Secondly halt results into the 
> broker losing its un-flushed data, and if multiple brokers halt 
> simultaneously there is a chance that both leader and followers of a 
> partition are among the halted brokers, which would result into permanent 
> data loss.
> Given that this is a legit case, we suggest to replace it with a graceful 
> shutdown to avoid propagating data loss to the entire cluster.
> Details:
> One legit case that this could actually occur is when a troubled broker 
> shrinks its partitions right before crashing (KAFKA-3410 and KAFKA-3861). In 
> this case the broker has lost some data but the controller still cannot 
> elect the others as the leader. If the crashed broker comes back up, the 
> controller elects it as the leader, and as a result all other brokers who are 
> now following it halt since they have LEOs larger than that of shrunk topics 
> in the restarted broker.  We actually had a case that bringing up a crashed 
> broker simultaneously took down the entire cluster and as explained above 
> this could result into data loss.
> The other legit case is when multiple brokers ungracefully shutdown at the 
> same time. In this case both of the leader and the followers lose their 
> un-flushed data but one of them has HW larger than the other. Controller 
> elects the one who comes back up sooner as the leader and if its LEO is less 
> than its future follower, the follower will halt (and probably lose more 
> data). Simultaneous ungraceful shutdown could happen due to hardware issue 
> (e.g., rack power failure), operator errors, or software issue (e.g., the 
> case above that is further explained in KAFKA-3410 and KAFKA-3861 and causes 
> simultaneous halts in multiple brokers)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1634: KAFKA-3924: Replacing halt with exit upon LEO mism...

2016-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1634


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-07-27 Thread William Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395821#comment-15395821
 ] 

William Yu commented on KAFKA-2729:
---

We are also seeing this in our production cluster, running Kafka 0.9.0.1.

Is restarting the only solution?

{code}
[2016-07-27 14:36:15,807] INFO Partition [tasks,265] on broker 4: Cached 
zkVersion [182] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2016-07-27 14:36:15,807] INFO Partition [tasks,150] on broker 4: Shrinking ISR 
for partition [tasks,150] from 6,4,7 to 4 (kafka.cluster.Partition)
{code}

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing, I was hoping you 
> could shed some light on this issue. Possibly if it's related to: 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3996) ByteBufferMessageSet.writeTo() should be non-blocking

2016-07-27 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3996:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1669
[https://github.com/apache/kafka/pull/1669]

> ByteBufferMessageSet.writeTo() should be non-blocking
> -
>
> Key: KAFKA-3996
> URL: https://issues.apache.org/jira/browse/KAFKA-3996
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.10.0.1
>
>
> Currently, in ByteBufferMessageSet.writeTo(), we try to finish writing all 
> bytes in the buffer in a single call. The code has been like that since 0.8. 
> This hasn't been a problem historically since the broker uses zero-copy to 
> send fetch responses and only uses ByteBufferMessageSet to send produce 
> responses, which are small. However, in 0.10.0, if a consumer is before 
> 0.10.0, the broker has to down convert the message and use 
> ByteBufferMessageSet to send a fetch response to the consumer. If the client 
> is slow and there are lots of bytes in the ByteBufferMessageSet, we may not 
> be able to completely send all the bytes in the buffer for a long period of 
> time. When this happens, the Processor will be blocked and can't handle other 
> connections, which is bad.
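
To illustrate the difference described above (a sketch only, not the actual patch): 
a blocking-style write loops until the buffer is drained, while a non-blocking-style 
write hands the channel whatever it will accept and lets the caller retry once the 
socket is writable again.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;

public class WriteToSketch {
    // Blocking style: keeps writing until the buffer is drained, which stalls
    // the calling thread (here, the Processor) when the remote client reads slowly.
    static int writeFully(GatheringByteChannel channel, ByteBuffer buffer) throws IOException {
        int written = 0;
        while (buffer.hasRemaining()) {
            written += channel.write(buffer);
        }
        return written;
    }

    // Non-blocking style: write once, return the partial count, and let the
    // caller retry on the next opportunity when the channel is writable again.
    static int writeSome(GatheringByteChannel channel, ByteBuffer buffer) throws IOException {
        return channel.write(buffer);
    }
}
{code}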



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3996) ByteBufferMessageSet.writeTo() should be non-blocking

2016-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395794#comment-15395794
 ] 

ASF GitHub Bot commented on KAFKA-3996:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1669


> ByteBufferMessageSet.writeTo() should be non-blocking
> -
>
> Key: KAFKA-3996
> URL: https://issues.apache.org/jira/browse/KAFKA-3996
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.10.0.1
>
>
> Currently, in ByteBufferMessageSet.writeTo(), we try to finish writing all 
> bytes in the buffer in a single call. The code has been like that since 0.8. 
> This hasn't been a problem historically since the broker uses zero-copy to 
> send fetch responses and only uses ByteBufferMessageSet to send produce 
> responses, which are small. However, in 0.10.0, if a consumer is before 
> 0.10.0, the broker has to down convert the message and use 
> ByteBufferMessageSet to send a fetch response to the consumer. If the client 
> is slow and there are lots of bytes in the ByteBufferMessageSet, we may not 
> be able to completely send all the bytes in the buffer for a long period of 
> time. When this happens, the Processor will be blocked and can't handle other 
> connections, which is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1669: KAFKA-3996: ByteBufferMessageSet.writeTo() should ...

2016-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1669


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Synchronized block in StreamTask

2016-07-27 Thread Pierre Coquentin
Hi,

I have a simple technical question about Kafka Streams and I don't know if I
should use us...@kafka.apache.org or this mailing list.
In class org.apache.kafka.streams.processor.internals.StreamTask, the
method "process" uses a synchronized block, but I don't see why: the method
doesn't seem to be called in a multithreaded environment, as the task is
created and owned by a specific thread,
org.apache.kafka.streams.processor.internals.StreamThread.
Am I missing something? Or, since the API is unstable, is this class meant
to be used by several threads in the future?

Regards,

Pierre


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Issue Type: Improvement  (was: Bug)

> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-3997.patch
>
>
> When follower wants to truncate partition and it is not allowed it prints the 
> following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition is it.
> I suggest to log here partition instead of topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Kafka Streams for Remote Server

2016-07-27 Thread mishadoff
Hello,

I have the simplest possible Kafka Streams application, which just reads from one 
Kafka topic A and writes to another topic B.

When I run it on my local environment (local ZK, local Kafka broker, local 
Kafka Streams app) everything works fine: topic B is created and filled with 
messages from A.
If I run it against an existing Kafka cluster (remote ZK, remote Kafka, LOCAL 
Kafka Streams) my app is not working anymore.

It successfully reads the remote topic A, successfully processes the messages, 
generates producer records, and creates topic B in the remote Kafka, but during 
send I get an error:

```
15:36:47.242 [kafka-producer-network-thread | 
example-message-counter3-1-StreamThread-1-producer] ERROR 
o.a.k.s.p.internals.RecordCollector - Error sending record: null
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition
```

Could you point me in a direction of where to start debugging, or what problems 
might cause this behaviour?

Thanks,
— Misha
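
For reference, here is roughly what such a pass-through topology looks like with the 
0.10-era API (the application id below is taken from the log line above; the broker 
and ZooKeeper addresses are placeholders):

```
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class CopyTopicApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-message-counter3");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "remote-broker:9092"); // placeholder
        props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "remote-zk:2181");     // placeholder; required in 0.10.0

        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("A").to("B");  // read every record from topic A and write it to topic B

        KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(props));
        streams.start();
    }
}
```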

[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Description: 
When follower wants to truncate partition and it is not allowed it prints the 
following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition is it.
I suggest to log here partition instead of topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}



  was:
When follower wants to truncate partition and it is not allowed it print the 
following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition is it.
I suggest to log here partition instead of topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}




> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-3997.patch
>
>
> When follower wants to truncate partition and it is not allowed it prints the 
> following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition is it.
> I suggest to log here partition instead of topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Description: 
When follower wants to truncate partition and it is not allowed it print the 
following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition is it.
I suggest to log here partition instead of topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}



  was:
When follower wants to truncate partition and it is not allowed it print the 
following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition is it.
I suggest to log here partition instead of topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log,0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}




> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-3997.patch
>
>
> When follower wants to truncate partition and it is not allowed it print the 
> following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition is it.
> I suggest to log here partition instead of topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for partition [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Attachment: KAFKA-3997.patch

> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-3997.patch
>
>
> When follower wants to truncate partition and it is not allowed it print the 
> following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition is it.
> I suggest to log here partition instead of topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Status: Patch Available  (was: Open)

> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0, 0.9.0.1, 0.9.0.0
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-3997.patch
>
>
> When follower wants to truncate partition and it is not allowed it print the 
> following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition is it.
> I suggest to log here partition instead of topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Description: 
When a follower wants to truncate a partition and truncation is not allowed, it 
prints the following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition it is.
I suggest logging the partition here instead of the topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log,0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}



  was:
When a follower wants to truncate a partition and truncation is not allowed, it 
prints the following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition it is.
I suggest logging the partition here instead of the topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log, 0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}




> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
>
> When a follower wants to truncate a partition and truncation is not allowed, it 
> prints the following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition it is.
> I suggest logging the partition here instead of the topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log,0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-3997:
-
Description: 
When a follower wants to truncate a partition and truncation is not allowed, it 
prints the following message:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}

It is difficult to understand which partition it is.
I suggest logging the partition here instead of the topic. For example:
{noformat}
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log, 0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
{noformat}



  was:
When a follower wants to truncate a partition and truncation is not allowed, it 
prints the following message:
{{
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
}}

It is difficult to understand which partition it is.
I suggest logging the partition here instead of the topic. For example:
{{
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log, 0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
}}




> Halting because log truncation is not allowed and suspicious logging
> 
>
> Key: KAFKA-3997
> URL: https://issues.apache.org/jira/browse/KAFKA-3997
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: Alexey Ozeritskiy
>
> When a follower wants to truncate a partition and truncation is not allowed, it 
> prints the following message:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
> leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}
> It is difficult to understand which partition it is.
> I suggest logging the partition here instead of the topic. For example:
> {noformat}
> [2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
> log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log, 0], 
> Current leader 19's latest offset 50260815 is less
>  than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3997) Halting because log truncation is not allowed and suspicious logging

2016-07-27 Thread Alexey Ozeritskiy (JIRA)
Alexey Ozeritskiy created KAFKA-3997:


 Summary: Halting because log truncation is not allowed and 
suspicious logging
 Key: KAFKA-3997
 URL: https://issues.apache.org/jira/browse/KAFKA-3997
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.0, 0.9.0.1, 0.9.0.0
Reporter: Alexey Ozeritskiy


When a follower wants to truncate a partition and truncation is not allowed, it 
prints the following message:
{{
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic rt3.fol--yabs-rt--bs-hit-log, Current 
leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
}}

It is difficult to understand which partition it is.
I suggest logging the partition here instead of the topic. For example:
{{
[2016-07-27 14:07:37,617] FATAL [ReplicaFetcherThread-0-19], Halting because 
log truncation is not allowed for topic [rt3.fol--yabs-rt--bs-hit-log, 0], 
Current leader 19's latest offset 50260815 is less
 than replica 2's latest offset 50260816 (kafka.server.ReplicaFetcherThread)
}}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Jan Filipiak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395478#comment-15395478
 ] 

Jan Filipiak edited comment on KAFKA-3705 at 7/27/16 11:24 AM:
---

Something that has started happening to us is that for low-cardinality columns on 
the join, the prefix scan on RocksDB can return a large number of values. That 
leads to too much time spent between poll() calls and us losing group membership. 
One could maybe check whether a poll() is needed on the consumer during 
context.forward(), as we do context.forward() for every row that comes from the 
prefix scan. The fix of setting the session time-out very high, which we are 
currently using, is not that good IMO.


was (Author: jfilipiak):
Something that has started happening to us is that for low-cardinality columns on 
the join, the prefix scan on RocksDB can return a large number of values. That 
leads to too much time spent between poll() calls and us losing group membership. 
One could maybe check whether a poll() is needed on the consumer during 
context.forward(). The fix of setting the session time-out very high, which we are 
currently using, is not that good IMO.
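
For context, a minimal sketch of the session-timeout workaround mentioned above, assuming a Kafka Streams application configured through java.util.Properties; the application id, bootstrap servers and the timeout value are placeholders, and the broker's group.max.session.timeout.ms must allow the chosen value.

{code}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class SessionTimeoutWorkaroundSketch {
    public static Properties streamsConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ktable-join-app");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // The workaround described above: a very high consumer session timeout so that
        // long prefix scans between poll() calls do not get the member kicked out of the group.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 300_000);
        return props;
    }
}
{code}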

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> put the two joining streams on the same key. This is a drawback in 
> programmability and we should fix it.
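
To make the re-keying step concrete, here is a toy illustration in plain Java (deliberately not the Streams API) of what the groupBy-then-aggregate pattern above does to the data: tableB is first re-keyed on the foreign key {{a}}, after which an ordinary key-based join with tableA is possible. All names and values are made up.

{code}
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the pattern described above; not Kafka Streams code.
public class ForeignKeyRekeySketch {
    public static void main(String[] args) {
        // tableA: keyed by "a".
        Map<String, String> tableA = new HashMap<>();
        tableA.put("a1", "left-value");

        // tableB: keyed by "b", but each value carries the foreign key "a".
        Map<String, String[]> tableB = new HashMap<>();
        tableB.put("b1", new String[] {"a1", "right-value-1"});
        tableB.put("b2", new String[] {"a1", "right-value-2"});

        // Step 1 -- the groupBy(...).agg(...) in the description: re-key tableB on "a".
        // Here the "aggregation" simply concatenates values that share the same foreign key.
        Map<String, String> tableBByA = new HashMap<>();
        for (String[] v : tableB.values()) {
            tableBByA.merge(v[0], v[1], (existing, added) -> existing + "+" + added);
        }

        // Step 2 -- a plain key-based join, now that both sides are keyed by "a".
        for (Map.Entry<String, String> left : tableA.entrySet()) {
            String right = tableBByA.get(left.getKey());
            if (right != null) {
                System.out.println(left.getKey() + " -> (" + left.getValue() + ", " + right + ")");
            }
        }
    }
}
{code}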



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3705) Support non-key joining in KTable

2016-07-27 Thread Jan Filipiak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395478#comment-15395478
 ] 

Jan Filipiak commented on KAFKA-3705:
-

Something that has started happening to us is that for low-cardinality columns on 
the join, the prefix scan on RocksDB can return a large number of values. That 
leads to too much time spent between poll() calls and us losing group membership. 
One could maybe check whether a poll() is needed on the consumer during 
context.forward(). The fix of setting the session time-out very high, which we are 
currently using, is not that good IMO.

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> put the two joining streams on the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-07-27 Thread Benoit Sigoure (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395441#comment-15395441
 ] 

Benoit Sigoure commented on KAFKA-3789:
---

Thanks, I'm not on kafka-dev so I hadn't seen this email. I ended up deploying 
a snapshot build of trunk for the time being. For some reason I wasn't able to 
roll back to v0.9.0.1 (I was getting a weird {{NullPointerException}}), but now 
that I have the snapshot version of v0.10.0.1 deployed, things are working fine 
again.

> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where parsing the MAGIC HEADER was being 
> handled incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> these messages from being appended to the log when messages were written 
> using Snappy C bindings in clients like librdkafka or ruby-kafka and read 
> using snappy-java in the broker.   
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to regularly reproduce the issue with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved.
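
A quick, hedged way to confirm which snappy-java version a client or broker JVM has actually resolved (the fix above needs 1.1.2.6 or later); this small sketch assumes the xerial snappy-java library is on the classpath and that its jar manifest carries an Implementation-Version.

{code}
import org.xerial.snappy.Snappy;

// Print the snappy-java version the running JVM actually loaded.
public class SnappyVersionCheck {
    public static void main(String[] args) {
        System.out.println("snappy-java native library version: "
                + Snappy.getNativeLibraryVersion());
        System.out.println("snappy-java jar version: "
                + Snappy.class.getPackage().getImplementationVersion());
    }
}
{code}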



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-07-27 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395376#comment-15395376
 ] 

Ismael Juma commented on KAFKA-3789:


[~tsuna], please see http://search-hadoop.com/m/uyzND1uBtDQ16hE9E1

> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where parsing the MAGIC HEADER was being 
> handled incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> these messages from being appended to the log when messages were written 
> using Snappy C bindings in clients like librdkafka or ruby-kafka and read 
> using snappy-java in the broker.   
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to regularly reproduce the issue with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-07-27 Thread Benoit Sigoure (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395336#comment-15395336
 ] 

Benoit Sigoure commented on KAFKA-3789:
---

When is Kafka 0.10.0.1 going to be released?

> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where parsing the MAGIC HEADER was being 
> handled incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> these messages from being appended to the log when messages were written 
> using Snappy C bindings in clients like librdkafka or ruby-kafka and read 
> using snappy-java in the broker.   
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to regularly reproduce the issue with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3851) Add references to important installation/upgrade notes to release notes

2016-07-27 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3851:
---
Status: Patch Available  (was: Open)

> Add references to important installation/upgrade notes to release notes 
> 
>
> Key: KAFKA-3851
> URL: https://issues.apache.org/jira/browse/KAFKA-3851
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.0.1
>
>
> Today we use the release notes exactly as exported from JIRA (see "Prepare 
> release notes" on 
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process) and then 
> rely on users to dig through our documentation (none of which starts with 
> release-specific notes) to find upgrade and installation notes.
> Especially for folks who aren't intimately familiar with our docs, the 
> information is *very* easy to miss. Ideally we could automate the release 
> notes process a bit and then have them automatically modified to *at least* 
> include links at the very top (it'd be nice if we had some other header 
> material added as well since the automatically generated release notes are 
> very barebones...). Even better would be if we could pull in the 
> version-specific installation/upgrade notes directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)