Build failed in Jenkins: kafka-trunk-jdk7 #1044

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[harsha] KAFKA-2547; Make DynamicConfigManager to use the 
ZkNodeChangeNotifica…

--
[...truncated 2382 lines...]

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #372

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[harsha] KAFKA-2547; Make DynamicConfigManager to use the 
ZkNodeChangeNotifica…

--
[...truncated 1475 lines...]
kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED


[jira] [Commented] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2016-02-16 Thread Manu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149970#comment-15149970
 ] 

Manu Zhang commented on KAFKA-725:
--

I can reproduce this on 0.9.0.0. The error log is 

[2016-01-28 16:12:32,840] ERROR [Replica Manager on Broker 1]: Error processing 
fetch operation on partition [ad-events,1] offset 75510318 
(kafka.server.ReplicaManager)

I also printed the sent offset from the producer:

time   partition offset 
16:12:32.840   1   75510318

It seems the offset is produced and consumed at the same time. 
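
To make the suspected race concrete, here is a minimal sketch (illustrative
Scala only, not Kafka's actual LogSegment code; the offsets are taken from the
log lines above): if the fetch path resolves its start offset from a fresh
lookup while the upper read bound (for example a high-watermark snapshot) was
captured before the concurrent append, the bound check trips exactly as
reported.

    // Illustrative sketch only -- not the broker's LogSegment implementation.
    object StaleReadBoundSketch {
      // Mirrors the guard that produces the reported IllegalArgumentException.
      def read(startOffset: Long, maxOffset: Long): Unit =
        if (maxOffset < startOffset)
          throw new IllegalArgumentException(
            s"Attempt to read with a maximum offset ($maxOffset) less than the start offset ($startOffset).")

      def main(args: Array[String]): Unit = {
        val maxOffsetSnapshot   = 75510317L // bound captured before the new message landed
        val resolvedStartOffset = 75510318L // the offset produced at 16:12:32.840
        read(resolvedStartOffset, maxOffsetSnapshot) // throws, matching the error above
      }
    }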


> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Jay Kreps
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] 
> []  [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (7951715) less than the start offset (7951732).
> at kafka.log.LogSegment.read(LogSegment.scala:105)
> at kafka.log.Log.read(Log.scala:390)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> at 
> kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while(true) {
> // we believe the consumer to be connected, so try and use it for 
> a fetch request
> val request = new FetchRequestBuilder()
>   .addFetch(topic, partition, nextOffset, fetchSize)
>   .maxWait(Int.MaxValue)
>   // TODO for super high-throughput, might be worth waiting for 
> more bytes
>   .minBytes(1)
>   .build
> debug("Fetching messages for stream %s and offset %s." format 
> (streamPartition, nextOffset))
> val messages = connectedConsumer.fetch(request)
> debug("Fetch complete for stream %s and offset %s. Got messages: 
> %s" format (streamPartition, nextOffset, messages))
> if (messages.hasError) {
>   warn("Got error code from broker for %s: %s. Shutting down 
> consumer to trigger a reconnect." format (streamPartition, 
> messages.errorCode(topic, partition)))
>   ErrorMapping.maybeThrowException(messages.errorCode(topic, 
> partition))
> }
> messages.messageSet(topic, partition).foreach(msg => {
>   watchers.foreach(_.onMessagesReady(msg.offset.toString, 
> msg.message.payload))
>   nextOffset = msg.nextOffset
> })
>   }
> Any idea what might be causing this error?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

Re: 0.9.0.1 RC1

2016-02-16 Thread Jun Rao
Thanks everyone for voting. The results are:

+1 binding = 4 votes (Ewen Cheslack-Postava, Neha Narkhede, Joel Koshy and Jun
Rao)
+1 non-binding = 3 votes
-1 = 0 votes
0 = 0 votes

The vote passes.

I will release the artifacts to Maven central and update the dist svn and download
site. I will send out an announcement after that.

Thanks,

Jun

On Thu, Feb 11, 2016 at 6:55 PM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.9.0.1. This a
> bug fix release that fixes 70 issues.
>
> Release Notes for the 0.9.0.1 release
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * scala-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>
> * java-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> Thanks,
>
> Jun
>
>


Re: 0.9.0.1 RC1

2016-02-16 Thread Jun Rao
Since the unit test failures are transient, +1 from myself.

Thanks,

Jun

On Thu, Feb 11, 2016 at 6:55 PM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.9.0.1. This a
> bug fix release that fixes 70 issues.
>
> Release Notes for the 0.9.0.1 release
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * scala-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>
> * java-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> Thanks,
>
> Jun
>
>


Re: [kafka-clients] 0.9.0.1 RC1

2016-02-16 Thread Grant Henke
+1 (non-binding)

On Tue, Feb 16, 2016 at 8:59 PM, Joel Koshy  wrote:

> +1
>
> On Thu, Feb 11, 2016 at 6:55 PM, Jun Rao  wrote:
>
> > This is the first candidate for release of Apache Kafka 0.9.0.1. This a
> > bug fix release that fixes 70 issues.
> >
> > Release Notes for the 0.9.0.1 release
> >
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * scala-doc
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
> >
> > * java-doc
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > Thanks,
> >
> > Jun
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/CAFc58G8VbhZ5Q0nVnUAg8qR0yEO%3DqhYrHFtLySpJo1Nha%3DoOxA%40mail.gmail.com
> > <
> https://groups.google.com/d/msgid/kafka-clients/CAFc58G8VbhZ5Q0nVnUAg8qR0yEO%3DqhYrHFtLySpJo1Nha%3DoOxA%40mail.gmail.com?utm_medium=email_source=footer
> >
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Commented] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2016-02-16 Thread Manu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149886#comment-15149886
 ] 

Manu Zhang commented on KAFKA-725:
--

I can reproduce this on 0.9.0.0. The error log is 

[2016-01-28 16:12:32,840] ERROR [Replica Manager on Broker 1]: Error processing 
fetch operation on partition [ad-events,1] offset 75510318 
(kafka.server.ReplicaManager)

I also printed the sent offset from the producer:

time   partition offset 
16:12:32.840   1   75510318

It seems the offset is produced and consumed at the same time. 


> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Jay Kreps
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] 
> []  [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (7951715) less than the start offset (7951732).
> at kafka.log.LogSegment.read(LogSegment.scala:105)
> at kafka.log.Log.read(Log.scala:390)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> at 
> kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while(true) {
> // we believe the consumer to be connected, so try and use it for 
> a fetch request
> val request = new FetchRequestBuilder()
>   .addFetch(topic, partition, nextOffset, fetchSize)
>   .maxWait(Int.MaxValue)
>   // TODO for super high-throughput, might be worth waiting for 
> more bytes
>   .minBytes(1)
>   .build
> debug("Fetching messages for stream %s and offset %s." format 
> (streamPartition, nextOffset))
> val messages = connectedConsumer.fetch(request)
> debug("Fetch complete for stream %s and offset %s. Got messages: 
> %s" format (streamPartition, nextOffset, messages))
> if (messages.hasError) {
>   warn("Got error code from broker for %s: %s. Shutting down 
> consumer to trigger a reconnect." format (streamPartition, 
> messages.errorCode(topic, partition)))
>   ErrorMapping.maybeThrowException(messages.errorCode(topic, 
> partition))
> }
> messages.messageSet(topic, partition).foreach(msg => {
>   watchers.foreach(_.onMessagesReady(msg.offset.toString, 
> msg.message.payload))
>   nextOffset = msg.nextOffset
> })
>   }
> Any idea what might be causing this error?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

Build failed in Jenkins: kafka-trunk-jdk8 #371

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[harsha] KAFKA-2508; Replace UpdateMetadata{Request,Response} with 
o.a.k.c.req…

--
[...truncated 2887 lines...]

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.ByteBufferMessageSetTest > testOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED


[jira] [Resolved] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-2547.
---
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 679
[https://github.com/apache/kafka/pull/679]

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.1.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149843#comment-15149843
 ] 

ASF GitHub Bot commented on KAFKA-2547:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/679


> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.1.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2547: Make DynamicConfigManager to use t...

2016-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/679


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2236) offset request reply racing with segment rolling

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149808#comment-15149808
 ] 

ASF GitHub Bot commented on KAFKA-2236:
---

Github user jhspaybar closed the pull request at:

https://github.com/apache/kafka/pull/86


> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: newbie
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2236; Offset request reply racing with s...

2016-02-16 Thread jhspaybar
Github user jhspaybar closed the pull request at:

https://github.com/apache/kafka/pull/86


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1043

2016-02-16 Thread Apache Jenkins Server
See 



[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/17/16 3:46 AM:
-

Anyone home? :) This has similarity to KAFKA-2048, but as per my investigation I 
did not see the IllegalMonitorStateException in the logs. The trace here suggests 
the underlying fetcher thread is stopped. Thankfully, the logging issue is 
corrected: KAFKA-1891 is fixed in Kafka 0.9. Maybe, with MirrorMaker refactored 
in 0.9 and a few other fixes, Kafka 0.9 is one option? Anyhow, it requires an 
overhaul of a production system at my end, so I am wondering if there are any 
other suggestions?
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]


was (Author: rekhajoshm):
anyone home? :) 
This has similarity to KAFKA-2048, but as per my investigation I did not see the 
IllegalMonitorStateException in the logs. But the underlying fetcher thread appears 
to be completely stopped; the trace substantiates that. It's not clearly mentioned, 
but was KAFKA-2048 fixed in 0.9?
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing consistent issue mirroring between our DataCenters 
> happening randomly.Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job everything works fine as expected,
> Eventually we hit an issue where the job stops consuming no more.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread
> t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awa
> i
> t(AbstractQueuedSynchronizer.java:2039)
> at
> java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350
> )
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at
> kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcher
> T
> hread.scala:49)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$m
> c
> V$sp(AbstractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThr
> e
> ad.scala:108)
> at
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked 

Re: [kafka-clients] 0.9.0.1 RC1

2016-02-16 Thread Joel Koshy
+1

On Thu, Feb 11, 2016 at 6:55 PM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.9.0.1. This a
> bug fix release that fixes 70 issues.
>
> Release Notes for the 0.9.0.1 release
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * scala-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>
> * java-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> Thanks,
>
> Jun
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAFc58G8VbhZ5Q0nVnUAg8qR0yEO%3DqhYrHFtLySpJo1Nha%3DoOxA%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


[jira] [Updated] (KAFKA-2508) Replace UpdateMetadata{Request,Response} with org.apache.kafka.common.requests equivalent

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2508:
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 896
[https://github.com/apache/kafka/pull/896]

> Replace UpdateMetadata{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2508
> URL: https://issues.apache.org/jira/browse/KAFKA-2508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2508) Replace UpdateMetadata{Request,Response} with org.apache.kafka.common.requests equivalent

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149725#comment-15149725
 ] 

ASF GitHub Bot commented on KAFKA-2508:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/896


> Replace UpdateMetadata{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2508
> URL: https://issues.apache.org/jira/browse/KAFKA-2508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2508: Replace UpdateMetadata{Request,Res...

2016-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/896


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: release resources on abrupt terminatio...

2016-02-16 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/925

HOTFIX: release resources on abrupt termination of stream threads

Currently the resources, such as the state dir locks, are not released when 
a stream thread is abruptly terminated. ```KafkaStreams.close()``` does not 
release them for the failed threads.
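
A minimal sketch of the general idea (an illustration of the pattern, not the
actual patch): per-thread resources have to be released in a finally block or
an uncaught-exception handler, so that an abrupt failure cannot leave, say, a
state directory lock held forever.

    // Illustrative sketch: the lock is released even though the thread dies abruptly.
    import java.util.concurrent.locks.ReentrantLock

    object AbruptTerminationSketch {
      def main(args: Array[String]): Unit = {
        val stateDirLock = new ReentrantLock() // stands in for a state directory lock

        val streamThread = new Thread(new Runnable {
          override def run(): Unit = {
            stateDirLock.lock()
            try {
              throw new RuntimeException("simulated abrupt stream thread failure")
            } finally {
              stateDirLock.unlock() // released even on the failure path
            }
          }
        }, "stream-thread-sketch")

        streamThread.start()
        streamThread.join()
        println(s"lock still held after thread death? ${stateDirLock.isLocked}") // false
      }
    }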


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka close_failed_streamthread

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/925.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #925


commit aeb858eba6cdf0e345d3a0363261ce06a5e3ce42
Author: Yasuhiro Matsuda 
Date:   2016-02-17T01:26:04Z

HOTFIX: release resources on abrupt termination of stream threads




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-02-16 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149659#comment-15149659
 ] 

Parth Brahmbhatt commented on KAFKA-1696:
-

[~gwenshap] [~harsha_ch] [~singhashish] I posted an initial KIP Draft 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka.
 I haven't yet opened a Discuss thread as I need to verify some assumptions I 
have made. 

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149619#comment-15149619
 ] 

Sriharsha Chintalapani commented on KAFKA-3199:
---

[~adam.kunicki] What happens when a thread from Kafka is trying to renew the 
same Subject and the client application does so as well?

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be to be able to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check for 
> a new config e.g. kerberos.use.existing.subject. 
> This would instead of creating a new Login in the constructor simply call 
> Subject.getSubject(AccessController.getContext()); to use the already logged 
> in Subject.
> This is also doable without introducing a new configuration and simply 
> checking if there is already a valid Subject available, but I think it may be 
> preferable to require that users explicitly request this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Doc of topic-level configuration is mis...

2016-02-16 Thread sasakitoa
Github user sasakitoa closed the pull request at:

https://github.com/apache/kafka/pull/915


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3242) "Add Partition" log message doesn't actually indicate adding a partition

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149556#comment-15149556
 ] 

ASF GitHub Bot commented on KAFKA-3242:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/924

KAFKA-3242: minor rename / logging change to Controller

KAFKA-3242: minor rename / logging change to references to 'adding 
partitions' to indicate 'modifying partitions'

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka small_changes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/924.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #924


commit e2dfcf98b70dfd3a97b30cad25b6b74abbd191b1
Author: Ben Stopford 
Date:   2016-02-16T22:30:49Z

KAFKA-3242: minor rename / logging change to references to 'adding 
partitions' to indicate 'modifying partitions'




> "Add Partition" log message doesn't actually indicate adding a partition
> 
>
> Key: KAFKA-3242
> URL: https://issues.apache.org/jira/browse/KAFKA-3242
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> We log:
> "Add Partition triggered " ... " for path "...
> on every trigger of addPartitionListener
> The listener is triggered not just when partition is added but on any 
> modification of the partition assignment in ZK. So this is a bit misleading.
> Calling the listener updatePartitionListener and logging something more 
> meaningful will be nice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3242: minor rename / logging change to C...

2016-02-16 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/924

KAFKA-3242: minor rename / logging change to Controller

KAFKA-3242: minor rename / logging change to references to 'adding 
partitions' to indicate 'modifying partitions'

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka small_changes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/924.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #924


commit e2dfcf98b70dfd3a97b30cad25b6b74abbd191b1
Author: Ben Stopford 
Date:   2016-02-16T22:30:49Z

KAFKA-3242: minor rename / logging change to references to 'adding 
partitions' to indicate 'modifying partitions'




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149513#comment-15149513
 ] 

ASF GitHub Bot commented on KAFKA-3243:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/923

KAFKA-3243: Fix Kafka basic ops documentation for Mirror maker, black…

…list is not supported for new consumers

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3243

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/923.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #923


commit cf8315c9ca600a4137e07fb95a0ffb477fe43a4b
Author: Ashish Singh 
Date:   2016-02-16T23:35:57Z

KAFKA-3243: Fix Kafka basic ops documentation for Mirror maker, blacklist 
is not supported for new consumers




> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers
> 
>
> Key: KAFKA-3243
> URL: https://issues.apache.org/jira/browse/KAFKA-3243
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-16 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3243:
--
Status: Patch Available  (was: Open)

> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers
> 
>
> Key: KAFKA-3243
> URL: https://issues.apache.org/jira/browse/KAFKA-3243
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3243: Fix Kafka basic ops documentation ...

2016-02-16 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/923

KAFKA-3243: Fix Kafka basic ops documentation for Mirror maker, black…

…list is not supported for new consumers

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3243

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/923.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #923


commit cf8315c9ca600a4137e07fb95a0ffb477fe43a4b
Author: Ashish Singh 
Date:   2016-02-16T23:35:57Z

KAFKA-3243: Fix Kafka basic ops documentation for Mirror maker, blacklist 
is not supported for new consumers




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-16 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3243:
-

 Summary: Fix Kafka basic ops documentation for Mirror maker, 
blacklist is not supported for new consumers
 Key: KAFKA-3243
 URL: https://issues.apache.org/jira/browse/KAFKA-3243
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh


Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported 
for new consumers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-16 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3007 started by Jason Gustafson.
--
> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
>
> Currently, the consumer.poll(timeout)
> returns all messages that have not been acked since the last fetch
> The only way to process a single message, is to throw away all but the first 
> message in the list
> This would mean we are required to fetch all messages into memory, and this 
> coupled with the client being not thread-safe, (i.e. we cannot use a 
> different thread to ack messages, makes it hard to consume messages when the 
> order of message arrival is important, and a large number of messages are 
> pending to be consumed)
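
A minimal sketch of how the proposed per-poll cap would be used from the new consumer once KIP-41 lands; the max.poll.records property follows the KIP, while the broker address, group id, and topic name below are illustrative only:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SingleRecordPollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "kip41-demo");                 // illustrative
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // Proposed in KIP-41: cap the number of records returned by each poll().
        // Before the KIP is implemented the property is simply ignored.
        props.put("max.poll.records", "1");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // With the cap in place, at most one record is held in memory per
                // poll, so each record can be processed and committed before the
                // next one is fetched.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitSync();
            }
        }
    }
}
{code}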



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3242) "Add Partition" log message doesn't actually indicate adding a partition

2016-02-16 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3242:
---

 Summary: "Add Partition" log message doesn't actually indicate 
adding a partition
 Key: KAFKA-3242
 URL: https://issues.apache.org/jira/browse/KAFKA-3242
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


We log:
"Add Partition triggered " ... " for path "...
on every trigger of addPartitionListener

The listener is triggered not just when a partition is added but on any 
modification of the partition assignment in ZK, so this message is a bit misleading.

Renaming the listener to updatePartitionListener and logging something more 
meaningful would be nice.
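
A small sketch of the kind of message that would be less misleading; the class, method, and logger names here are placeholders for illustration, not the actual controller code (which is Scala):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical rename sketched for illustration only.
public class PartitionAssignmentChangeListener {
    private static final Logger log =
        LoggerFactory.getLogger(PartitionAssignmentChangeListener.class);

    // Fires on any change to the partition-assignment path in ZooKeeper, not
    // only when a partition is added, so the message says exactly that.
    public void onAssignmentChange(String zkPath, String topic) {
        log.info("Partition assignment changed for topic {} (path {}); "
                + "re-reading assignment from ZooKeeper", topic, zkPath);
    }
}
{code}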



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-16 Thread Jun Rao
Bill,

Thanks for the proposal. A couple of comments.

1. It seems that this new policy should work for CreateTime as well. If a
topic is configured with CreateTime, messages may not be added in strict
order in the log. However, to build a time-based index, we will be
maintaining the largest timestamp for all messages in a log segment. We can
delete a segment if its largest timestamp is less than
log.retention.min.timestamp. This guarantees that no messages newer than
log.retention.min.timestamp will be deleted, which is probably what the
user wants.
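
A minimal sketch of the check described above, assuming each log segment tracks the largest timestamp of the messages it contains; the LogSegmentInfo type and retentionMinTimestampMs parameter are illustrative stand-ins, not the broker's actual API:

{code}
import java.util.ArrayList;
import java.util.List;

public class TimestampRetentionSketch {

    // Illustrative stand-in for a log segment; only the field the check needs.
    static class LogSegmentInfo {
        final String name;
        final long largestTimestampMs;   // max timestamp across all messages in the segment

        LogSegmentInfo(String name, long largestTimestampMs) {
            this.name = name;
            this.largestTimestampMs = largestTimestampMs;
        }
    }

    // Delete a segment only when every message in it is older than the configured
    // minimum timestamp, i.e. the segment's largest timestamp is below the cutoff.
    static List<LogSegmentInfo> segmentsToDelete(List<LogSegmentInfo> segments,
                                                 long retentionMinTimestampMs) {
        List<LogSegmentInfo> deletable = new ArrayList<>();
        for (LogSegmentInfo segment : segments) {
            if (segment.largestTimestampMs < retentionMinTimestampMs) {
                deletable.add(segment);
            }
        }
        return deletable;
    }

    public static void main(String[] args) {
        List<LogSegmentInfo> segments = new ArrayList<>();
        segments.add(new LogSegmentInfo("00000000.log", 1455000000000L));
        segments.add(new LogSegmentInfo("00001000.log", 1455600000000L));
        // Only the segment whose newest message predates the cutoff is eligible.
        System.out.println(segmentsToDelete(segments, 1455300000000L).size()); // prints 1
    }
}
{code}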

2. Right now, the user can specify "delete" as the retention policy and a
log segment will be deleted either when the size of a partition exceeds a
threshold or the timestamp of a segment is older than a relative period of
time (say 7 days) from now. What you are proposing is not a new retention
policy, but an additional check that will cause a segment to be deleted
when the timestamp of a segment is older than an absolute timestamp? If so,
could you update the wiki accordingly?

Jun



On Sat, Feb 13, 2016 at 3:23 PM, Bill Warshaw  wrote:

> Hello,
>
> That is a good catch, thanks for pointing it out.  If this KIP is accepted,
> we'd need to document this and make the log cleaner not run timestamp-based
> deletion unless message.timestamp.type=LogAppendTime.
>
> On Sat, Feb 13, 2016 at 5:38 AM, Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
>
> > This KIP is related to KIP-32, but it strikes me that it only makes sense
> > with one of the two proposed message timestamp types. If I understand
> > correctly, message timestamps are only certain to be monotonically
> > increasing in the log if message.timestamp.type=LogAppendTime.
> >
> >
> >
> > Does timestamp-based auto-expiration require use of
> > message.timestamp.type=LogAppendTime?
> >
> >
> >
> >
> > I think this KIP is a good idea, but I think it relies on strict ordering
> > of timestamps to be workable.
> >
> >
> >
> > Andrew Schofield
> >
> >
> >
> >
> > > Date: Fri, 12 Feb 2016 10:38:46 -0800
> > > Subject: Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy
> > > From: n...@confluent.io
> > > To: dev@kafka.apache.org
> > >
> > > Adding a timestamp based auto-expiration is useful and this proposal
> > makes
> > > sense. Thx!
> > >
> > > On Wed, Feb 10, 2016 at 3:35 PM, Jay Kreps  wrote:
> > >
> > >> I think this makes a lot of sense and won't be hard to implement and
> > >> doesn't create too much in the way of new interfaces.
> > >>
> > >> -Jay
> > >>
> > >> On Tue, Feb 9, 2016 at 8:13 AM, Bill Warshaw  wrote:
> > >>
> > >>> Hello,
> > >>>
> > >>> I just submitted KIP-47 for adding a new log deletion policy based
> on a
> > >>> minimum timestamp of messages to retain.
> > >>>
> > >>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-47+-+Add+timestamp-based+log+deletion+policy
> > >>>
> > >>> I'm open to any comments or suggestions.
> > >>>
> > >>> Thanks,
> > >>> Bill Warshaw
> > >>>
> > >>
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Neha
> >
>


[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/16/16 9:29 PM:
-

anyone home? :) 
This is similar to KAFKA-2048, but in my investigation I did not see the 
IllegalMonitorStateException in the logs. The underlying fetcher thread does, 
however, appear to be completely stopped; the trace substantiates that. It is not 
clearly documented, but was KAFKA-2048 fixed in 0.9?
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]


was (Author: rekhajoshm):
anyone home? :) would be great to have your inputs! thanks [~nehanarkhede] 
[~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing a consistent issue with mirroring between our data centers, 
> happening randomly. Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job everything works fine as expected.
> Eventually we hit an issue where the job stops consuming.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread
> t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awa
> i
> t(AbstractQueuedSynchronizer.java:2039)
> at
> java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350
> )
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at
> kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcher
> T
> hread.scala:49)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$m
> c
> V$sp(AbstractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThr
> e
> ad.scala:108)
> at
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked <199dc92d> (a
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> 3. Producer stops producing, in trace mode we notice it's handling 0
> events and Thread dump as follows:
> "ProducerSendThread--0" - Thread t@53
>   java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
> at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
> at sun.nio.ch.IOUtil.write(IOUtil.java:148)
> at 

[jira] [Created] (KAFKA-3241) JmxReporter produces invalid JSON when a value is Infinity

2016-02-16 Thread Babak Behzad (JIRA)
Babak Behzad created KAFKA-3241:
---

 Summary: JmxReporter produces invalid JSON when a value is Infinity
 Key: KAFKA-3241
 URL: https://issues.apache.org/jira/browse/KAFKA-3241
 Project: Kafka
  Issue Type: Bug
Reporter: Babak Behzad


We recently realized that when JmxReporter$KafkaMbean has some metrics with the 
value Infinity, the JSON created is invalid, since the values "Infinity" and 
"-Infinity" are not in double quotes. Here's an example:

{noformat}
 {
"name" : 
"kafka.producer:type=producer-node-metrics,client-id=producer-1,node-id=node-1",
"modelerType" : "org.apache.kafka.common.metrics.JmxReporter$KafkaMbean",
"request-rate" : 0.0,
"request-size-avg" : 0.0,
"incoming-byte-rate" : 0.0,
"request-size-max" : -Infinity,
"outgoing-byte-rate" : 0.0,
"request-latency-max" : -Infinity,
"request-latency-avg" : 0.0,
"response-rate" : 0.0
  }
{noformat}
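
A small sketch of the kind of fix this implies, namely emitting non-finite doubles as quoted strings so the document stays valid JSON; this is illustrative only, not the actual JmxReporter or servlet code:

{code}
public class JsonDoubleFormatter {

    // JSON has no literal for NaN or +/-Infinity, so non-finite values must be
    // emitted as strings (or null) to keep the document parseable.
    static String toJsonValue(double value) {
        if (Double.isNaN(value) || Double.isInfinite(value)) {
            return "\"" + value + "\"";          // e.g. "-Infinity"
        }
        return Double.toString(value);
    }

    public static void main(String[] args) {
        System.out.println("{ \"request-size-max\" : " + toJsonValue(Double.NEGATIVE_INFINITY)
                + ", \"request-rate\" : " + toJsonValue(0.0) + " }");
        // prints: { "request-size-max" : "-Infinity", "request-rate" : 0.0 }
    }
}
{code}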



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1042

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Reconcile differences in .bat & .sh start scripts

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (docker Ubuntu ubuntu ubuntu-us) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 3ee1878d80aa6e3198731159d4479ac393489378 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3ee1878d80aa6e3198731159d4479ac393489378
 > git rev-list f355918ec7edd6cfa1c0d74902a21d32d3065997 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson511635013363581.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
The message received from the daemon indicates that the daemon has disappeared.
Build request sent: BuildAndStop{id=5707499c-ff3d-43e6-9a37-6064e18ab6a7.1, 
currentDir=
Attempting to read last messages from the daemon log...
Daemon pid: 30371
  log file: /home/jenkins/.gradle/daemon/2.4-rc-2/daemon-30371.out.log
- Last  20 lines from daemon log file - daemon-30371.out.log -
20:39:20.985 [DEBUG] 
[org.gradle.launcher.daemon.registry.PersistentDaemonRegistry] Marking busy by 
address: [063603bb-ae4a-49be-9eed-a7af81b34576 port:56134, 
addresses:[/0:0:0:0:0:0:0:1%1, /127.0.0.1]]
20:39:20.987 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting 
to acquire exclusive lock on daemon addresses registry.
20:39:20.990 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock 
acquired.
20:39:21.004 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] 
Releasing lock on daemon addresses registry.
20:39:21.005 [DEBUG] [org.gradle.launcher.daemon.server.DaemonStateCoordinator] 
updating lastActivityAt to 1455655161005
20:39:21.007 [DEBUG] [org.gradle.launcher.daemon.server.DaemonStateCoordinator] 
Daemon is busy, sleeping until state changes.
20:39:21.008 [INFO] 
[org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy] Daemon is 
about to start building BuildAndStop{id=5707499c-ff3d-43e6-9a37-6064e18ab6a7.1, 
currentDir= Dispatching 
build started information...
20:39:21.011 [DEBUG] 
[org.gradle.launcher.daemon.server.SynchronizedDispatchConnection] thread 15: 
dispatching class org.gradle.launcher.daemon.protocol.BuildStarted
20:39:21.021 [DEBUG] 
[org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment] Configuring 
env variables: {JENKINS_HOME=/x1/jenkins/jenkins-home, 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51, 
ROOT_BUILD_CAUSE=SCMTRIGGER, BUILD_CAUSE_SCMTRIGGER=true, 
SSH_CLIENT=140.211.11.14 50416 22, HUDSON_URL=https://builds.apache.org/, 
MAIL=/var/mail/jenkins, GIT_COMMIT=3ee1878d80aa6e3198731159d4479ac393489378, 
JOB_URL=https://builds.apache.org/job/kafka-trunk-jdk7/, 
PWD= NODE_NAME=ubuntu-us1, 
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat, BUILD_NUMBER=1042, EXECUTOR_NUMBER=0, 
GIT_PREVIOUS_SUCCESSFUL_COMMIT=330274ed1c8efd2b1aa9907860429d9d20f72c3c, 
GIT_COMMITTER_EMAIL=bui...@apache.org, 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51, 

[GitHub] kafka pull request: MINOR: Reconcile differences in .bat & .sh sta...

2016-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/908


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-16 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy closed KAFKA-3088.
-

Thanks for the patches - this has been pushed to trunk and 0.9

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
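
A hedged sketch of the kind of null/empty guard that avoids the NullPointerException above when the client ID is missing; the method and fallback value are illustrative, not the patch that was actually merged:

{code}
public class MBeanNameSketch {

    // Fall back to a placeholder when the request carries no client ID, so
    // metric registration never dereferences a null tag value.
    static String mbeanName(String prefix, String clientId) {
        String safeClientId = (clientId == null || clientId.isEmpty()) ? "unknown" : clientId;
        return prefix + ":type=producer-metrics,client-id=" + safeClientId;
    }

    public static void main(String[] args) {
        System.out.println(mbeanName("kafka.producer", null));
        // prints: kafka.producer:type=producer-metrics,client-id=unknown
    }
}
{code}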



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3088:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149175#comment-15149175
 ] 

ASF GitHub Bot commented on KAFKA-3088:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/910


> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3088: (0.9 branch) broker crash on recei...

2016-02-16 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/910


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi commented on KAFKA-3238:


anyone home? :) would be great to have your inputs! thanks [~nehanarkhede] 
[~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing a consistent issue with mirroring between our data centers, 
> happening randomly. Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job everything works fine as expected.
> Eventually we hit an issue where the job stops consuming.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread
> t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awa
> i
> t(AbstractQueuedSynchronizer.java:2039)
> at
> java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350
> )
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at
> kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcher
> T
> hread.scala:49)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfu
> n
> $apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$m
> c
> V$sp(AbstractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(A
> b
> stractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThr
> e
> ad.scala:108)
> at
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked <199dc92d> (a
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> 3. Producer stops producing, in trace mode we notice it's handling 0
> events and Thread dump as follows:
> "ProducerSendThread--0" - Thread t@53
>   java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
> at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
> at sun.nio.ch.IOUtil.write(IOUtil.java:148)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
> - locked <5ae2fc40> (a java.lang.Object)
> at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
> at
> kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:5
> 6
> )
> at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
> at
> kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend
> .
> scala:26)
> at 

Build failed in Jenkins: kafka-trunk-jdk8 #369

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3227; Conservative update of Kafka dependencies

--
[...truncated 4162 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED


Build failed in Jenkins: kafka-trunk-jdk7 #1041

2016-02-16 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3227; Conservative update of Kafka dependencies

--
[...truncated 4850 lines...]

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 52 mins 29.623 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 

[jira] [Updated] (KAFKA-3176) Allow console consumer to consume from particular partitions when new consumer is used.

2016-02-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3176:
---
Status: Patch Available  (was: Open)

> Allow console consumer to consume from particular partitions when new 
> consumer is used.
> ---
>
> Key: KAFKA-3176
> URL: https://issues.apache.org/jira/browse/KAFKA-3176
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> Previously we had the simple consumer shell, which can consume from a particular 
> partition. Moving forward we will deprecate the simple consumer, so it would be 
> useful to allow the console consumer to consume from a particular partition when 
> the new consumer is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-46: Self Healing

2016-02-16 Thread Aditya Auradkar
Thanks Neha. I'll discard this for now. We can pick it up once replica
throttling and the default policies are available and tested.

On Thu, Feb 11, 2016 at 5:45 PM, Neha Narkhede  wrote:

> >
> > 1. Replica Throttling - I agree this is rather important to get done.
> > However, it may also be argued that this problem is orthogonal. We do not
> > have these protections currently yet we do run partition reassignment
> > fairly often. Having said that, I'm perfectly happy to tackle KIP-46
> after
> > this problem is solved. I understand it is actively being discussed in
> > KAFKA-1464.
>
>
> I think we are saying the same thing here. Replica throttling is required
> to be able to pull off any partition reassignment action. It removes the
> guesswork that comes from picking a batch size that is expressed in terms
> of partition count, which is an annoying hack.
>
>
> > 2. Pluggable policies - Can you elaborate on the need for pluggable
> > policies in the partition reassignment tool? Even if we make it pluggable
> > to begin with, this needs to ship with a default policy that makes sense
> > for most users. IMO, partition count is the most intuitive default and is
> > analogous to how we stripe partitions for new topics.
>
>
> Agree about the default. I was arguing for making it pluggable so we make
> it easy to test multiple policies. For instance, partition count is a
> decent one but I can imagine how one would want a policy that optimizes for
> balancing data sizes for instance.
>
>
> > 3. Even if the trigger were fully manual (as it is now), we could still
> > have the controller generate the assignment as per a configured policy
> i.e.
> > effectively the tool is built into Kafka itself. Following this approach
> to
> > begin with makes it easier to fully automate in the future since we will
> > only need to automate the trigger later.
>
>
> I would be much more comfortable adding the capability to move large
> amounts of data to the controller, after we are very sure that the default
> policy is well tested and the replica throttling works. If so, then it is
> just a matter of placing the trigger in the controller vs in the tool. But
> I'm skeptical of adding more things to the already messy controller,
> especially without being sure about how well it works.
>
> Thanks,
> Neha
>
> On Tue, Feb 9, 2016 at 12:53 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
> > Hi Neha,
> >
> > Thanks for the detailed reply and apologies for my late response. I do
> have
> > a few comments.
> >
> > 1. Replica Throttling - I agree this is rather important to get done.
> > However, it may also be argued that this problem is orthogonal. We do not
> > have these protections currently yet we do run partition reassignment
> > fairly often. Having said that, I'm perfectly happy to tackle KIP-46
> after
> > this problem is solved. I understand it is actively being discussed in
> > KAFKA-1464.
> >
> > 2. Pluggable policies - Can you elaborate on the need for pluggable
> > policies in the partition reassignment tool? Even if we make it pluggable
> > to begin with, this needs to ship with a default policy that makes sense
> > for most users. IMO, partition count is the most intuitive default and is
> > analogous to how we stripe partitions for new topics.
> >
> > 3. Even if the trigger were fully manual (as it is now), we could still
> > have the controller generate the assignment as per a configured policy
> i.e.
> > effectively the tool is built into Kafka itself. Following this approach
> to
> > begin with makes it easier to fully automate in the future since we will
> > only need to automate the trigger later.
> >
> > Aditya
> >
> >
> >
> > On Wed, Feb 3, 2016 at 1:57 PM, Neha Narkhede  wrote:
> >
> > > Adi,
> > >
> > > Thanks for the write-up. Here are my thoughts:
> > >
> > > I think you are suggesting a way of automating resurrecting a topic’s
> > > replication factor in the presence of a specific scenario: in the event
> > of
> > > permanent broker failures. I agree that the partition reassignment
> > > mechanism should be used to add replicas when they are lost to
> permanent
> > > broker failures. But I think the KIP probably chews off more than we
> can
> > > digest.
> > >
> > > Before we automate detection of permanent broker failures and have the
> > > controller mitigate through automatic data balancing, I’d like to point
> > out
> > > that our current difficulty is not that but the ability to generate a
> > > workable partition assignment for rebalancing data in a cluster.
> > >
> > > There are 2 problems with partition rebalancing today:
> > >
> > >1. Lack of replica throttling for balancing data: In the absence of
> > >replica throttling, even if you come up with an assignment that
> might
> > be
> > >workable, it isn’t practical to kick it off without worrying about
> > > bringing
> > >the entire cluster down. I don’t think the hack of 

[jira] [Commented] (KAFKA-3176) Allow console consumer to consume from particular partitions when new consumer is used.

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148787#comment-15148787
 ] 

ASF GitHub Bot commented on KAFKA-3176:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/922


> Allow console consumer to consume from particular partitions when new 
> consumer is used.
> ---
>
> Key: KAFKA-3176
> URL: https://issues.apache.org/jira/browse/KAFKA-3176
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> Previously we had the simple consumer shell, which can consume from a particular 
> partition. Moving forward we will deprecate the simple consumer, so it would be 
> useful to allow the console consumer to consume from a particular partition when 
> the new consumer is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3176: Add partition/offset options to bo...

2016-02-16 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/922


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3227) Conservative update of Kafka dependencies

2016-02-16 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3227.
--
Resolution: Fixed

Issue resolved by pull request 903
[https://github.com/apache/kafka/pull/903]

> Conservative update of Kafka dependencies
> -
>
> Key: KAFKA-3227
> URL: https://issues.apache.org/jira/browse/KAFKA-3227
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> Patch version bumps for bouncy castle, minikdc, snappy, slf4j, scalatest and 
> powermock. Notable fixes:
> * Snappy: fixes a resource leak
> * Bouncy castle: security fixes
> Also update Gradle to 2.11 where the notable change is improved IDE 
> integration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3227) Conservative update of Kafka dependencies

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148790#comment-15148790
 ] 

ASF GitHub Bot commented on KAFKA-3227:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/903


> Conservative update of Kafka dependencies
> -
>
> Key: KAFKA-3227
> URL: https://issues.apache.org/jira/browse/KAFKA-3227
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> Patch version bumps for bouncy castle, minikdc, snappy, slf4j, scalatest and 
> powermock. Notable fixes:
> * Snappy: fixes a resource leak
> * Bouncy castle: security fixes
> Also update Gradle to 2.11 where the notable change is improved IDE 
> integration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3227; Conservative update of Kafka depen...

2016-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/903


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3176) Allow console consumer to consume from particular partitions when new consumer is used.

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148788#comment-15148788
 ] 

ASF GitHub Bot commented on KAFKA-3176:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/922

KAFKA-3176: Add partition/offset options to both old and new console 
consumers

With this pull request both old and new console consumers can be provided 
with optional --partition and --offset arguments so messages from a particular 
partition and starting from a particular offset are only consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.
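
A sketch of the argument rules above written as plain checks; the option names match the pull request description, but the validation code itself is illustrative and not the actual ConsoleConsumer option parser:

{code}
public class ConsoleConsumerArgCheckSketch {

    // Mirrors the combination rules listed in the pull request description.
    static void validate(String topic, Integer partition, Long offset,
                         String bootstrapServer, String zookeeper, boolean fromBeginning) {
        if (partition != null && topic == null) {
            throw new IllegalArgumentException("--partition requires --topic");
        }
        if (partition != null && (bootstrapServer == null || zookeeper != null)) {
            throw new IllegalArgumentException(
                "--partition requires --bootstrap-server and must not be used with --zookeeper");
        }
        if (offset != null && partition == null) {
            throw new IllegalArgumentException("--offset requires --partition");
        }
        if (offset != null && fromBeginning) {
            throw new IllegalArgumentException("--offset and --from-beginning cannot be used together");
        }
    }

    public static void main(String[] args) {
        // Valid: new consumer with an explicit partition and offset.
        validate("my-topic", 0, 42L, "localhost:9092", null, false);
        // Invalid combinations, e.g. --offset without --partition, would throw.
    }
}
{code}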

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3176

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/922.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #922


commit 5164a0d7ae344bb6cded0975849f6d36f9596982
Author: Vahid Hashemian 
Date:   2016-02-16T13:24:38Z

Add partition/offset options to both old and new console consumers

With this change both old and new console consumers can be provided with 
optional --partition and --offset arguments so messages from a particular 
partition and starting from a particular offset are only consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.




> Allow console consumer to consume from particular partitions when new 
> consumer is used.
> ---
>
> Key: KAFKA-3176
> URL: https://issues.apache.org/jira/browse/KAFKA-3176
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> Previously we had the simple consumer shell, which can consume from a particular 
> partition. Moving forward we will deprecate the simple consumer, so it would be 
> useful to allow the console consumer to consume from a particular partition when 
> the new consumer is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3176: Add partition/offset options to bo...

2016-02-16 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/922

KAFKA-3176: Add partition/offset options to both old and new console 
consumers

With this pull request both old and new console consumers can be provided 
with optional --partition and --offset arguments so messages from a particular 
partition and starting from a particular offset are only consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3176

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/922.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #922


commit 5164a0d7ae344bb6cded0975849f6d36f9596982
Author: Vahid Hashemian 
Date:   2016-02-16T13:24:38Z

Add partition/offset options to both old and new console consumers

With this change both old and new console consumers can be provided with 
optional --partition and --offset arguments so messages from a particular 
partition and starting from a particular offset are only consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2273) Add rebalance with a minimal number of reassignments to server-defined strategy list

2016-02-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-2273:
--

Assignee: Vahid Hashemian

> Add rebalance with a minimal number of reassignments to server-defined 
> strategy list
> 
>
> Key: KAFKA-2273
> URL: https://issues.apache.org/jira/browse/KAFKA-2273
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Olof Johansson
>Assignee: Vahid Hashemian
>  Labels: newbie++, newbiee
> Fix For: 0.10.0.0
>
>
> Add a new partitions.assignment.strategy to the server-defined list that will 
> do reassignments based on moving as few partitions as possible. This should 
> be a quite common reassignment strategy especially for the cases where the 
> consumer has to maintain state, either in memory, or on disk.
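
A sketch of how such a strategy would be selected on the consumer side, assuming it ships as an assignor class configured through partition.assignment.strategy; the RoundRobinAssignor shown is an existing assignor, while the minimal-movement assignor itself is hypothetical here:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AssignmentStrategyConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // illustrative
        props.put("group.id", "stateful-consumers");           // illustrative
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Consumers pick an assignor by class name. A minimal-movement ("sticky")
        // assignor, as proposed in KAFKA-2273, would be selected the same way once
        // it exists; RoundRobinAssignor is shown only because it already ships.
        props.put("partition.assignment.strategy",
                  "org.apache.kafka.clients.consumer.RoundRobinAssignor");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("stateful-topic"));
        consumer.close();
    }
}
{code}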



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3176) Allow console consumer to consume from particular partitions when new consumer is used.

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148654#comment-15148654
 ] 

ASF GitHub Bot commented on KAFKA-3176:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/922

KAFKA-3176: Add partition/offset options to both old and new console 
consumers

With this pull request both old and new console consumers can be provided 
with optional --partition and --offset arguments so messages from a particular 
partition and starting from a particular offset are only consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3176

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/922.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #922


commit 5164a0d7ae344bb6cded0975849f6d36f9596982
Author: Vahid Hashemian 
Date:   2016-02-16T13:24:38Z

Add partition/offset options to both old and new console consumers

With this change both old and new console consumers can be provided with 
optional --partition and --offset arguments so that only messages from a 
particular partition, starting from a particular offset, are consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.




> Allow console consumer to consume from particular partitions when new 
> consumer is used.
> ---
>
> Key: KAFKA-3176
> URL: https://issues.apache.org/jira/browse/KAFKA-3176
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> Previously we had the simple consumer shell, which can consume from a particular 
> partition. Moving forward we will deprecate the simple consumer, so it would be 
> useful to allow the console consumer to consume from a particular partition when 
> the new consumer is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3176: Add partition/offset options to bo...

2016-02-16 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/922

KAFKA-3176: Add partition/offset options to both old and new console 
consumers

With this pull request both old and new console consumers can be provided 
with optional --partition and --offset arguments so that only messages from a 
particular partition, starting from a particular offset, are consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3176

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/922.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #922


commit 5164a0d7ae344bb6cded0975849f6d36f9596982
Author: Vahid Hashemian 
Date:   2016-02-16T13:24:38Z

Add partition/offset options to both old and new console consumers

With this change both old and new console consumers can be provided with 
optional --partition and --offset arguments so that only messages from a 
particular partition, starting from a particular offset, are consumed.

The following rules are also implemented to avoid invalid combinations of 
arguments:
- If --partition is provided --topic has to be provided too.
- If --partition is provided --bootstrap-server (and not --zookeeper) 
should be provided too.
- If --offset is provided --partition has to be provided too.
- --offset and --from-beginning cannot be used at the same time.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3240) Replication issues on FreeBSD

2016-02-16 Thread Jan Omar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148529#comment-15148529
 ] 

Jan Omar commented on KAFKA-3240:
-

Using UFS instead of ZFS works, but we'd really like to use ZFS. Any idea what 
might be causing this? The underlying filesystem shouldn't really matter, I think.

> Replication issues on FreeBSD
> -
>
> Key: KAFKA-3240
> URL: https://issues.apache.org/jira/browse/KAFKA-3240
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.8.2.2, 0.9.0.1
> Environment: FreeBSD 10.2-RELEASE-p9
>Reporter: Jan Omar
>
> Hi,
> We are trying to replace our 3-broker cluster running on 0.6 with a new 
> cluster on 0.9.0.1 (but tried 0.8.2.2 and 0.9.0.0 as well).
> - 3 kafka nodes with one zookeeper instance on each machine
> - FreeBSD 10.2 p9
> - Nagle off (sysctl net.inet.tcp.delayed_ack=0)
> - all kafka machines write a ZFS ZIL to a dedicated SSD
> - 3 producers on 3 machines, writing to 1 topic with 3 partitions and 
> replication factor 3
> - acks all
> - 10 Gigabit Ethernet, all machines on one switch, ping 0.05 ms worst case.
> While using the ProducerPerformance or rdkafka_performance we are seeing very 
> strange Replication errors. Any hint on what's going on would be highly 
> appreciated. Any suggestion on how to debug this properly would help as well.
> This is what our broker config looks like:
> {code}
> broker.id=5
> auto.create.topics.enable=false
> delete.topic.enable=true
> listeners=PLAINTEXT://:9092
> port=9092
> host.name=kafka-five.acc
> advertised.host.name=10.5.3.18
> zookeeper.connect=zookeeper-four.acc:2181,zookeeper-five.acc:2181,zookeeper-six.acc:2181
> zookeeper.connection.timeout.ms=6000
> num.replica.fetchers=1
> replica.fetch.max.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.checkpoint.interval.ms=5000
> replica.socket.timeout.ms=30
> replica.socket.receive.buffer.bytes=65536
> replica.lag.time.max.ms=1000
> min.insync.replicas=2
> controller.socket.timeout.ms=3
> controller.message.queue.size=100
> log.dirs=/var/db/kafka
> num.partitions=8
> message.max.bytes=1
> auto.create.topics.enable=false
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.hours=168
> log.flush.interval.ms=1
> log.flush.interval.messages=2
> log.flush.scheduler.interval.ms=2000
> log.roll.hours=168
> log.retention.check.interval.ms=30
> log.segment.bytes=536870912
> zookeeper.connection.timeout.ms=100
> zookeeper.sync.time.ms=5000
> num.io.threads=8
> num.network.threads=4
> socket.request.max.bytes=104857600
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> queued.max.requests=10
> fetch.purgatory.purge.interval.requests=100
> producer.purgatory.purge.interval.requests=100
> replica.lag.max.messages=1000
> {code}
> These are the errors we're seeing:
> {code:borderStyle=solid}
> ERROR [Replica Manager on Broker 5]: Error processing fetch operation on 
> partition [test,0] offset 50727 (kafka.server.ReplicaManager)
> java.lang.IllegalStateException: Invalid message size: 0
>   at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:141)
>   at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
>   at kafka.log.LogSegment.read(LogSegment.scala:126)
>   at kafka.log.Log.read(Log.scala:506)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
>   at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> and 
> {code}
> ERROR Found invalid messages during fetch for partition [test,0] offset 2732 
> error Message found with corrupt size (0) in shallow iterator 
> (kafka.server.ReplicaFetcherThread)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3239:
--
   Resolution: Cannot Reproduce
Fix Version/s: (was: 0.9.1.0)
   Status: Resolved  (was: Patch Available)

The topic corresponding to the exception was stuck in marked-for-delete mode 
for over a month at the time this exception was thrown.  We have manually 
deleted the topic and unfortunately I don't remember what state it was in 
before the delete. Since there have been fixes for topic delete since this 
topic was marked for delete, I am closing this defect for now. It looks like 
the problem may have been with {{replicas}} being empty, which could also throw 
the same exception. Hopefully, with the fixes for topic delete, we shouldn't 
get into the same state again.

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3240) Replication issues on FreeBSD

2016-02-16 Thread Jan Omar (JIRA)
Jan Omar created KAFKA-3240:
---

 Summary: Replication issues on FreeBSD
 Key: KAFKA-3240
 URL: https://issues.apache.org/jira/browse/KAFKA-3240
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0, 0.8.2.2, 0.9.0.1
 Environment: FreeBSD 10.2-RELEASE-p9
Reporter: Jan Omar


Hi,

We are trying to replace our 3-broker cluster running on 0.6 with a new cluster 
on 0.9.0.1 (but tried 0.8.2.2 and 0.9.0.0 as well).

- 3 kafka nodes with one zookeeper instance on each machine
- FreeBSD 10.2 p9
- Nagle off (sysctl net.inet.tcp.delayed_ack=0)
- all kafka machines write a ZFS ZIL to a dedicated SSD
- 3 producers on 3 machines, writing to 1 topic with 3 partitions and replication 
factor 3
- acks all
- 10 Gigabit Ethernet, all machines on one switch, ping 0.05 ms worst case.

While using the ProducerPerformance or rdkafka_performance we are seeing very 
strange Replication errors. Any hint on what's going on would be highly 
appreciated. Any suggestion on how to debug this properly would help as well.

This is what our broker config looks like:
{code}
broker.id=5
auto.create.topics.enable=false
delete.topic.enable=true
listeners=PLAINTEXT://:9092
port=9092
host.name=kafka-five.acc
advertised.host.name=10.5.3.18
zookeeper.connect=zookeeper-four.acc:2181,zookeeper-five.acc:2181,zookeeper-six.acc:2181
zookeeper.connection.timeout.ms=6000
num.replica.fetchers=1
replica.fetch.max.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=1000
min.insync.replicas=2
controller.socket.timeout.ms=3
controller.message.queue.size=100
log.dirs=/var/db/kafka
num.partitions=8
message.max.bytes=1
auto.create.topics.enable=false
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=168
log.flush.interval.ms=1
log.flush.interval.messages=2
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=30
log.segment.bytes=536870912
zookeeper.connection.timeout.ms=100
zookeeper.sync.time.ms=5000
num.io.threads=8
num.network.threads=4
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
queued.max.requests=10
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
replica.lag.max.messages=1000
{code}

These are the errors we're seeing:
{code:borderStyle=solid}
ERROR [Replica Manager on Broker 5]: Error processing fetch operation on 
partition [test,0] offset 50727 (kafka.server.ReplicaManager)
java.lang.IllegalStateException: Invalid message size: 0
at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:141)
at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
at kafka.log.LogSegment.read(LogSegment.scala:126)
at kafka.log.Log.read(Log.scala:506)
at 
kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
at 
kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at 
scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
at 
scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)

{code}

and 

{code}
ERROR Found invalid messages during fetch for partition [test,0] offset 2732 
error Message found with corrupt size (0) in shallow iterator 
(kafka.server.ReplicaFetcherThread)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148357#comment-15148357
 ] 

Rajini Sivaram commented on KAFKA-3239:
---

[~ijuma] Yes, you are absolutely right. I will close the PR. I am not sure why 
that exception was thrown though. Will take a look at the calling code.

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3239: Synchronize removeTopic using cont...

2016-02-16 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/921


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148358#comment-15148358
 ] 

ASF GitHub Bot commented on KAFKA-3239:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/921


> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148291#comment-15148291
 ] 

Ismael Juma commented on KAFKA-3239:


The methods in `ControllerContext` are not thread-safe in general.

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148288#comment-15148288
 ] 

Ismael Juma commented on KAFKA-3239:


[~rsivaram], the only usage of `removeTopic` in the Kafka codebase is done with 
`controllerContext.controllerLock` acquired. The stack trace you pasted is from 
StatsDReporter, which is perhaps not acquiring the lock?

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3239:
---
Fix Version/s: 0.9.1.0

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3239:
--
Status: Patch Available  (was: Open)

> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15148268#comment-15148268
 ] 

ASF GitHub Bot commented on KAFKA-3239:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/921

KAFKA-3239: Synchronize removeTopic using controllerLock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3239

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/921.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #921


commit c69a8ccdbac2ff2e31154a6196a61b1ecfc41ca8
Author: Rajini Sivaram 
Date:   2016-02-16T08:25:52Z

KAFKA-3239: Synchronize removeTopic using controllerLock




> Timing issue in controller metrics on topic delete
> --
>
> Key: KAFKA-3239
> URL: https://issues.apache.org/jira/browse/KAFKA-3239
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Noticed this exception in our logs:
> {quote}
> java.util.NoSuchElementException: key not found: [sometopic,0]
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:59)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at 
> kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
> at 
> kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
> at 
> com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
> at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
> at 
> com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
> at 
> com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
> at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
> {quote}
> The exception indicates that the topic was in 
> {{controllerContext.partitionReplicaAssignment}} but not in 
> {{controllerContext.partitionLeadershipInfo}}. This can occur during 
> {{KafkaController.removeTopic()}} since it is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3239: Synchronize removeTopic using cont...

2016-02-16 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/921

KAFKA-3239: Synchronize removeTopic using controllerLock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3239

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/921.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #921


commit c69a8ccdbac2ff2e31154a6196a61b1ecfc41ca8
Author: Rajini Sivaram 
Date:   2016-02-16T08:25:52Z

KAFKA-3239: Synchronize removeTopic using controllerLock




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3239) Timing issue in controller metrics on topic delete

2016-02-16 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-3239:
-

 Summary: Timing issue in controller metrics on topic delete
 Key: KAFKA-3239
 URL: https://issues.apache.org/jira/browse/KAFKA-3239
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.9.0.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


Noticed this exception in our logs:
{quote}
java.util.NoSuchElementException: key not found: [sometopic,0]
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
at 
kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:209)
at 
kafka.controller.KafkaController$$anon$3$$anonfun$value$2$$anonfun$apply$mcI$sp$2.apply(KafkaController.scala:208)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.AbstractTraversable.count(Traversable.scala:104)
at 
kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply$mcI$sp(KafkaController.scala:208)
at 
kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
at 
kafka.controller.KafkaController$$anon$3$$anonfun$value$2.apply(KafkaController.scala:205)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at 
kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:204)
at 
kafka.controller.KafkaController$$anon$3.value(KafkaController.scala:202)
at 
com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:163)
at 
com.airbnb.metrics.StatsDReporter.processGauge(StatsDReporter.java:37)
at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
at 
com.airbnb.metrics.StatsDReporter.sendAMetric(StatsDReporter.java:131)
at 
com.airbnb.metrics.StatsDReporter.sendAllKafkaMetrics(StatsDReporter.java:119)
at com.airbnb.metrics.StatsDReporter.run(StatsDReporter.java:85)
{quote}

The exception indicates that the topic was in 
{{controllerContext.partitionReplicaAssignment}} but not in 
{{controllerContext.partitionLeadershipInfo}}. This can occur during 
{{KafkaController.removeTopic()}} since it is not synchronized.
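A self-contained, hedged sketch of the fix direction taken in the PR above 
("Synchronize removeTopic using controllerLock"), not the actual patch: the metrics 
reader and the topic removal take the same lock, so a gauge can never observe a topic 
still present in partitionReplicaAssignment but already gone from 
partitionLeadershipInfo. Field and method names mirror the ones mentioned in this 
thread; the real controller code differs in detail.

{code}
import java.util.concurrent.locks.ReentrantLock
import scala.collection.mutable

object ControllerContextSketch {
  val controllerLock = new ReentrantLock()
  // Keyed by (topic, partition) for simplicity in this sketch.
  val partitionReplicaAssignment = mutable.Map.empty[(String, Int), Seq[Int]]
  val partitionLeadershipInfo = mutable.Map.empty[(String, Int), Int]

  private def inLock[T](lock: ReentrantLock)(fun: => T): T = {
    lock.lock()
    try fun finally lock.unlock()
  }

  // The fix: remove the topic from both maps atomically with respect to lock holders.
  def removeTopic(topic: String): Unit = inLock(controllerLock) {
    val affected = partitionReplicaAssignment.keys.filter(_._1 == topic).toSeq
    affected.foreach { tp =>
      partitionReplicaAssignment.remove(tp)
      partitionLeadershipInfo.remove(tp)
    }
  }

  // A gauge-like reader, similar to the lookup that threw NoSuchElementException above:
  // under the same lock, the two maps are always consistent with each other.
  def offlinePartitionCount: Int = inLock(controllerLock) {
    partitionReplicaAssignment.keys.count(tp => !partitionLeadershipInfo.contains(tp))
  }
}
{code}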




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)