[jira] [Commented] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243745#comment-14243745
 ] 

Gwen Shapira commented on KAFKA-1812:
-

[~joestein] - this patch & tests look good to me. 
If you can take a look and commit if it looks good to you as well (LGTY2?), I'd 
appreciate it.

>  Allow IpV6 in configuration with parseCsvMap
> -
>
> Key: KAFKA-1812
> URL: https://issues.apache.org/jira/browse/KAFKA-1812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1812_2014-12-10_21:38:59.patch
>
>
> The current implementation of parseCsvMap in Utils expects k:v,k:v. This 
> modifies that function to accept a string with multiple ":" characters by 
> splitting on the last occurrence per pair. 
> This limitation is noted in the Review Board comments for KAFKA-1512.
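The split-on-last-colon behavior the description refers to can be sketched as follows. This is a standalone Java illustration of the idea, not the actual kafka.utils.Utils.parseCsvMap code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvMapParser {
    // Split each comma-separated pair on the LAST ':' so IPv6 literals
    // such as "::1" keep their internal colons in the key.
    public static Map<String, String> parseCsvMap(String csv) {
        Map<String, String> map = new LinkedHashMap<>();
        if (csv == null || csv.trim().isEmpty()) return map;
        for (String pair : csv.split(",")) {
            int i = pair.lastIndexOf(':'); // last occurrence, per this patch
            map.put(pair.substring(0, i).trim(), pair.substring(i + 1).trim());
        }
        return map;
    }
}
```

For example, "::1:100,127.0.0.1:50" maps the IPv6 loopback "::1" to "100", which a naive split on the first ':' would garble.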



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2014-12-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243741#comment-14243741
 ] 

Gwen Shapira commented on KAFKA-1811:
-

A few notes that may help:

1. We like reviewing with review board. Here's a friendly instruction page on 
how to add the patch to JIRA and RB at once: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

2. We also like unit tests :)

3. I'd consider pushing this check down to registerBrokerInZk. It seems like a 
natural place to ensure uniqueness of registered brokers before registering. 

4. Another thing to consider is race conditions - what if a new broker registers 
while we are checking?
Perhaps we can even use ZK itself to enforce uniqueness?

(Note: I'm not a committer, so these suggestions are not binding - just ideas 
for improvement.)
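The race in point 4 - a broker registering between the check and the write - goes away if the registration itself is atomic, which is the guarantee creating a ZooKeeper znode gives you (the create fails if the node already exists). A local simulation of the two approaches, using a map in place of ZK (an illustration only, not the actual Kafka registration code):

```java
import java.util.concurrent.ConcurrentHashMap;

public class BrokerRegistry {
    private final ConcurrentHashMap<String, Integer> byHostPort = new ConcurrentHashMap<>();

    // Racy: another broker can register between containsKey() and put().
    public boolean checkThenRegister(String hostPort, int brokerId) {
        if (byHostPort.containsKey(hostPort)) return false;
        byHostPort.put(hostPort, brokerId); // duplicate can slip in here
        return true;
    }

    // Atomic: only one caller can ever win a given host:port - the same
    // guarantee a ZooKeeper znode create for the broker would provide.
    public boolean atomicRegister(String hostPort, int brokerId) {
        return byHostPort.putIfAbsent(hostPort, brokerId) == null;
    }
}
```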



> ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>  Labels: newbie
> Attachments: KAFKA_1811.patch
>
>
> Currently, we expect each registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.





Re: Review Request 28793: Patch for KAFKA-1784

2014-12-11 Thread Neha Narkhede


> On Dec. 10, 2014, 12:37 a.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/tools/OffsetClient.scala, line 142
> > 
> >
> > This and a lot of the rest of the code exists in ClientUtils. Until the 
> > refactoring is complete, your admin tool can just utilize the existing 
> > APIs to expose commit/fetch through the tool. This class may be 
> > unnecessary.
> 
> Mayuresh Gharat wrote:
> I can pull out the code and use ClientUtils for now, and once we get 
> the other patch with the refactored code we can use this. Does that sound ok? I 
> will upload a new patch accordingly.
> 
> Mayuresh Gharat wrote:
> So do we remove this class completely and use it only when the patch for 
> KAFKA-1013 has been reviewed and is ready to be checked in?

Yes. I'm wondering if you can write the admin tool using the current APIs as 
is. It will be most convenient to refactor this correctly in one go (as part of 
KAFKA-1013).


- Neha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28793/#review64477
---


On Dec. 7, 2014, 7:43 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28793/
> ---
> 
> (Updated Dec. 7, 2014, 7:43 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1784
> https://issues.apache.org/jira/browse/KAFKA-1784
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Offset Client for fetching and committing offsets to Kafka
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
>   core/src/main/scala/kafka/tools/OffsetClientConfig.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/28793/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-12-11 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243646#comment-14243646
 ] 

Jiangjie Qin commented on KAFKA-1650:
-

[~joestein] It was my bad. In the ConsumerRebalanceListenerTest in 
ZookeeperConsumerConnectorTest, I forgot to shut down the ZookeeperConsumerConnector, 
which causes the later tests to fail. This problem is fixed by KAFKA-1815. 
Could you help check that in?

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch, 
> KAFKA-1650_2014-11-20_12:00:16.patch, KAFKA-1650_2014-11-24_08:15:17.patch, 
> KAFKA-1650_2014-12-03_15:02:31.patch, KAFKA-1650_2014-12-03_19:02:13.patch, 
> KAFKA-1650_2014-12-04_11:59:07.patch, KAFKA-1650_2014-12-06_18:58:57.patch, 
> KAFKA-1650_2014-12-08_01:36:01.patch
>
>
> Currently, if the mirror maker is shut down uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue could be solved.
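One way a producer callback can prevent loss, sketched as a standalone offset tracker (an illustration of the idea, not the mirror maker's actual implementation): only commit consumer offsets up to the lowest offset still awaiting a producer ack, so an unclean shutdown re-sends rather than drops messages.

```java
import java.util.concurrent.ConcurrentSkipListSet;

public class UnackedOffsetTracker {
    private final ConcurrentSkipListSet<Long> inFlight = new ConcurrentSkipListSet<>();

    public void sent(long offset)  { inFlight.add(offset); }     // before producer.send
    public void acked(long offset) { inFlight.remove(offset); }  // from the send callback

    // Highest offset safe to commit: everything below the lowest
    // still-in-flight offset has been acked by the target cluster.
    public long safeCommitOffset(long lastSentOffset) {
        return inFlight.isEmpty() ? lastSentOffset : inFlight.first() - 1;
    }
}
```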





Re: Review Request 28793: Patch for KAFKA-1784

2014-12-11 Thread Mayuresh Gharat


> On Dec. 10, 2014, 12:37 a.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/tools/OffsetClient.scala, line 142
> > 
> >
> > This and a lot of the rest of the code exists in ClientUtils. Until the 
> > refactoring is complete, your admin tool can just utilize the existing 
> > APIs to expose commit/fetch through the tool. This class may be 
> > unnecessary.
> 
> Mayuresh Gharat wrote:
> I can pull out the code and use ClientUtils for now, and once we get 
> the other patch with the refactored code we can use this. Does that sound ok? I 
> will upload a new patch accordingly.

So do we remove this class completely and use it only when the patch for KAFKA-1013 
has been reviewed and is ready to be checked in?


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28793/#review64477
---


On Dec. 7, 2014, 7:43 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28793/
> ---
> 
> (Updated Dec. 7, 2014, 7:43 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1784
> https://issues.apache.org/jira/browse/KAFKA-1784
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Offset Client for fetching and committing offsets to Kafka
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
>   core/src/main/scala/kafka/tools/OffsetClientConfig.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/28793/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Updated] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2014-12-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-742:

Fix Version/s: 0.8.3

> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish Kumar Singh
> Fix For: 0.8.3
>
>
> I incorrectly set up the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error, then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.
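The crash above comes from calling substring with the result of a failed lastIndexOf (which returns -1) when a directory name like 'metrics' has no topic-partition suffix. A defensive version of the parse (a hypothetical sketch, not the patched LogManager code) rejects such directories with a clear message instead:

```java
public class TopicPartitionName {
    // Directories under the data dir are expected to look like
    // "<topic>-<partition>". A name like "metrics" has no '-', so
    // lastIndexOf returns -1 and the original substring call threw
    // StringIndexOutOfBoundsException.
    public static String[] parse(String dirName) {
        int i = dirName.lastIndexOf('-');
        if (i < 1 || i == dirName.length() - 1)
            throw new IllegalArgumentException(
                "Not a topic-partition directory: " + dirName);
        return new String[] { dirName.substring(0, i), dirName.substring(i + 1) };
    }
}
```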





[jira] [Commented] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2014-12-11 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243517#comment-14243517
 ] 

Ashish Kumar Singh commented on KAFKA-742:
--

[~jkreps] ok, I am on it then :)

> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish Kumar Singh
>
> I incorrectly set up the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error, then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-11 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243430#comment-14243430
 ] 

Jeff Holoman commented on KAFKA-1512:
-

[~jkreps], I noticed that the overrides are not fully implemented in this 
patch. Was the intent to leave that feature out? Based on your last comment I 
can see why, but wanted to confirm. What are your thoughts on the override 
functionality now?



> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.2
>
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
> KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch
>
>
> To protect against client connection leaks, add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.
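The enforcement described above can be sketched as a per-address counter checked at accept time. This is an illustration of the technique only; the actual SocketServer change differs in detail:

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectionQuotas {
    private final Map<String, Integer> counts = new HashMap<>();
    private final int maxPerIp; // corresponds to max.connections.per.ip

    public ConnectionQuotas(int maxPerIp) { this.maxPerIp = maxPerIp; }

    // Called when the acceptor sees a new connection; false means reject.
    public synchronized boolean tryAccept(String ip) {
        int current = counts.getOrDefault(ip, 0);
        if (current >= maxPerIp) return false; // over the per-IP limit
        counts.put(ip, current + 1);
        return true;
    }

    // Called when a connection closes, so closed clients free their slot.
    public synchronized void release(String ip) {
        counts.merge(ip, -1, Integer::sum);
    }
}
```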





Build failed in Jenkins: Kafka-trunk #352

2014-12-11 Thread Apache Jenkins Server
See 

--
[...truncated 970 lines...]
kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.test.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.test.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.test.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic 
PASSED

kafka.api.test.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDe

[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-12-11 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243423#comment-14243423
 ] 

Joe Stein commented on KAFKA-1650:
--

I am getting a local failure running ./gradlew test 

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:141)

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdownWithDeleteTopicEnabled(ServerShutdownTest.scala:114)

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdown(ServerShutdownTest.scala:101)

The CI is broken too - I just ran another build to triple-check: 
https://builds.apache.org/view/All/job/Kafka-trunk/352/

I am a bit lost on this ticket. It looks like the code is committed to trunk 
(commit 2801629964882015a9148e1c0ade22da46376faa), but this JIRA isn't marked 
resolved, has no fix version (and more patches came after the commit), and tests 
are failing. [~guozhang], can you take a look please? It looks like your commit.

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch, 
> KAFKA-1650_2014-11-20_12:00:16.patch, KAFKA-1650_2014-11-24_08:15:17.patch, 
> KAFKA-1650_2014-12-03_15:02:31.patch, KAFKA-1650_2014-12-03_19:02:13.patch, 
> KAFKA-1650_2014-12-04_11:59:07.patch, KAFKA-1650_2014-12-06_18:58:57.patch, 
> KAFKA-1650_2014-12-08_01:36:01.patch
>
>
> Currently, if the mirror maker is shut down uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue could be solved.

[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2014-12-11 Thread Dave Parfitt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243286#comment-14243286
 ] 

Dave Parfitt commented on KAFKA-1811:
-

Patch attached. 

> ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>  Labels: newbie
> Attachments: KAFKA_1811.patch
>
>
> Currently, we expect each registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.





[jira] [Updated] (KAFKA-1811) ensuring registered broker host:port is unique

2014-12-11 Thread Dave Parfitt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Parfitt updated KAFKA-1811:

Attachment: KAFKA_1811.patch

> ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>  Labels: newbie
> Attachments: KAFKA_1811.patch
>
>
> Currently, we expect each registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.





[jira] [Updated] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1812:
-
Reviewer: Gwen Shapira

>  Allow IpV6 in configuration with parseCsvMap
> -
>
> Key: KAFKA-1812
> URL: https://issues.apache.org/jira/browse/KAFKA-1812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1812_2014-12-10_21:38:59.patch
>
>
> The current implementation of parseCsvMap in Utils expects k:v,k:v. This 
> modifies that function to accept a string with multiple ":" characters by 
> splitting on the last occurrence per pair. 
> This limitation is noted in the Review Board comments for KAFKA-1512.





[jira] [Commented] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2014-12-11 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243242#comment-14243242
 ] 

Jay Kreps commented on KAFKA-742:
-

No, I said that but then never did any work. Definitely take it!

> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish Kumar Singh
>
> I incorrectly set up the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error, then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.





[jira] [Commented] (KAFKA-1351) String.format is very expensive in Scala

2014-12-11 Thread Fabian Lange (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243223#comment-14243223
 ] 

Fabian Lange commented on KAFKA-1351:
-

Hi,
I was about to create a new issue, but figured this might host the comment 
almost as well.
As William outlined here: 
http://www.autoletics.com/posts/quick-performance-hotspot-analysis-apache-kafka
the Log.append function 
(https://github.com/apache/kafka/blob/7847e9c703f3a0b70519666cdb8a6e4c8e37c3a7/core/src/main/scala/kafka/log/Log.scala#L230)
is by far the biggest contributor to response time.
There are a few reasons for this:
A) It has a long synchronized block. I do not know much about the code, so at 
first glance I do not see an obvious fix.
B) It has a trace logging call. I am not sure that trace call needs to be inside 
the synchronized block, though.

The trace call is implemented by
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/Logging.scala#L34

As you might notice, this does a slow String.format for trace, a log level 
which is very likely to be off.
The above-mentioned Log4j2 pattern would help there.

However, there are more options, like the one implemented by Typesafe:
https://github.com/typesafehub/scalalogging/tree/master/scalalogging-log4j/src/main/scala/com/typesafe/scalalogging/log4j
They use a macro which transforms the code so that the actual string 
formatting is moved inside the log-level check. I think that's quite nifty, 
but maybe too much work to integrate.

So if you could check out that trace call, I would appreciate it :)
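The fix being pointed at is to defer the String.format until after the level check, so a disabled trace call costs a boolean test instead of a format. In Scala this is typically done with a by-name parameter or a macro; the same shape in Java uses a Supplier. A minimal sketch, not Kafka's actual Logging trait:

```java
import java.util.function.Supplier;

public class LazyLogger {
    public volatile boolean traceEnabled = false;
    public int messagesBuilt = 0; // counts how often the format actually ran

    // The Supplier is only invoked when trace is on, so an expensive
    // String.format inside it is skipped entirely at higher log levels.
    public void trace(Supplier<String> message) {
        if (traceEnabled) {
            message.get();
            messagesBuilt++;
        }
    }
}
```

Calling `log.trace(() -> String.format("offset %d", 42))` builds no string at all while trace is off.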

> String.format is very expensive in Scala
> 
>
> Key: KAFKA-1351
> URL: https://issues.apache.org/jira/browse/KAFKA-1351
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.7.2, 0.8.0, 0.8.1
>Reporter: Neha Narkhede
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1351.patch, KAFKA-1351_2014-04-07_18:02:18.patch, 
> KAFKA-1351_2014-04-09_15:40:11.patch
>
>
> As found in KAFKA-1350, logging is causing significant overhead in the 
> performance of a Kafka server. There are several info statements that use 
> String.format which is particularly expensive. We should investigate adding 
> our own version of String.format that merely uses string concatenation under 
> the covers.





[jira] [Commented] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-11 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243024#comment-14243024
 ] 

Ashish Kumar Singh commented on KAFKA-1664:
---

[~nehanarkhede] Addressed your review comment. Kindly take a look.

> Kafka does not properly parse multiple ZK nodes with non-root chroot
> 
>
> Key: KAFKA-1664
> URL: https://issues.apache.org/jira/browse/KAFKA-1664
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Ricky Saltzer
>Assignee: Ashish Kumar Singh
>Priority: Minor
>  Labels: newbie
> Attachments: KAFKA-1664.1.patch, KAFKA-1664.2.patch, KAFKA-1664.patch
>
>
> When using a non-root ZK directory for Kafka, if you specify multiple ZK 
> servers, Kafka does not seem to properly parse the connection string. 
> *Error*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka,baelish-002.edh.cloudera.com:2181/kafka,baelish-003.edh.cloudera.com:2181/kafka
>  --topic test-topic
> [2014-10-01 15:31:04,629] ERROR Error processing message, stopping consumer:  
> (kafka.consumer.ConsoleConsumer$)
> java.lang.IllegalArgumentException: Path length must be > 0
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:48)
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:35)
>   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:766)
>   at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:213)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at kafka.utils.ZkUtils$.createParentPath(ZkUtils.scala:245)
>   at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:256)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:268)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:306)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:226)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:755)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:145)
>   at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
>   at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
> *Working*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka --topic test-topic
> {code}
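For context, ZooKeeper expects the chroot to appear exactly once, after the 
full host list (host1:2181,host2:2181/kafka), whereas the failing example 
repeats it per host. A hypothetical normalizer for that per-host form (a 
sketch, not the actual KAFKA-1664 patch) might look like:

```scala
object ZkConnect {
  // Normalizes "h1:2181/kafka,h2:2181/kafka" into the form ZooKeeper
  // expects: "h1:2181,h2:2181/kafka" (one chroot after the host list).
  def normalize(connect: String): String = {
    val parts   = connect.split(",")
    val hosts   = parts.map(_.takeWhile(_ != '/'))                  // strip chroots
    val chroots = parts.map(_.dropWhile(_ != '/')).filter(_.nonEmpty).distinct
    require(chroots.length <= 1,
      "conflicting chroot paths: " + chroots.mkString(","))
    hosts.mkString(",") + chroots.headOption.getOrElse("")
  }
}

assert(ZkConnect.normalize("a:2181/kafka,b:2181/kafka") == "a:2181,b:2181/kafka")
assert(ZkConnect.normalize("a:2181,b:2181") == "a:2181,b:2181")
```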





[jira] [Commented] (KAFKA-313) Add JSON/CSV output and looping options to ConsumerOffsetChecker

2014-12-11 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243021#comment-14243021
 ] 

Ashish Kumar Singh commented on KAFKA-313:
--

[~jjkoshy], just a reminder :), still waiting for your review.

> Add JSON/CSV output and looping options to ConsumerOffsetChecker
> 
>
> Key: KAFKA-313
> URL: https://issues.apache.org/jira/browse/KAFKA-313
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dave DeMaagd
>Assignee: Ashish Kumar Singh
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-313-2012032200.diff, KAFKA-313.1.patch, 
> KAFKA-313.patch
>
>
> Adds:
> * '--loop N' - causes the program to loop forever, sleeping for up to N 
> seconds between loops (loop time minus collection time, unless that's less 
> than 0, at which point it will just run again immediately)
> * '--asjson' - display as a JSON string instead of the more human readable 
> output format.
> Neither of the above  depend on each other (you can loop in the human 
> readable output, or do a single shot execution with JSON output).  Existing 
> behavior/output maintained if neither of the above are used.  Diff Attached.
> Impacted files:
> core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala
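The sleep rule described above (loop interval minus collection time, floored 
at zero so a slow pass reruns immediately) can be sketched as a small helper; 
the name is hypothetical, not from the attached patch:

```scala
object LoopTimer {
  // "--loop N": sleep for the loop interval minus the time the collection
  // itself took; if collection overran the interval, rerun at once.
  def sleepMillis(loopMillis: Long, collectMillis: Long): Long =
    math.max(0L, loopMillis - collectMillis)
}

assert(LoopTimer.sleepMillis(5000L, 1200L) == 3800L)  // fast pass: sleep remainder
assert(LoopTimer.sleepMillis(5000L, 7000L) == 0L)     // slow pass: no sleep
```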





[jira] [Commented] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2014-12-11 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243017#comment-14243017
 ] 

Ashish Kumar Singh commented on KAFKA-742:
--

[~jkreps] Now that I have actually started to work on this, I re-read your 
comment above and realized you mentioned that you intend to work on it 
yourself. I missed that line when I assigned the JIRA to myself; my apologies. 
Feel free to take it over and assign it to yourself. If you are not planning 
to work on it, let me know and I will pick it up.

> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish Kumar Singh
>
> I incorrectly setup the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.
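The StringIndexOutOfBoundsException above comes from substring-ing at 
lastIndexOf("-") when a directory name like "metrics" has no 
"&lt;topic&gt;-&lt;partition&gt;" form. A defensive parse could return an Option 
instead of throwing; this is an illustrative sketch, not the actual Kafka fix:

```scala
object DirNames {
  // A log directory is expected to be named "<topic>-<partition>",
  // e.g. "mytopic-0". Returns None for names like "metrics" rather than
  // throwing StringIndexOutOfBoundsException when lastIndexOf is -1.
  def parseTopicPartition(dir: String): Option[(String, Int)] = {
    val i = dir.lastIndexOf('-')
    if (i <= 0 || i == dir.length - 1) None
    else scala.util.Try(dir.substring(i + 1).toInt).toOption
           .map(p => (dir.substring(0, i), p))
  }
}

assert(DirNames.parseTopicPartition("mytopic-0") == Some(("mytopic", 0)))
assert(DirNames.parseTopicPartition("metrics") == None)   // no crash
```

The caller could then skip (or warn about) unparseable directories instead of 
aborting startup.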





[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-11 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242926#comment-14242926
 ] 

Neha Narkhede commented on KAFKA-1806:
--

[~lokeshbirla] We don't support that client. You may have to loop in the 
maintainer of that library. Let us know if you see this behavior with the 
java/scala client.

> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed but I still see this issue in 0.8.1.1. I am able to 
> reproducer the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.Kaf
> kaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)





Re: Way to check existence of topic

2014-12-11 Thread Joe Stein
Take a look at the entry point
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/admin/TopicCommand.scala#L32
behind kafka-topics.sh --describe --topic for how to do that now.

We are working on a new CLI and shell for this,
https://issues.apache.org/jira/browse/KAFKA-1694, which moves it to the server
side with wire-protocol calls and client wrappers around them going forward:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
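In the meantime, the check TopicCommand performs boils down to looking up the 
topic's znode under /brokers/topics in ZooKeeper. A minimal sketch, with the 
ZooKeeper lookup abstracted behind a function so it stands in for a real 
ZkClient.exists call (hypothetical helper names):

```scala
object TopicCheck {
  // Topics are registered as znodes under /brokers/topics.
  def topicPath(topic: String): String = "/brokers/topics/" + topic

  // `zkExists` stands in for ZkClient.exists against a live ensemble.
  def topicExists(topic: String, zkExists: String => Boolean): Boolean =
    zkExists(topicPath(topic))
}

// Simulated znode tree for illustration:
val znodes = Set("/brokers/topics/test-topic")
assert(TopicCheck.topicExists("test-topic", znodes.contains))
assert(!TopicCheck.topicExists("missing-topic", znodes.contains))
```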

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Thu, Dec 11, 2014 at 2:59 AM, SasakiKai  wrote:

> Hi, all.
>
> I have a simple question.
> Is there anyway to check existence of some topic from consumer? I want to
> implement a consumer which checks before consuming messages efficiently.
> Has this type of API already been implemented? Or if you have any plan,
> please let me know.
>
> Thank you
>
> Sent from my tiny hand typewriter


Way to check existence of topic

2014-12-11 Thread SasakiKai
Hi, all.

I have a simple question.
Is there any way to check the existence of a topic from a consumer? I want to 
implement a consumer that efficiently checks before consuming messages. Has 
this type of API already been implemented? Or if you have any plans, please 
let me know.

Thank you

Sent from my tiny hand typewriter