Re: backup(dump) and restore environment

2015-06-09 Thread Stevo Slavić
Hello Jakub,

Maybe it will work for you to combine MirrorMaker
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
and Burrow: https://github.com/linkedin/Burrow
See recent announcement for Burrow
http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCACrdVJpS3k1ZxCVHGqs0H5d7gfUUsQXdZ66DMRUEAccPrCOzvg%40mail.gmail.com%3E
which mentions how it helps with monitoring MirrorMaker.

Kind regards,
Stevo Slavic.

On Tue, Jun 9, 2015 at 11:00 AM, Jakub Muszynski sirku...@gmail.com wrote:

 Hi

 I'm looking for the best way to dump the current system state and recreate
 it in a new, autonomous environment.
 Let's say I'd like to create a copy of Production and, based on that, create
 a new, separate environment for testing.

 Can you suggest some solutions?

 greetings
 Jakub



Re: backup(dump) and restore environment

2015-06-09 Thread Jakub Muszynski
Thanks for the quick answer.

The only thing I'm not happy about is the lack of environment separation.
As far as I understand, this is an online operation, am I right?

So the best approach would be to create a mirror and then cut it off from Production.

greetings
Jakub

2015-06-09 11:17 GMT+02:00 Stevo Slavić ssla...@gmail.com:

 Hello Jakub,

 Maybe it will work for you to combine MirrorMaker
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
 and Burrow: https://github.com/linkedin/Burrow
 See recent announcement for Burrow

 http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCACrdVJpS3k1ZxCVHGqs0H5d7gfUUsQXdZ66DMRUEAccPrCOzvg%40mail.gmail.com%3E
 which mentions how it helps with monitoring MirrorMaker.

 Kind regards,
 Stevo Slavic.

 On Tue, Jun 9, 2015 at 11:00 AM, Jakub Muszynski sirku...@gmail.com
 wrote:

  Hi
 
  I'm looking for the best way to dump the current system state and recreate
  it in a new, autonomous environment.
  Let's say I'd like to create a copy of Production and, based on that,
  create a new, separate environment for testing.
 
  Can you suggest some solutions?
 
  greetings
  Jakub
 

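For reference, a minimal MirrorMaker invocation along the lines Stevo suggests might look like the sketch below (the config file names and whitelist are hypothetical; point them at your own clusters):

```shell
# Run MirrorMaker to copy all topics from the source (Production) cluster
# into the target (test) cluster. The consumer config points at the source
# cluster's zookeeper.connect; the producer config points at the target
# cluster's broker list. Stopping the process later "cuts off" the test
# environment from Production.
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config source-cluster-consumer.properties \
  --producer.config target-cluster-producer.properties \
  --whitelist=".*" \
  --num.streams 2
```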


Query on kafka broker and topic metadata

2015-06-09 Thread Pavan Chenduluru
Hi,

I am new to Kafka and I have a question.

How do I read statistics for a specific broker and topic from the Kafka server?

I want to read the below parameters for an existing topic from Kafka.

1) How many activeMessages
2) How many activeSubscriptions
3) How many totalMessages
4) How many totalSubscriptions
5) How many deliveryFaults
6) How many pendingDelivery

Please do the needful.

Thanks & Regards,
Pavan


backup(dump) and restore environment

2015-06-09 Thread Jakub Muszynski
Hi

I'm looking for the best way to dump the current system state and recreate
it in a new, autonomous environment.
Let's say I'd like to create a copy of Production and, based on that, create
a new, separate environment for testing.

Can you suggest some solutions?

greetings
Jakub


Mirror maker doesn't rebalance after getting ZkNoNodeException

2015-06-09 Thread tao xiao
Hi,

I have two mirror makers, A and B, both subscribing to the same whitelist.
During topic rebalancing, mirror maker A encountered a
ZkNoNodeException and then stopped all its connections, but mirror maker B
didn't pick up the topics that were consumed by A, leaving some of the
topics unassigned. I think this is due to A not releasing ownership of
those topics. My question is: why didn't A release ownership upon receiving
the error?

Here is the stack trace

[2015-06-08 15:55:46,379] INFO [test_some-ip-1433797157389-247c1fc4],
exception during rebalance  (kafka.consumer.ZookeeperConsumerConnector)
org.I0Itec.zkclient.exception.ZkNoNodeException:
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode =
NoNode for /consumers/test/ids/test_some-ip-1433797157389-247c1fc4
at
org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:473)
at
kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:657)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:629)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:620)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:619)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:572)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for
/consumers/test/ids/test_some-ip-1433797157389-247c1fc4
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:103)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 13 more



Here is the last part of the log

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
Stopping leader finder thread (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
Stopping all fetchers (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399] All
connections stopped (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [test_some-ip-1433797157389-247c1fc4],
Cleared all relevant queues for this fetcher
(kafka.consumer.ZookeeperConsumerConnector)

[2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
Cleared the data chunks in all the consumer message iterators
(kafka.consumer.ZookeeperConsumerConnector)

[2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
Invoking rebalance listener before relasing partition ownerships.
(kafka.consumer.ZookeeperConsumerConnector)


As seen in the log, mirror maker A didn't release ownership, and it didn't
attempt to trigger another round of rebalancing either. I checked ZK; the
node that was reported missing actually existed, and it was created at the
same time the error was thrown.


I am using the latest trunk code.
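For anyone checking the same thing, the registration znode can be inspected with the zookeeper-shell tool shipped with Kafka (the ZK address below is a placeholder; the path is taken from the log above):

```shell
# Inspect the consumer registration znode that the rebalance failed to read.
# If the node exists, `get` prints its data (the subscribed topic counts);
# otherwise it reports that the node does not exist.
bin/zookeeper-shell.sh localhost:2181 \
  get /consumers/test/ids/test_some-ip-1433797157389-247c1fc4
```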


Re: Query on kafka topic metadata

2015-06-09 Thread Guozhang Wang
Hi Pavan,

1) You cannot read the current number of messages for a topic, but you can
read the number of bytes for a topic in the Kafka metrics.

2) You cannot read the current number of subscribers directly. However, you
can read from ZK and parse the results to get the subscriber counts.

3) / 4) I am not sure what you mean by active messages and subscriptions.

5) / 6) You can find these in the Kafka metrics.

You can take a look at the list of metrics here:
http://kafka.apache.org/documentation.html#monitoring

Guozhang
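As a sketch of point 2), the consumer groups registered in Zookeeper and the live members of one group can be listed with the bundled shell (the ZK address and group name are placeholders):

```shell
# List all consumer groups registered in Zookeeper, then the live member
# IDs of one group. Counting those children gives the subscriber count
# mentioned above.
bin/zookeeper-shell.sh localhost:2181 ls /consumers
bin/zookeeper-shell.sh localhost:2181 ls /consumers/my-group/ids
```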


On Fri, Jun 5, 2015 at 11:02 AM, Pavan Chenduluru chendul...@gmail.com
wrote:

 Hi,

 I am new to kafka and I have a doubt.

 How to read specified topic statistics from kafka server?

 I want to read below parameters about existing topic from kafka.

 1) How many activeMessages
 2) How many activeSubscriptions
 3) How many totalMessages
 4) How many totalSubscriptions
 5) How many deliveryFaults
 6) How many pendingDelivery

 Pls do the needful.

 Thanks & Regards,
 Pavan




-- 
-- Guozhang


Re: Mirror maker doesn't rebalance after getting ZkNoNodeException

2015-06-09 Thread Jiangjie Qin
Which version of MM are you running?

On 6/9/15, 4:49 AM, tao xiao xiaotao...@gmail.com wrote:

Hi,

I have two mirror makers A and B both subscripting to the same whitelist.
During topic rebalancing one of the mirror maker A encountered
ZkNoNodeException and then stopped all connections. but mirror maker B
didn't pick up the topics that were consumed by A and left some of the
topics unassigned. I think this is due to A not releasing ownership of
those topics. My question is why A didn't release ownership upon receiving
error?

Here is the stack trace

[2015-06-08 15:55:46,379] INFO [test_some-ip-1433797157389-247c1fc4],
exception during rebalance  (kafka.consumer.ZookeeperConsumerConnector)
org.I0Itec.zkclient.exception.ZkNoNodeException:
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode =
NoNode for /consumers/test/ids/test_some-ip-1433797157389-247c1fc4
at
org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:473)
at
kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:657)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:629)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:620)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:619)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:572)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for
/consumers/test/ids/test_some-ip-1433797157389-247c1fc4
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
at 
org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:103)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 13 more



Here is the last part of the log

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
Stopping leader finder thread (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
Stopping all fetchers (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399] All
connections stopped (kafka.consumer.ConsumerFetcherManager)

[2015-06-08 15:55:49,848] INFO [test_some-ip-1433797157389-247c1fc4],
Cleared all relevant queues for this fetcher
(kafka.consumer.ZookeeperConsumerConnector)

[2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
Cleared the data chunks in all the consumer message iterators
(kafka.consumer.ZookeeperConsumerConnector)

[2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
Invoking rebalance listener before relasing partition ownerships.
(kafka.consumer.ZookeeperConsumerConnector)


As seen in the log Mirror maker A didn't release ownership and it didn't
attempt to trigger another round of rebalancing either. I checked zk. the
node that was reported missing actually existed and it was created at the
same time the error was thrown.


I use the latest trunk code



Re: offsets.storage=kafka, dual.commit.enabled=false still requires ZK

2015-06-09 Thread Gwen Shapira
The existing consumer uses Zookeeper both to commit offsets and to
assign partitions to different consumers and threads in the same
consumer group.
While offsets can be committed to Kafka in 0.8.2 releases, which
greatly reduces the load on Zookeeper, the consumer still requires
Zookeeper to manage group membership and partition ownership.
The new consumer (which we hope to have ready for 0.8.3 release) will
completely remove the Zookeeper dependency, managing both offsets and
partition ownership within Kafka itself.

Gwen

On Tue, Jun 9, 2015 at 10:26 AM, noah iamn...@gmail.com wrote:
 We are setting up a new Kafka project (0.8.2.1) and are trying to go
 straight to consumer offsets stored in Kafka. Unfortunately it looks like
 the Java consumer will try to connect to ZooKeeper regardless of the
 settings.

 Will/When will this dependency go away completely? It would simplify our
 deployments if our consumers didn't have to connect to ZooKeeper at all.

 P.S. I've asked this on Stack Overflow, if you would like to answer there
 for posterity:
 http://stackoverflow.com/questions/30719331/kafka-0-8-2-1-offsets-storage-kafka-still-requires-zookeeper
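For context, the settings this thread refers to live in the (old) high-level consumer's configuration; a minimal sketch might be (values are placeholders):

```properties
# Old (Zookeeper-based) consumer in 0.8.2: offsets are committed to Kafka,
# but group membership and partition ownership still require a Zookeeper
# connection, as Gwen explains above.
zookeeper.connect=zk1:2181
group.id=my-group
offsets.storage=kafka
dual.commit.enabled=false
```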


Re: offsets.storage=kafka, dual.commit.enabled=false still requires ZK

2015-06-09 Thread Ewen Cheslack-Postava
The new consumer implementation, which should be included in 0.8.3, only
needs a bootstrap.servers setting and does not use a zookeeper connection.

On Tue, Jun 9, 2015 at 1:26 PM, noah iamn...@gmail.com wrote:

 We are setting up a new Kafka project (0.8.2.1) and are trying to go
 straight to consumer offsets stored in Kafka. Unfortunately it looks like
 the Java consumer will try to connect to ZooKeeper regardless of the
 settings.

 Will/When will this dependency go away completely? It would simplify our
 deployments if our consumers didn't have to connect to ZooKeeper at all.

 P.S. I've asked this on Stack Overflow, if you would like to answer there
 for posterity:

 http://stackoverflow.com/questions/30719331/kafka-0-8-2-1-offsets-storage-kafka-still-requires-zookeeper




-- 
Thanks,
Ewen


Re: [ANNOUNCE] Burrow - Consumer Lag Monitoring as a Service

2015-06-09 Thread Jason Rosenberg
Hi Todd,

Thanks for open sourcing this, I'm excited to take a look.

It looks like it's specific to offsets stored in Kafka (and not Zookeeper),
correct? I take it, then, that LinkedIn is using the Kafka-based offset
storage in production now?

Jason

On Thu, Jun 4, 2015 at 9:43 PM, Todd Palino tpal...@gmail.com wrote:

 I am very happy to introduce Burrow, an application to provide Kafka
 consumer status as a service. Burrow is different than just a lag
 checker:

 * Multiple Kafka cluster support - Burrow supports any number of Kafka
 clusters in a single instance. You can also run multiple copies of Burrow
 in parallel and only one of them will send out notifications.

 * All consumers, all partitions - If the consumer is committing offsets to
 Kafka (not Zookeeper), it will be available in Burrow automatically. Every
 partition it consumes will be monitored simultaneously, avoiding the trap
 of just watching the worst partition (MaxLag) or spot checking individual
 topics.

 * Status can be checked via HTTP request - There's an internal HTTP server
 that provides topic and consumer lists, can give you the latest offsets for
 a topic either from the brokers or from the consumer, and lets you check
 consumer status.

 * Continuously monitor groups with output via email or a call to an
 external HTTP endpoint - Configure emails to send for bad groups, checked
 continuously. Or you can have Burrow call an HTTP endpoint into another
 system for handling alerts.

 * No thresholds - Status is determined over a sliding window and does not
 rely on a fixed limit. When a consumer is checked, it has a status
 indicator that tells whether it is OK, a warning, or an error, and the
 partitions that caused it to be bad are provided.


 Burrow was created to address specific problems that LinkedIn has with
 monitoring consumers, in particular wildcard consumers like mirror makers
 and our audit consumers. Instead of checking offsets for specific consumers
 periodically, it monitors the stream of all committed offsets
 (__consumer_offsets) and continually calculates lag over a sliding window.

 We welcome all feedback, comments, and contributors. This project is very
 much under active development for us (we're using it in some of our
 environments now, and working on getting it running everywhere to replace
 our previous monitoring system).

 Burrow is written in Go, published under the Apache License, and hosted on
 GitHub at:
 https://github.com/linkedin/Burrow

 Documentation is on the GitHub wiki at:
 https://github.com/linkedin/Burrow/wiki

 -Todd
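As an illustration of the HTTP interface, status checks are plain GET requests. The port, cluster name, and paths below are assumptions based on the Burrow wiki at the time and may differ per version and configuration; treat them as a sketch:

```shell
# List the consumer groups Burrow knows about in a cluster named "local",
# then ask for one group's computed status (OK / warning / error, plus the
# partitions that caused a bad status).
curl http://localhost:8000/v2/kafka/local/consumer
curl http://localhost:8000/v2/kafka/local/consumer/my-group/status
```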



producer metadata behavior when topic not created

2015-06-09 Thread Steven Wu
Hi,

I am talking about the 0.8.2 Java producer.

In our deployment, we disable auto topic creation, because we would like
to control the precise number of partitions created for each topic and the
placement of partitions (e.g. zone-aware).

I did some experimentation and checked the code. A metadata request to a
broker (for a nonexistent topic) gets a successful response. Should the
broker return a failure or partial failure if the queried topic doesn't
exist? Can we add a metric on the broker side for queries against
nonexistent topics?

The net behavior is that there are more metadata queries from the producer,
throttled by the backoff config (default is 100ms). Can we add a metric for
the metadata request and response rate? The rate should normally be very low
during steady state, as the default refresh interval is 5 minutes.

Basically, I am trying to detect this scenario (a nonexistent topic) and be
able to alert on some metric. Any other suggestions?

Thanks,
Steven
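One way to pre-check for this scenario from the outside (a sketch; the ZK address and topic name are placeholders) is the stock topics tool:

```shell
# Describe the topic. With auto topic creation disabled, an empty result
# means the topic does not exist, and the producer's metadata requests will
# keep retrying on the backoff interval described above.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
```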


Stale topic for existing producer

2015-06-09 Thread Casey Daniell
Background: We have a few consumers that *used* to consume a particular topic.
This topic still exists and the consumers still exist; however, these consumers
no longer subscribe to the topic. When we check the offsets for the consumer
group with kafka.tools.ConsumerOffsetChecker, ZK believes this tier is falling
behind even though it's no longer attempting to consume this topic.

Kafka stack: Kafka 0.8.1 with Zookeeper 3.3.4; SimpleConsumer.

Question: We keep track of the lag using kafka.tools.ConsumerOffsetChecker
to ensure we don't have a systematic issue we need to investigate. Is there any
way to expire stale, or no longer consumed, topics per consumer group?

Casey
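There is no built-in expiry for Zookeeper-stored offsets in 0.8.1; one manual approach (a sketch; the group, topic, and ZK address are placeholders, and deleting ZK nodes should be done with care) is:

```shell
# Show per-topic lag for the group, then remove the stale topic's offset
# nodes from Zookeeper so the checker stops reporting it.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper localhost:2181 --group my-group

# ZK `delete` only removes empty nodes, so delete each partition's offset
# node first, then the topic node itself.
bin/zookeeper-shell.sh localhost:2181 delete /consumers/my-group/offsets/old-topic/0
bin/zookeeper-shell.sh localhost:2181 delete /consumers/my-group/offsets/old-topic
```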

Re: [ANNOUNCE] Burrow - Consumer Lag Monitoring as a Service

2015-06-09 Thread Todd Palino
For mirror maker and our audit application, we've been using
Kafka-committed offsets for some time now. We've got a few other consumers
who are using it, but we haven't actively worked on moving the bulk of them
over. It's been less critical since we put the ZK transaction logs on SSD.

And yeah, this is specific to Kafka-committed offsets. I'm looking at some
options for handling Zookeeper as well, but since our goal with this was to
monitor our own infrastructure applications and move forward, it hasn't
gotten a lot of my attention yet.

-Todd


On Tue, Jun 9, 2015 at 11:53 AM, Jason Rosenberg j...@squareup.com wrote:

 Hi Todd,

 Thanks for open sourcing this, I'm excited to take a look.

 It looks like it's specific to offsets stored in kafka (and not zookeeper)
 correct?  I assume by that that LinkedIn is using the kafka storage now in
 production?

 Jason

 On Thu, Jun 4, 2015 at 9:43 PM, Todd Palino tpal...@gmail.com wrote:

  I am very happy to introduce Burrow, an application to provide Kafka
  consumer status as a service. Burrow is different than just a lag
  checker:
 
  * Multiple Kafka cluster support - Burrow supports any number of Kafka
  clusters in a single instance. You can also run multiple copies of Burrow
  in parallel and only one of them will send out notifications.
 
  * All consumers, all partitions - If the consumer is committing offsets
 to
  Kafka (not Zookeeper), it will be available in Burrow automatically.
 Every
  partition it consumes will be monitored simultaneously, avoiding the trap
  of just watching the worst partition (MaxLag) or spot checking individual
  topics.
 
  * Status can be checked via HTTP request - There's an internal HTTP
 server
  that provides topic and consumer lists, can give you the latest offsets
 for
  a topic either from the brokers or from the consumer, and lets you check
  consumer status.
 
  * Continuously monitor groups with output via email or a call to an
  external HTTP endpoint - Configure emails to send for bad groups, checked
  continuously. Or you can have Burrow call an HTTP endpoint into another
  system for handling alerts.
 
  * No thresholds - Status is determined over a sliding window and does not
  rely on a fixed limit. When a consumer is checked, it has a status
  indicator that tells whether it is OK, a warning, or an error, and the
  partitions that caused it to be bad are provided.
 
 
  Burrow was created to address specific problems that LinkedIn has with
  monitoring consumers, in particular wildcard consumers like mirror makers
  and our audit consumers. Instead of checking offsets for specific
 consumers
  periodically, it monitors the stream of all committed offsets
  (__consumer_offsets) and continually calculates lag over a sliding
 window.
 
  We welcome all feedback, comments, and contributors. This project is very
  much under active development for us (we're using it in some of our
  environments now, and working on getting it running everywhere to replace
  our previous monitoring system).
 
  Burrow is written in Go, published under the Apache License, and hosted
 on
  GitHub at:
  https://github.com/linkedin/Burrow
 
  Documentation is on the GitHub wiki at:
  https://github.com/linkedin/Burrow/wiki
 
  -Todd
 



Re: Mirror maker doesn't rebalance after getting ZkNoNodeException

2015-06-09 Thread tao xiao
I'm using commit 9e894aa0173b14d64a900bcf780d6b7809368384 from trunk.

On Wed, 10 Jun 2015 at 01:09 Jiangjie Qin j...@linkedin.com.invalid wrote:

 Which version of MM are you running?

 On 6/9/15, 4:49 AM, tao xiao xiaotao...@gmail.com wrote:

 Hi,
 
 I have two mirror makers A and B both subscripting to the same whitelist.
 During topic rebalancing one of the mirror maker A encountered
 ZkNoNodeException and then stopped all connections. but mirror maker B
 didn't pick up the topics that were consumed by A and left some of the
 topics unassigned. I think this is due to A not releasing ownership of
 those topics. My question is why A didn't release ownership upon receiving
 error?
 
 Here is the stack trace
 
 [2015-06-08 15:55:46,379] INFO [test_some-ip-1433797157389-247c1fc4],
 exception during rebalance  (kafka.consumer.ZookeeperConsumerConnector)
 org.I0Itec.zkclient.exception.ZkNoNodeException:
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode =
 NoNode for /consumers/test/ids/test_some-ip-1433797157389-247c1fc4
 at
 org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
 at
 org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
 at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
 at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
 at kafka.utils.ZkUtils$.readData(ZkUtils.scala:473)
 at
 kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:657)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:629)
 at
 scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:620)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:620)
 at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:619)
 at
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:572)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
 KeeperErrorCode = NoNode for
 /consumers/test/ids/test_some-ip-1433797157389-247c1fc4
 at
 org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at
 org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
 at
 org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:103)
 at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770)
 at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766)
 at
 org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
 ... 13 more
 
 
 
 Here is the last part of the log
 
 [2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
 Stopping leader finder thread (kafka.consumer.ConsumerFetcherManager)
 
 [2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399]
 Stopping all fetchers (kafka.consumer.ConsumerFetcherManager)
 
 [2015-06-08 15:55:49,848] INFO [ConsumerFetcherManager-1433797157399] All
 connections stopped (kafka.consumer.ConsumerFetcherManager)
 
 [2015-06-08 15:55:49,848] INFO [test_some-ip-1433797157389-247c1fc4],
 Cleared all relevant queues for this fetcher
 (kafka.consumer.ZookeeperConsumerConnector)
 
 [2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
 Cleared the data chunks in all the consumer message iterators
 (kafka.consumer.ZookeeperConsumerConnector)
 
 [2015-06-08 15:55:49,849] INFO [test_some-ip-1433797157389-247c1fc4],
 Invoking rebalance listener before relasing partition ownerships.
 (kafka.consumer.ZookeeperConsumerConnector)
 
 
 As seen in the log Mirror maker A didn't release ownership and it didn't
 attempt to trigger another round of rebalancing either. I checked zk. the
 node that was reported missing actually existed and it was created at the
 same time the error was thrown.
 
 
 I use the latest trunk code




Usage of Kafka Mirror Maker

2015-06-09 Thread nitin sharma
Hi Team,

I would like to know under which circumstances I can use the MirrorMaker
process.

I read on a website that *it is not really intended as a fault-tolerance
mechanism*, so I'm curious to know where I can make use of MirrorMaker.

Also, how do I find the version of the MirrorMaker process running in one of
my datacenters? I see in the link below that a better version is proposed:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement


Regards,
Nitin Kumar Sharma.
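On the version question: MirrorMaker ships inside the main Kafka jar rather than as a separate artifact, so one rough way to tell what's running on a host is to look at the jar names on the classpath (the install path below is hypothetical):

```shell
# The Kafka jar name embeds the Scala and Kafka versions,
# e.g. kafka_2.10-0.8.2.1.jar.
ls /opt/kafka/libs/kafka_*.jar
```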