Hi,

We are using Kafka 0.6 and testing it on EC2.  We have an issue where some
processes running the ZK-based high-level consumer (the Scala consumer) get
killed before they have a chance to call ConsumerConnector.shutdown().
When that happens, they leave their nodes hanging around in ZK.
If we restart the process, the consumer in the new process errors out
because it cannot claim any partitions; the leftover nodes make the
partitions look like they are still owned.

The only way I know of to get around this is to connect with a ZK client and
manually delete the stale nodes.
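
In case it helps anyone else, below is roughly the cleanup I have been doing,
written in Scala against the plain ZooKeeper Java API.  The connect string,
group name, and the assumption that the stale state lives under
/consumers/<group>/ids and /consumers/<group>/owners are just my best guess
at the layout, so treat it as a sketch rather than anything authoritative
(and obviously don't run it while live consumers in the group are up):

    import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
    import scala.collection.JavaConverters._

    object CleanStaleConsumerNodes {
      def main(args: Array[String]): Unit = {
        // Connect string and group are placeholders; substitute your own.
        val zk = new ZooKeeper("localhost:2181", 30000, new Watcher {
          override def process(event: WatchedEvent): Unit = ()
        })
        val group = "my-consumer-group"

        // Recursively delete a znode and its children; -1 means "any version".
        def deleteRecursive(path: String): Unit =
          if (zk.exists(path, false) != null) {
            zk.getChildren(path, false).asScala
              .foreach(c => deleteRecursive(path + "/" + c))
            zk.delete(path, -1)
          }

        // Drop the stale consumer registrations and partition ownership
        // claims, but leave /consumers/<group>/offsets alone so that
        // committed offsets are preserved.
        deleteRecursive("/consumers/" + group + "/ids")
        deleteRecursive("/consumers/" + group + "/owners")

        zk.close()
      }
    }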

Is there any way for the high-level consumer's nodes in ZK to be made
ephemeral, so that if a process gets killed its state doesn't stick around
forever and prevent subsequent consumers from claiming partitions?
Any chance this has been fixed in 0.7?
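
(For what it's worth, what I have in mind is something along the lines of the
sketch below, using the plain ZooKeeper API with CreateMode.EPHEMERAL; the
path and consumer id are made up, and I realize the real registration code in
Kafka may look nothing like this:)

    import org.apache.zookeeper.ZooDefs.Ids
    import org.apache.zookeeper.{CreateMode, WatchedEvent, Watcher, ZooKeeper}

    object EphemeralRegistrationSketch {
      def main(args: Array[String]): Unit = {
        val zk = new ZooKeeper("localhost:2181", 30000, new Watcher {
          override def process(event: WatchedEvent): Unit = ()
        })

        // An ephemeral znode is tied to the ZK session, so if the consumer
        // process dies without calling shutdown(), the node is removed
        // automatically once the session times out.  (Parent nodes must
        // already exist; the path and id here are purely illustrative.)
        zk.create("/consumers/my-consumer-group/ids/consumer-1",
                  Array.empty[Byte],
                  Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL)

        zk.close()
      }
    }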

thanks,
Evan

-- 
*Evan Chan*
Senior Software Engineer |
e...@ooyala.com | (650) 996-4600
www.ooyala.com | blog <http://www.ooyala.com/blog> |
@ooyala <http://www.twitter.com/ooyala>
