Yes, could you file a jira to track that?

Thanks,

Jun


On Thu, Dec 12, 2013 at 8:58 AM, Barto Rael <abcduser...@gmail.com> wrote:

> Thanks Jun for response.
>
> So the problem I am facing is the expected behavior of Kafka.
>
> I have a suggestion for the case where the number of live brokers is less
> than the number of replicas: the topic should still be created, so that at
> least the live brokers can receive the data. They can replicate the data to
> the other brokers once any downed broker comes back up. As it stands, when
> there are fewer live brokers than replicas, there is a complete loss of
> data.
>
>
>
> On Thu, Dec 12, 2013 at 9:41 PM, Jun Rao <jun...@gmail.com> wrote:
>
> > Currently, when a topic is created, we require the # of live brokers to
> > be at least the # of replicas. Once the topic is created, the # of live
> > brokers can be less than the # of replicas.
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Thu, Dec 12, 2013 at 6:27 AM, Barto Rael <abcduser...@gmail.com>
> wrote:
> >
> > > Hi All,
> > >
> > > We are running a Kafka cluster of 2 nodes (Kafka 0.8.0).
> > > Replication factor: 2
> > > Number of partitions: 2
> > >
> > > If any one node goes down, then the topic is not created in Kafka. Since
> > > we are using two nodes, the second node is automatically elected as
> > > leader if the downed node was the leader.
> > >
> > > Below is the exception trace:
> > >
> > > 2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata
> > > with correlation id 3 for topics [Set(test-topic)] from broker
> > > [id:0,host:122.98.12.11,port:9092] failed
> > > java.net.ConnectException: Connection refused
> > >     at sun.nio.ch.Net.connect(Native Method)
> > >     at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
> > >     at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> > >     at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
> > >     at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
> > >     at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
> > >     at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
> > >     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
> > >     at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
> > >     at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
> > >     at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
> > >     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
> > >     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
> > >     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> > >     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> > >     at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
> > >     at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
> > >     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
> > >     at kafka.producer.Producer.send(Producer.scala:76)
> > >     at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> > >
> > > Please Help !!
> > >
> >
>
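[Editor's note] The creation-time rule Jun describes — topic creation requires at least as many live brokers as replicas, but an existing topic tolerates brokers dropping below that count — can be sketched as follows. This is a simplified, hypothetical Python illustration, not Kafka's actual Scala implementation; the function name `assign_replicas` and its round-robin assignment are invented for the example.

```python
def assign_replicas(live_brokers, num_partitions, replication_factor):
    """Sketch of a creation-time check plus round-robin replica assignment.

    Raises if there are fewer live brokers than the requested replication
    factor -- the behavior Barto hit with 1 of 2 brokers down and a
    replication factor of 2.
    """
    if replication_factor > len(live_brokers):
        raise ValueError(
            "replication factor %d is larger than the number of live brokers %d"
            % (replication_factor, len(live_brokers))
        )
    assignment = {}
    for p in range(num_partitions):
        # Spread replicas across brokers, offsetting by partition id.
        assignment[p] = [
            live_brokers[(p + r) % len(live_brokers)]
            for r in range(replication_factor)
        ]
    return assignment

# Both brokers up: creation succeeds.
print(assign_replicas([0, 1], num_partitions=2, replication_factor=2))

# One broker down: creation is refused, matching the thread.
try:
    assign_replicas([1], num_partitions=2, replication_factor=2)
except ValueError as e:
    print("refused:", e)
```

Note that the check only applies at creation time; once the assignment exists, a broker in it going down makes a replica unavailable but does not invalidate the topic.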