[ https://issues.apache.org/jira/browse/KAFKA-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574920#comment-16574920 ]

Simon Cooper commented on KAFKA-1182:
-------------------------------------

This continues to be a serious problem for us, four and a half years after it 
was first reported. We still need our software to remain functional even when 
the cluster is degraded, and that functionality includes creating Kafka topics.

Could there be an option to force topic creation even when the replication 
factor cannot be satisfied? Then, when the missing nodes come back online, the 
partitions would be replicated properly.
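In the meantime, a manual workaround along these lines is possible. This is only a sketch: the topic name, partition count, and broker ids are illustrative, and the exact flags depend on the Kafka version (ZooKeeper-era releases use --zookeeper where newer ones use --bootstrap-server):

```shell
# 1. Create the topic with a replication factor the surviving brokers can
#    actually satisfy (here: 1 live broker out of 2):
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
    --topic test-topic --partitions 2 --replication-factor 1

# 2. Once the down broker is back, widen each partition's replica set with a
#    reassignment plan (broker ids 0 and 1 are illustrative):
cat > increase-rf.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "test-topic", "partition": 0, "replicas": [0, 1]},
   {"topic": "test-topic", "partition": 1, "replicas": [1, 0]}
 ]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file increase-rf.json --execute
```

This achieves manually what the requested option would do automatically, but it requires an operator to notice the degraded state and run the reassignment after recovery.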

> Topic not created if number of live brokers less than # replicas
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1182
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1182
>             Project: Kafka
>          Issue Type: Improvement
>          Components: producer 
>    Affects Versions: 0.8.0
>         Environment: Centos 6.3
>            Reporter: Hanish Bansal
>            Assignee: Jun Rao
>            Priority: Major
>
> We have a Kafka cluster of 2 nodes (using Kafka 0.8.0).
> Replication factor: 2
> Number of partitions: 2
> Actual behaviour:
> If either of the two nodes goes down, the topic is not created in Kafka.
> Steps to Reproduce:
> 1. Create a 2 node kafka cluster with replication factor 2
> 2. Start the Kafka cluster
> 3. Kill any one node
> 4. Start the producer to write to a new topic
> 5. Observe the exception stated below:
> 2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata with
> correlation id 3 for topics [Set(test-topic)] from broker
> [id:0,host:122.98.12.11,port:9092] failed
> java.net.ConnectException: Connection refused
>     at sun.nio.ch.Net.connect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
>     at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>     at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>     at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>     at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>     at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>     at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>     at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
>     at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>     at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
>     at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
>     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
>     at kafka.producer.Producer.send(Producer.scala:76)
>     at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> Expected behaviour:
> When fewer brokers are live than the replication factor, the topic should
> still be created so that at least the live brokers can receive the data.
> They can replicate the data to the other brokers once any down broker comes
> back up. As it stands, when fewer brokers are live than the replication
> factor, there is complete loss of data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
