Re: Reply: kafka.admin.TopicCommand Failing

2017-11-16 Thread Abhimanyu Nagrath
I am absolutely new to these technologies. Can you assist me with the below-mentioned queries: 1. How to decide the value of zookeeper.connection.timeout.ms? 2. How to check the GC log to see if the STW pauses expired the ZooKeeper sessions? 3. How to tune GC? Regards, Abhimanyu On Fri, Nov

Reply: kafka.admin.TopicCommand Failing

2017-11-16 Thread Hu Xi
Increasing `zookeeper.connection.timeout.ms` to a relatively larger value might help. Besides, you could check the GC log to see if the STW pauses expired the ZooKeeper sessions. From: Abhimanyu Nagrath Sent: November 17, 2017, 13:51 To:
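Hu Xi's two suggestions map to a broker-side setting and a JVM flag. A sketch of the relevant `server.properties` entries (the values shown are illustrative, not recommendations):

```properties
# How long the broker waits to establish its ZooKeeper connection
# before giving up (illustrative value)
zookeeper.connection.timeout.ms=30000
# The session timeout also matters: a stop-the-world GC pause longer
# than this can expire the broker's ZooKeeper session
zookeeper.session.timeout.ms=6000
```

For question 2, on Java 8 (the JVM of this Kafka era) GC logging can be turned on with flags such as `-Xloggc:<path>` and `-XX:+PrintGCApplicationStoppedTime`, and then pause lines compared against the session timeout above.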

Re: kafka.admin.TopicCommand Failing

2017-11-16 Thread Abhimanyu Nagrath
One more thing: checking my kafka-server.log, it is full of the warning "Attempting to send response via channel for which there is no open connection, connection id 2" (kafka.network.Processor). Is this the reason for the above issue? How do I resolve this? Need help, production is breaking.

Re: Queryable state

2017-11-16 Thread Guozhang Wang
Hello Boris, The reason for this check is to make sure that the cluster metadata has been updated at least once, meaning that the instance has gone through the initialization phase of the rebalance and has received the assignment information already. Before this phase, any metadata returned may
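The practical consequence of Guozhang's point is that callers should not trust metadata until the instance has finished its first rebalance; a common client-side pattern is to retry until it reports ready. A minimal sketch in plain Python (the `ToyStore`, `query`, and `NotReadyError` names are hypothetical stand-ins, not Kafka Streams API):

```python
import time

class NotReadyError(Exception):
    """Raised while the instance has not finished its first rebalance."""

class ToyStore:
    """Stand-in for a state store that is unavailable during initialization."""
    def __init__(self, ready_after):
        self.calls = 0
        self.ready_after = ready_after

    def query(self):
        self.calls += 1
        if self.calls <= self.ready_after:
            raise NotReadyError("still initializing")
        return "value"

def query_with_retry(store, retries=5, backoff_s=0.01):
    """Retry until the instance is initialized, then return the result."""
    for _ in range(retries):
        try:
            return store.query()
        except NotReadyError:
            time.sleep(backoff_s)  # give the rebalance time to complete
    raise NotReadyError("not initialized after %d retries" % retries)
```

The same retry-with-backoff shape applies to any "metadata not ready yet" error during the initialization phase described above.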

Re: How to handle messages that don't find a join partner in Streams?

2017-11-16 Thread Guozhang Wang
Hello Michael, sorry for the late reply. If your application logic is the following: 1) output (msgA, msgB) when msgA is under processing and msgB is already available, or 2) output (msgA, null) when processing msgA while msgB for the same topic does not exist, then the pattern you are going
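The two cases Guozhang enumerates are exactly left-join semantics. A plain-Python sketch of the behavior (no Kafka Streams API involved; keys and message names are illustrative):

```python
def left_join(stream_msgs, table):
    """Emit (msgA, msgB) when a join partner exists, (msgA, None) otherwise."""
    out = []
    for key, msg_a in stream_msgs:
        msg_b = table.get(key)  # None when no partner has arrived yet
        out.append((msg_a, msg_b))
    return out

# "k2" has no partner in the table, so it joins against None
table = {"k1": "msgB"}
print(left_join([("k1", "msgA"), ("k2", "msgC")], table))
```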

Connect continuously calling commit on failed task

2017-11-16 Thread Luke Steensen
Hello, We're developing a Kafka Connect plugin and seeing some strange behavior around error handling. When an exception is thrown in the task's poll method, the task transitions into the failed state as expected. However, when I watch the logs, I still see errors being logged from the commit

log retention policy issure

2017-11-16 Thread 张明富
Hi, From Kafka's documentation I found: "The Kafka cluster retains all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for
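The two-day example quoted from the documentation corresponds to the broker's retention settings; a sketch of the relevant `server.properties` entries (values illustrative):

```properties
# Retain records for two days; log.retention.ms, if set, takes precedence
# over the coarser log.retention.hours
log.retention.hours=48
# Retention can also be capped by size per partition (-1 = no size limit)
log.retention.bytes=-1
```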

The KafkaConsumer reads randomly from the offset 0

2017-11-16 Thread dali.midou2...@gmail.com
I want to test a Kafka example. I am using Kafka 0.10.0.1. The producer: object ProducerApp extends App { val topic = "topicTest" val props = new Properties() props.put("bootstrap.servers", "localhost:9092") props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer") props.put("key.serializer",
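One detail visible in the snippet: `group.id` (set via `ConsumerConfig.GROUP_ID_CONFIG`) is a consumer setting and has no effect in producer properties. Reads restarting from offset 0 are typically governed by the consumer-side settings below; a sketch with illustrative values:

```properties
# Where to start when the group has no committed offset for a partition:
# "earliest" restarts from offset 0, "latest" skips to the end
auto.offset.reset=earliest
# Offsets must be committed (auto or manual) for the position to survive restarts
enable.auto.commit=true
group.id=consumer
```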

Re: What are reasonable limits for max number of consumer groups per partition and per broker?

2017-11-16 Thread Avi Levi
I think you will find this article https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ useful On Tue, Nov 14, 2017 at 6:49 PM, Viktor Somogyi wrote: > Hi Jeff, > > I think it's also worth considering that 1K consumer
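For reference, the rule of thumb from that article: to sustain a target throughput t with a measured per-partition producer throughput p and consumer throughput c, you need at least max(t/p, t/c) partitions. A quick arithmetic sketch (the throughput numbers are illustrative):

```python
import math

def min_partitions(target_mb_s, producer_mb_s_per_part, consumer_mb_s_per_part):
    """Lower bound on partition count from the throughput rule of thumb."""
    return math.ceil(max(target_mb_s / producer_mb_s_per_part,
                         target_mb_s / consumer_mb_s_per_part))

# e.g. 200 MB/s target, 10 MB/s per partition on the producer side,
# 20 MB/s per partition on the consumer side -> the producer side dominates
print(min_partitions(200, 10, 20))  # 20
```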

kafka.admin.TopicCommand Failing

2017-11-16 Thread Abhimanyu Nagrath
Hi, I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer groups and 500 topics, each having 250 partitions. I am able to execute the commands which require only the Kafka broker, and they run fine, e.g. >
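The scale in this setup is worth spelling out, since it may explain the failures by itself: 500 topics at 250 partitions each is 125,000 partitions on one broker, far beyond the few-thousand-per-broker range commonly cited as comfortable in this Kafka era. The arithmetic:

```python
topics = 500
partitions_per_topic = 250
total_partitions = topics * partitions_per_topic
# Each partition costs ZooKeeper znodes, file handles, and leadership work,
# so this count lands entirely on the single broker and single ZooKeeper node.
print(total_partitions)  # 125000
```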

Re: How to set result value Serdes Class in Kafka stream join

2017-11-16 Thread sy.pan
Got it, thank you Damian > On November 16, 2017, at 18:55, Damian Guy wrote: > > Hi, > > You don't need to set the serde until you do another operation that > requires serialization, i.e., if you followed the join with a `to()`, > `groupBy()` etc, you would pass in the serde to that

Re: How to set result value Serdes Class in Kafka stream join

2017-11-16 Thread Damian Guy
Hi, You don't need to set the serde until you do another operation that requires serialization, i.e., if you followed the join with a `to()`, `groupBy()` etc, you would pass in the serde to that operation. Thanks, Damian On Thu, 16 Nov 2017 at 10:53 sy.pan wrote: > Hi,

How to set result value Serdes Class in Kafka stream join

2017-11-16 Thread sy.pan
Hi, all: Recently I have read the Kafka Streams join documentation (https://docs.confluent.io/current/streams/developer-guide.html#kafka-streams-dsl ). The sample code is pasted below: import

Re: [VOTE] 0.11.0.2 RC0

2017-11-16 Thread Rajini Sivaram
Correction from previous note: Vote closed with 3 binding PMC votes (Gwen, Guozhang, Ismael) and 4 non-binding votes. On Thu, Nov 16, 2017 at 10:03 AM, Rajini Sivaram wrote: > +1 from me > > The vote has passed with 4 binding votes (Gwen, Guozhang, Ismael and >

Re: [VOTE] 0.11.0.2 RC0

2017-11-16 Thread Rajini Sivaram
+1 from me The vote has passed with 4 binding votes (Gwen, Guozhang, Ismael and Rajini) and 3 non-binding votes (Ted, Satish and Tim). I will close the voting thread and complete the release process. Many thanks to everyone for voting. Regards, Rajini On Thu, Nov 16, 2017 at 3:01 AM, Ismael
