Re: questions

2015-05-22 Thread ram kumar
Thanks for the reply, but I need to know the per-partition size (bytes). On Fri, May 22, 2015 at 11:47 AM, Warren Henning warren.henn...@gmail.com wrote: Yes, you can specify the partition count when creating a topic. http://kafka.apache.org/documentation.html#quickstart On Thu, May 21, 2015
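
For reference, a minimal sketch of the quickstart create command referenced above (topic name and ZooKeeper address are placeholders):
  bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic my-topic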

Re: questions

2015-05-22 Thread ram kumar
Max size of a partition. On Fri, May 22, 2015 at 12:04 PM, Carles Sistaré carles.sist...@googlemail.com wrote: Hi, if I understood correctly, do you need to specify the maximum size of a partition, or do you just need to know the actual size of your partitions? On 22 May 2015 8:25 AM, ram kumar

Replica manager exception in broker

2015-05-22 Thread tao xiao
Hi team, one of the brokers keeps getting the exception below. [2015-05-21 23:56:52,687] ERROR [Replica Manager on Broker 15]: Error when processing fetch request for partition [test1,0] offset 206845418 from consumer with correlation id 93748260. Possible cause: Request for offset 206845418 but we

Re: questions

2015-05-22 Thread Carles Sistaré
Hi, if I understood correctly, do you need to specify the maximum size of a partition, or do you just need to know the actual size of your partitions? On 22 May 2015 8:25 AM, ram kumar ramkumarro...@gmail.com wrote: Thanks for the reply, but I need to know the per-partition size (bytes). On Fri, May

Re: questions

2015-05-22 Thread Carles Sistare
I am afraid you can’t: the number of partitions can be increased manually, but not dynamically as a function of your partition size. On 22 May 2015, at 09:42, ram kumar ramkumarro...@gmail.com wrote: max size of partition On Fri, May 22, 2015 at 12:04 PM, Carles Sistaré
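
A minimal sketch of the manual route (the partition count can only be increased; topic name and ZooKeeper address are placeholders):
  bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic my-topic --partitions 6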

questions

2015-05-22 Thread ram kumar
Hi, can we specify the partition size of a particular topic?

How to verify /update offsets in 0.8.2.1 ?

2015-05-22 Thread Marina
Hi, I would like to inspect the current offsets for my topic/partitions from the command line, and update them when needed. I can use kafka.tools.ConsumerOffsetChecker to view the offsets as follows: ./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group elastic_search_group
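
A minimal sketch of the full invocation (topic name and ZooKeeper address are placeholders; the group name comes from the message above):
  ./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group elastic_search_group --topic my-topic --zookeeper localhost:2181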

Kafka broker - IP address instead of host name

2015-05-22 Thread Achanta Vamsi Subhash
Hi, currently Kafka brokers register the hostname in ZooKeeper. [zk: localhost:2181(CONNECTED) 5] get /varadhi/kafka/brokers/ids/0 {jmx_port:,timestamp:1427704934158,host:currHostName,version:1,port:9092} Is there any config to make it use the IP address instead, so that we don't make a DNS
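
A minimal server.properties sketch, assuming the 0.8.x host.name/advertised.host.name settings and a placeholder address:
  host.name=10.0.0.12
  advertised.host.name=10.0.0.12
  port=9092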

Mirrormaker stops consuming

2015-05-22 Thread Rajasekar Elango
We recently upgraded to Kafka 0.8.2.1 and found an issue with mirrormaker that randomly stops consuming. We had to restart the mirrormaker process to resolve the problem. This has occurred several times in the past two weeks. Here is what I found in my analysis: When this problem happens:

Re: Mirrormaker stops consuming

2015-05-22 Thread Joel Koshy
The issue is that multiple consumers feed into all the data channels. So they will all eventually block if any data channel becomes full. The mirror maker on trunk is significantly different so this is not an issue on trunk. On Fri, May 22, 2015 at 12:37:01PM -0400, Rajasekar Elango wrote:

Re: Replica manager exception in broker

2015-05-22 Thread Joel Koshy
When you say "keeps getting the below exception" I'm assuming that the error offset (206845418) keeps changing, right? We saw a similar issue in the past and it turned out to be due to a NIC issue - i.e., it negotiated at a low speed, so the replica fetcher couldn't keep up with the leader. I.e., while
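
A quick way to check the negotiated link speed on the affected broker (the interface name is a placeholder):
  ethtool eth0 | grep -i speed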

Re: Mirrormaker stops consuming

2015-05-22 Thread Rajasekar Elango
Thanks for the pointers, Joel. Will look into SSLSocketChannel. Yes, this was working fine before the upgrade. If it's just one producer thread stuck on write, it might affect only one consumer thread/partition, but we found that consuming stopped for all topics/partitions. Or is it only a single data channel

Re: Mirrormaker stops consuming

2015-05-22 Thread Joel Koshy
The thread dump suggests that one of the producers (mirrormaker-producer-6) is blocked on write for some reason, so the data channel for that producer (which sits between the consumers and the producer) is full, which blocks the consumers from progressing. This appears to be in your (custom)
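
A sketch of capturing the same evidence on a running mirror maker (the PID is a placeholder; the thread name comes from the message above):
  jstack <mirror-maker-pid> | grep -A 15 mirrormaker-producer-6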

Re: Mirrormaker stops consuming

2015-05-22 Thread tao xiao
It is possible that the message produce rate is slower than the message consume rate, which results in insufficient space left in the internal data channel mirror maker uses to buffer data from consumer to producer. You can check the MirrorMaker-DataChannel-Size histogram to see if any space
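
A sketch of polling that metric with the bundled JmxTool; the JMX port and the exact MBean object name are assumptions, so verify the name in jconsole first:
  ./bin/kafka-run-class.sh kafka.tools.JmxTool --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi --object-name 'kafka.tools:type=MirrorMaker,name=MirrorMaker-DataChannel-Size'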

Re: questions

2015-05-22 Thread Lance Laursen
The maximum size of a partition is defined by log.retention.bytes. You can define this in your server.properties, as well as upon topic creation with --config retention.bytes=12345. You can also define log.retention.bytes.per.topic: https://kafka.apache.org/08/configuration.html On Fri, May 22, 2015
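
A minimal sketch of both routes (sizes, topic name, and ZooKeeper address are placeholders):
  # per-topic, at creation time:
  bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic my-topic --partitions 3 --replication-factor 1 --config retention.bytes=1073741824
  # broker-wide default, in server.properties:
  log.retention.bytes=1073741824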

Re: Replica manager exception in broker

2015-05-22 Thread Joel Koshy
Sorry about that - I thought this was the follower since you mentioned "This is the follower broker of topic test1..." in your email. So this is a different issue. Consumer requests should go to the leader; for some reason, this particular broker does not seem to know that - it would have
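
A quick way to confirm which broker the cluster considers the leader for that partition (ZooKeeper address is a placeholder):
  bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1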

Re: Replica manager exception in broker

2015-05-22 Thread tao xiao
Hi Joel, the error offset 206845418 didn't change. The only thing that changed was the correlation id, which kept incrementing. The broker is the follower, and I saw similar error messages for other topics the broker was a follower for. As indicated by the log, this is a request coming from a
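
For comparison, a sketch of querying the leader for that partition's earliest and latest valid offsets (broker address is a placeholder; --time -1 is latest, -2 is earliest):
  ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test1 --time -1
  ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test1 --time -2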