Re: Offset storage

2015-10-29 Thread pushkar priyadarshi
Storing offsets in Kafka frees ZooKeeper from offset-sync writes, so I think it's the preferred option whenever possible. On Thursday, October 29, 2015, Mayuresh Gharat wrote: > You can use either of them. > The new kafka consumer (still under development) does not
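
For reference, a minimal sketch of enabling Kafka-based offset storage with the 0.8.2 high-level consumer (the host names and group id below are placeholders, not from the thread):

    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;

    Properties props = new Properties();
    props.put("zookeeper.connect", "zk1:2181");      // still required by the high-level consumer
    props.put("group.id", "my-group");
    props.put("offsets.storage", "kafka");           // commit offsets to Kafka instead of ZooKeeper
    props.put("dual.commit.enabled", "false");       // set to true only while migrating an existing group
    ConsumerConfig config = new ConsumerConfig(props);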

Re: Consumer of multiple topic

2015-10-23 Thread pushkar priyadarshi
Currently there is no partition-based subscription within a topic. So when you subscribe to both topics, your consumer will get data from every partition in those two topics; I don't think you would be missing anything. On Fri, Oct 23, 2015 at 11:35 AM, Fajar Maulana Firdaus
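
As a rough illustration of subscribing to two topics with the 0.8 high-level consumer (topic names, group id, and ZooKeeper address are placeholders):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    Properties props = new Properties();
    props.put("zookeeper.connect", "zk1:2181");
    props.put("group.id", "my-group");
    ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

    // one stream per topic; each stream receives messages from all partitions of that topic
    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put("topicA", 1);
    topicCountMap.put("topicB", 1);
    Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topicCountMap);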

Re: What happens when ISR is behind leader

2015-10-01 Thread pushkar priyadarshi
Hi, There are two properties that determine when a replica falls out of sync: look for replica.lag.time.max.ms and replica.lag.max.messages. If a replica goes out of sync, it would not even be considered for leader election. Regards, Pushkar On Wed, Sep 30, 2015 at 9:44 AM, Shushant
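
For reference, these are broker-side settings in server.properties; the values below are the 0.8.x defaults as far as I recall, so treat them as illustrative only:

    # follower is dropped from the ISR if it has not fetched for this long
    replica.lag.time.max.ms=10000
    # follower is dropped from the ISR if it falls this many messages behind the leader
    replica.lag.max.messages=4000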

Kafka BrokerTopicMetrics MessageInPerSec rate

2015-07-15 Thread pushkar priyadarshi
Hi, While benchmarking the new producer and a consumer syncing offsets in ZooKeeper, I see that the MessageInRate reported in BrokerTopicMetrics is not the same as the rate at which I am able to publish and consume messages. Using my own custom reporter I can see the rate at which messages are published and

Re: Fetching details from Kafka Server

2015-07-13 Thread pushkar priyadarshi
2) You need to implement MetricsReporter and provide that implementation's class name via the producer-side configuration metric.reporters. On Mon, Jul 13, 2015 at 9:08 PM, Swati Suman swatisuman1...@gmail.com wrote: Hi Team, We are using Kafka 0.8.2 I have two questions: 1) Is there any Java
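
A bare-bones sketch of such a reporter, assuming the 0.8.2-era org.apache.kafka.common.metrics.MetricsReporter interface (the class name and println output are my own, not from the thread):

    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.metrics.KafkaMetric;
    import org.apache.kafka.common.metrics.MetricsReporter;

    public class LoggingMetricsReporter implements MetricsReporter {
        @Override
        public void configure(Map<String, ?> configs) { }   // from Configurable; reporter-specific settings arrive here

        @Override
        public void init(List<KafkaMetric> metrics) {
            // called once with the metrics that already exist when the reporter is registered
            for (KafkaMetric metric : metrics) {
                System.out.println("registered: " + metric.metricName());
            }
        }

        @Override
        public void metricChange(KafkaMetric metric) {
            // called whenever a metric is added or updated
            System.out.println(metric.metricName().name() + " = " + metric.value());
        }

        @Override
        public void close() { }
    }

It is then wired in on the producer side with something like props.put("metric.reporters", "LoggingMetricsReporter") (use the fully qualified class name if it lives in a package).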

Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
Hi, The documentation for the new producer allows passing acks=2 (or any other numeric value), but when I actually pass anything other than 0, 1, or -1, I see the following warning in the broker log: Client producer-1 from /X.x.x.x:50105 sent a produce request with request.required.acks of 2, which is now deprecated
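
For what it's worth, a minimal sketch of the 0.8.2 new producer with a valid acks value (broker address and topic are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");
    props.put("acks", "-1");   // wait for the full ISR; only 0, 1 and -1/"all" are meaningful
    props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

    KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
    producer.send(new ProducerRecord<byte[], byte[]>("my-topic", "hello".getBytes()));
    producer.close();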

Re: Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: Hi, The documentation for the new producer allows passing acks=2 (or any other numeric value), but when I actually pass anything other than 0, 1, or -1, I see the following warning in the broker log: Client producer-1 from /X.x.x.x:50105 sent

Re: Kafka Zookeeper queries

2015-04-21 Thread pushkar priyadarshi
To my knowledge, if you are using 0.8.2.1, which is the latest stable release, you can sync your consumer offsets in Kafka itself instead of ZooKeeper, which further reduces the write load on the ZooKeeper ensemble. Regards, Pushkar On Tue, Apr 21, 2015 at 1:13 PM, Jiangjie Qin j...@linkedin.com.invalid wrote: 2 partitions

Re: Warn No Checkpointed highwatermark is found for partition

2015-04-21 Thread pushkar priyadarshi
, Apr 21, 2015 at 3:07 PM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: I get warnings saying 'No checkpointed highwatermark is found for partition' in server.log when trying to create a new topic. What does this mean? Though this is only a warning, I was curious to know

Warn No Checkpointed highwatermark is found for partition

2015-04-21 Thread pushkar priyadarshi
I get warnings saying 'No checkpointed highwatermark is found for partition' in server.log when trying to create a new topic. What does this mean? Though this is only a warning, I was curious to know whether it implies any potential problem. Thanks And Regards, Pushkar

Re: Which version works for kafka 0.8.2 as consumer?

2015-04-01 Thread pushkar priyadarshi
So in 0.8.2.0/0.8.2.1 the high-level consumer cannot make use of offset syncing in Kafka? On Wed, Apr 1, 2015 at 12:51 PM, Jiangjie Qin j...@linkedin.com.invalid wrote: Yes, KafkaConsumer in 0.8.2 is still in development. You probably still want to use ZookeeperConsumerConnector for now. On

using 0.8.2 in production

2015-03-30 Thread pushkar priyadarshi
Hi, I remember some time back people were asked not to upgrade to 0.8.2. I wanted to know whether the issues behind that have been resolved and whether it is now safe to migrate to 0.8.2. Thanks And Regards, Pushkar

Re: Interested in contributing to Kafka?

2014-07-16 Thread pushkar priyadarshi
I have been using Kafka for quite some time now and would really be interested in contributing to this awesome code base. Regards, Pushkar On Thu, Jul 17, 2014 at 7:17 AM, Joe Stein joe.st...@stealth.ly wrote: ./gradlew scaladoc Builds the scala doc, perhaps we can start to publish this again

Re: Help is processing huge data through Kafka-storm cluster

2014-06-15 Thread pushkar priyadarshi
What throughput are you getting from your Kafka cluster alone? Storm throughput can depend on what processing you are actually doing inside it, so you must look at each component, starting with Kafka first. Regards, Pushkar On Sat, Jun 14, 2014 at 8:44 PM, Shaikh Ahmed rnsr.sha...@gmail.com

Re: Help is processing huge data through Kafka-storm cluster

2014-06-15 Thread pushkar priyadarshi
affected, since if the consumer lags behind too much it will result in disk seeks while consuming the older messages. On Sun, Jun 15, 2014 at 8:16 PM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: What throughput are you getting from your Kafka cluster alone? Storm throughput can

Re: Sync Producer

2014-06-08 Thread pushkar priyadarshi
Setting the config is the way to use async. It throws an exception when unable to send a message. On Sun, Jun 8, 2014 at 12:46 PM, Achanta Vamsi Subhash achanta.va...@flipkart.com wrote: - Is setting type in config of the producer to sync the way? - Is the exception thrown a Runtime
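
For context, a minimal sketch of the old (0.8) producer with producer.type set explicitly (broker addresses and topic are placeholders); in sync mode send() blocks and, after exhausting retries, surfaces a RuntimeException (FailedToSendMessageException, if I recall correctly):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("producer.type", "sync");           // block on each send instead of batching asynchronously
    props.put("request.required.acks", "1");

    Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
    producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
    producer.close();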

Re: New Metrics Reporter for Graphite

2014-05-22 Thread pushkar priyadarshi
Hello Damien, I'm also using the same thing for pushing to Graphite (forked from the Ganglia one), but I don't see default JVM parameters like OS metrics being pushed to Graphite. Have you checked your version? Are you able to push these metrics as well? On Thu, May 22, 2014 at 8:02 PM, Jun Rao jun...@gmail.com

Re: Kafka: writing custom Encoder/Serializer

2014-05-20 Thread pushkar priyadarshi
You can send the byte[] that you get from your own serializer through Kafka. On the receiving side you can deserialize from the byte[] and read back your object. To use this you will have to supply serializer.class=kafka.serializer.DefaultEncoder in the properties. On Tue, May 20, 2014 at

Re: Kafka: writing custom Encoder/Serializer

2014-05-20 Thread pushkar priyadarshi
) throws IOException, ClassNotFoundException { ByteArrayInputStream b = new ByteArrayInputStream(bytes); ObjectInputStream o = new ObjectInputStream(b); return o.readObject(); } } pushkar priyadarshi priyadarshi.push...@gmail.com 5/20/2014 5:11 PM
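
For completeness, a small sketch of both directions using plain Java serialization (the class and method names are my own); the resulting byte[] can then be sent as-is when serializer.class is kafka.serializer.DefaultEncoder:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    public class ObjectCodec {
        // Object -> byte[] for the producer side
        public static byte[] serialize(Object obj) throws IOException {
            ByteArrayOutputStream b = new ByteArrayOutputStream();
            ObjectOutputStream o = new ObjectOutputStream(b);
            o.writeObject(obj);
            o.flush();
            return b.toByteArray();
        }

        // byte[] -> Object for the consumer side
        public static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
            ByteArrayInputStream b = new ByteArrayInputStream(bytes);
            ObjectInputStream o = new ObjectInputStream(b);
            return o.readObject();
        }
    }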

Re: Kafka Performance Tuning

2014-04-24 Thread pushkar priyadarshi
You can use kafka-list-topic.sh to find out whether the leader for a particular topic is available; -1 in the leader column might indicate trouble. On Fri, Apr 25, 2014 at 6:34 AM, Guozhang Wang wangg...@gmail.com wrote: Could you double check if the topic LOGFILE04 is already created on the servers?

Re: Review for the new consumer APIs

2014-04-08 Thread pushkar priyadarshi
I was trying to understand why, when we have subscribe, poll is a separate API. Why can't we pass a callback in subscribe itself? On Mon, Apr 7, 2014 at 9:51 PM, Neha Narkhede neha.narkh...@gmail.com wrote: Hi, I'm looking for people to review the new consumer APIs. Patch is posted at

Re: Puppet module for deploying Kafka released

2014-02-26 Thread pushkar priyadarshi
I have been using the one from here: https://github.com/whisklabs/puppet-kafka, but had to fix a few small problems; for example, when it starts Kafka as an upstart service it does not provide a log path, so Kafka logs never appear, since as a service there is no default terminal. Thanks for sharing. Will start

Re: Kafka High Level Consumer Fetch All Messages From Topic Using Java API (Equivalent to --from-beginning)

2014-02-14 Thread pushkar priyadarshi
I don't think there is any direct high-level API equivalent to this. Every time you read messages using the high-level API, your offset gets synced in ZooKeeper. The auto offset setting is for cases where the last-read offset has, for example, been purged and, rather than getting an exception, you want to just fall back to
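
In practice the closest equivalent with the high-level consumer is a consumer group that has no committed offsets combined with auto.offset.reset=smallest, roughly like this (the group id and ZooKeeper address are placeholders):

    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;

    Properties props = new Properties();
    props.put("zookeeper.connect", "zk1:2181");
    props.put("group.id", "fresh-group-with-no-offsets");  // a brand-new group has nothing checkpointed
    props.put("auto.offset.reset", "smallest");            // so it falls back to the earliest available offset
    ConsumerConfig config = new ConsumerConfig(props);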

Pattern for using kafka producer API

2014-02-09 Thread pushkar priyadarshi
What is the most appropriate design for using the Kafka producer from a performance point of view? I had a few in mind. 1. Since a single Kafka producer object has synchronization, using a single producer object from multiple threads might not be efficient, so one way would be to use multiple Kafka producer
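
One way to sketch the first option is a producer per thread, so sends never contend on a shared object (all names below are mine, and this is only an illustration of the idea, not a recommendation from the docs):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class PerThreadProducer {
        private static final Properties PROPS = new Properties();
        static {
            PROPS.put("metadata.broker.list", "broker1:9092");
            PROPS.put("serializer.class", "kafka.serializer.StringEncoder");
        }

        // each thread lazily creates its own producer instance, avoiding cross-thread synchronization
        private static final ThreadLocal<Producer<String, String>> PRODUCER =
            new ThreadLocal<Producer<String, String>>() {
                @Override
                protected Producer<String, String> initialValue() {
                    return new Producer<String, String>(new ProducerConfig(PROPS));
                }
            };

        public static void send(String topic, String message) {
            PRODUCER.get().send(new KeyedMessage<String, String>(topic, message));
        }
    }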

Re: which zookeeper version

2014-01-02 Thread pushkar priyadarshi
Thanks Jason. On Thu, Jan 2, 2014 at 7:04 PM, Jason Rosenberg j...@squareup.com wrote: Hi Pushkar, We've been using zk 3.4.5 for several months now, without any problems, in production. Jason On Thu, Jan 2, 2014 at 1:15 AM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: Hi

which zookeeper version

2014-01-01 Thread pushkar priyadarshi
Hi, I am starting a fresh deployment of Kafka + ZooKeeper. Looking at the ZooKeeper releases, I find 3.4.5 old and stable enough. Has anyone used it before in production? The Kafka ops wiki page says the LinkedIn deployment still uses 3.3.4; is there any specific reason for that? Thanks And Regards, Pushkar

Re: doubt regarding the metadata.brokers.list parameter in producer properties

2013-12-19 Thread pushkar priyadarshi
1. When you start producing: at this point, if any of your supplied brokers is alive, the system will continue to work. 2. A broker going down and coming up with a new IP: the producer API refreshes metadata information on failures (configurable), so it should be able to detect new brokers. But I don't think it's

Re: doubt regarding the metadata.brokers.list parameter in producer properties

2013-12-19 Thread pushkar priyadarshi
:33 PM, pushkar priyadarshi wrote: 1. When you start producing: at this point, if any of your supplied brokers is alive, the system will continue to work. 2. A broker going down and coming up with a new IP: the producer API refreshes metadata information on failures (configurable), so it should be able

kafka build error scala 2.10

2013-12-18 Thread pushkar priyadarshi
While doing the dev setup as described in https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup I'm getting the following build errors: immutable is already defined as class immutable Annotations_2.9+.scala /KafkaEclipse/core/src/main/scala/kafka/utils line 38 Scala Problem threadsafe is

Re: regarding run-simulator.sh

2013-12-18 Thread pushkar priyadarshi
I see many tools mentioned for perf testing here: https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing. Of all these, which ones already exist in the 0.8 release? E.g. I was not able to find jmx-dump.sh, the R script, etc. anywhere. On Wed, Dec 18, 2013 at 11:01 AM, pushkar priyadarshi

Re: Data loss in case of request.required.acks set to -1

2013-12-18 Thread pushkar priyadarshi
You can try setting a higher value for message.send.max.retries in the producer config. Regards, Pushkar On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal hanish.bansal.agar...@gmail.com wrote: Hi All, We have a Kafka cluster of 2 nodes (using the 0.8.0 final release). Replication Factor: 2
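
Roughly, in the old producer's config those retry-related settings look like this (the values are only illustrative, not recommendations):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092");
    props.put("request.required.acks", "-1");       // wait for all in-sync replicas
    props.put("message.send.max.retries", "10");    // default is 3
    props.put("retry.backoff.ms", "200");           // pause before refreshing metadata and retrying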

Re: Data loss in case of request.required.acks set to -1

2013-12-18 Thread pushkar priyadarshi
with message.send.max.retries configured to 10. The default value for this is 3, but we are still facing data loss. On Wed, Dec 18, 2013 at 12:44 PM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: You can try setting a higher value for message.send.max.retries

Re: kafka build error scala 2.10

2013-12-18 Thread pushkar priyadarshi
, 2013 at 12:16 AM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: While doing the dev setup as described in https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup I'm getting the following build errors: immutable is already defined as class immutable Annotations_2.9+.scala

regarding run-simulator.sh

2013-12-17 Thread pushkar priyadarshi
I am not able to find run-simulator.sh in 0.8 even after building perf. If this tool has been deprecated, what other alternatives are available now for perf testing? Regards, Pushkar

Re: regarding run-simulator.sh

2013-12-17 Thread pushkar priyadarshi
Thanks Jun. On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao jun...@gmail.com wrote: You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh. Thanks, Jun On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi priyadarshi.push...@gmail.com wrote: I am not able to find run