Storing offsets in Kafka frees ZooKeeper from offset-commit writes, so I
think it is the preferred option to use whenever possible.
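For the 0.8.2 high-level consumer this is driven by a couple of consumer properties; a minimal sketch (values illustrative):

```properties
# Commit offsets to Kafka instead of ZooKeeper
offsets.storage=kafka
# During migration, also commit to ZooKeeper so older consumers can still read them
dual.commit.enabled=true
```

Once all consumers in the group are upgraded, dual.commit.enabled can be turned off.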
On Thursday, October 29, 2015, Mayuresh Gharat
wrote:
> You can use either of them.
> The new kafka consumer (still under development) does not
Currently there is no partition-based subscription within a topic. So when
you subscribe to both topics, your consumer will get data from every
partition of those two topics; I don't think you would be missing anything.
On Fri, Oct 23, 2015 at 11:35 AM, Fajar Maulana Firdaus
Hi,
There are two properties which determine when a replica falls out of
sync. Look for replica.lag.time.max.ms and replica.lag.max.messages. If a
replica goes out of sync, it will not even be considered for leader
election.
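For reference, a broker-side sketch of the two properties (the values shown are, to my knowledge, the 0.8 defaults, so treat them as illustrative):

```properties
# A follower is dropped from the ISR if it has not sent a fetch request for this long (ms)
replica.lag.time.max.ms=10000
# ...or if it has fallen more than this many messages behind the leader
replica.lag.max.messages=4000
```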
Regards,
Pushkar
On Wed, Sep 30, 2015 at 9:44 AM, Shushant
Hi,
While benchmarking the new producer and a consumer syncing offsets in
ZooKeeper, I see that the MessageInRate reported in BrokerTopicMetrics is
not the same as the rate at which I am able to publish and consume messages.
Using my own custom reporter I can see the rate at which messages are
published and
2) You need to implement MetricReporter and provide that implementation
class name via the producer-side configuration metric.reporters.
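As a sketch, the producer-side wiring looks like this (the reporter class name here is hypothetical; it must implement the MetricsReporter interface from the new clients library):

```properties
# Comma-separated list of classes implementing org.apache.kafka.common.metrics.MetricsReporter
metric.reporters=com.example.MyMetricsReporter
```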
On Mon, Jul 13, 2015 at 9:08 PM, Swati Suman swatisuman1...@gmail.com
wrote:
Hi Team,
We are using Kafka 0.8.2
I have two questions:
1)Is there any Java
Hi,
The documentation for the new producer allows passing acks=2 (or any other
numeric value), but when I actually pass anything other than 0, 1, or -1, I
see the following warning in the broker log:
Client producer-1 from /X.x.x.x:50105 sent a produce request with
request.required.acks of 2, which is now deprecated
To my knowledge, if you are using 0.8.2.1, which is the latest stable
release, you can commit your consumer offsets in Kafka itself instead of
ZK, which further brings down the write load on the ZooKeepers.
Regards,
Pushkar
On Tue, Apr 21, 2015 at 1:13 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
2 partitions
, Apr 21, 2015 at 3:07 PM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote:
I get warnings in the server log saying "No checkpointed highwatermark is
found for partition" when trying to create a new topic.
What does this mean? Though this is only a warning, I was curious to know
if it implies any potential problem.
Thanks And Regards,
Pushkar
So in 0.8.2.0/0.8.2.1 the high-level consumer cannot make use of offset
storage in Kafka?
On Wed, Apr 1, 2015 at 12:51 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Yes, KafkaConsumer in 0.8.2 is still in development. You probably still
want to use ZookeeperConsumerConnector for now.
On
Hi,
I remember some time back people were asked not to upgrade to 0.8.2. I
wanted to know if the issues pertaining to that are resolved now, and
whether it is safe to migrate to 0.8.2.
Thanks And Regards,
Pushkar
I have been using Kafka for quite some time now and would really be
interested in contributing to this awesome code base.
Regards,
Pushkar
On Thu, Jul 17, 2014 at 7:17 AM, Joe Stein joe.st...@stealth.ly wrote:
./gradlew scaladoc
Builds the scala doc, perhaps we can start to publish this again
What throughput are you getting from your Kafka cluster alone? Storm
throughput can depend on what processing you are actually doing
inside it, so you must look at each component, starting with Kafka first.
Regards,
Pushkar
On Sat, Jun 14, 2014 at 8:44 PM, Shaikh Ahmed rnsr.sha...@gmail.com
affected, since if the consumer lags behind too much it will result in disk
seeks while consuming the older messages.
On Sun, Jun 15, 2014 at 8:16 PM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote:
Setting that config is the way to use async. It throws an exception when it
is unable to send a message.
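For reference, the setting in question is the old producer's producer.type; a sketch with its two legal values:

```properties
# sync: send() blocks and throws on failure
# async: messages are batched and sent by a background thread
producer.type=sync
```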
On Sun, Jun 8, 2014 at 12:46 PM, Achanta Vamsi Subhash
achanta.va...@flipkart.com wrote:
- Is setting type in config of the producer to sync the way?
- Is the exception thrown a Runtime
Hello Damien,
I'm also using the same thing for pushing to Graphite (forked from the
Ganglia reporter), but I don't see default JVM parameters like OS metrics
being pushed to Graphite. Have you checked your version? Are you able to
push these metrics as well?
On Thu, May 22, 2014 at 8:02 PM, Jun Rao jun...@gmail.com
You can send a byte[] that you get by using your own serializer through
Kafka. On the receiving side you can deserialize from the byte[] and read
back your object. To use this you will have to supply
serializer.class=kafka.serializer.DefaultEncoder in the properties.
On Tue, May 20, 2014 at
public static Object deserialize(byte[] bytes) throws IOException,
        ClassNotFoundException {
    ByteArrayInputStream b = new ByteArrayInputStream(bytes);
    ObjectInputStream o = new ObjectInputStream(b);
    return o.readObject();
}
}
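For completeness, a matching serialize helper to build the byte[] on the producing side (a minimal sketch using plain Java serialization; the SerDe class and method names are my own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerDe {
    // Turn any Serializable object into the byte[] handed to the producer
    public static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        ObjectOutputStream o = new ObjectOutputStream(b);
        o.writeObject(obj);
        o.flush();
        return b.toByteArray();
    }

    // Counterpart used on the consuming side
    public static Object deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        ByteArrayInputStream b = new ByteArrayInputStream(bytes);
        ObjectInputStream o = new ObjectInputStream(b);
        return o.readObject();
    }
}
```

Anything you round-trip this way must implement java.io.Serializable.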
pushkar priyadarshi priyadarshi.push...@gmail.com 5/20/2014 5:11
PM
You can use kafka-list-topic.sh to find out if the leader for a particular
topic is available. A -1 in the leader column might indicate trouble.
On Fri, Apr 25, 2014 at 6:34 AM, Guozhang Wang wangg...@gmail.com wrote:
Could you double check if the topic LOGFILE04 is already created on the
servers?
Was trying to understand: when we have subscribe, why is poll a separate
API? Why can't we pass a callback in subscribe itself?
On Mon, Apr 7, 2014 at 9:51 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
Hi,
I'm looking for people to review the new consumers APIs. Patch is posted at
I have been using the one from here:
https://github.com/whisklabs/puppet-kafka
but had to fix a few small problems; for example, when it starts Kafka as
an upstart service it does not provide a log path, so the Kafka logs never
appear, since as a service they don't have a default terminal.
Thanks for sharing. Will start
I don't think there is any direct high-level API equivalent to this. Every
time you read messages using the high-level API, your offset gets synced in
ZooKeeper. auto.offset.reset is for cases where the last read offset has,
for example, been purged, and rather than getting an exception you want to
just fall back to
What is the most appropriate design for using the Kafka producer from a
performance viewpoint? I had a few in mind.
1. Since a single Kafka producer object has synchronization, using a single
producer object from multiple threads might not be efficient, so one way
would be to use multiple Kafka producer
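The pool idea in point 1 can be sketched generically (a minimal sketch; ProducerPool is my own name, and the type parameter stands in for the Kafka producer class):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin pool: each caller grabs the next instance, spreading lock
// contention across several producer objects instead of one.
public class ProducerPool<T> {
    private final T[] producers;
    private final AtomicInteger counter = new AtomicInteger(0);

    public ProducerPool(T[] producers) {
        this.producers = producers;
    }

    public T next() {
        // floorMod keeps the index non-negative even after int overflow
        int i = Math.floorMod(counter.getAndIncrement(), producers.length);
        return producers[i];
    }
}
```

Whether this actually helps depends on where the bottleneck is; it is only worth it if profiling shows contention on the single producer.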
Thanks Jason.
On Thu, Jan 2, 2014 at 7:04 PM, Jason Rosenberg j...@squareup.com wrote:
Hi Pushkar,
We've been using zk 3.4.5 for several months now, without any
problems, in production.
Jason
On Thu, Jan 2, 2014 at 1:15 AM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote:
Hi
Hi,
I am starting a fresh deployment of Kafka + ZooKeeper. Looking at the
ZooKeeper releases, I find 3.4.5 old and stable enough. Has anyone used it
before in production?
The Kafka ops wiki page says the LinkedIn deployment still uses 3.3.4. Any
specific reason for that?
Thanks And Regards,
Pushkar
1. When you start producing: at this time, if any of your supplied brokers
is alive, the system will continue to work.
2. Broker going down and coming up with a new IP: the producer API
refreshes metadata information on failures (configurable), so producers
should be able to detect new brokers.
But I don't think it's
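The refresh behaviour in point 2 is controlled by old-producer properties like these (a sketch; the values shown are, to my knowledge, the 0.8 defaults):

```properties
# How often to refresh metadata proactively, in addition to on-failure refreshes (ms)
topic.metadata.refresh.interval.ms=600000
# Backoff before refreshing metadata and retrying after a failed send (ms)
retry.backoff.ms=100
```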
:33 PM, pushkar priyadarshi wrote:
While doing the dev setup as described in
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
I'm getting the following build errors:
immutable is already defined as class immutable Annotations_2.9+.scala
/KafkaEclipse/core/src/main/scala/kafka/utils line 38 Scala Problem
threadsafe is
I see many tools mentioned for perf testing here:
https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing
Of all these, which already exist in the 0.8 release?
E.g., I was not able to find jmx-dump.sh, the R script, etc. anywhere.
On Wed, Dec 18, 2013 at 11:01 AM, pushkar priyadarshi
You can try setting a higher value for message.send.max.retries in the
producer config.
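As a sketch, the relevant producer properties look like this (values are illustrative, not recommendations):

```properties
# Number of times to retry a failed send (the default is 3)
message.send.max.retries=10
# -1 waits for acks from all in-sync replicas before a send counts as successful
request.required.acks=-1
```

With acks left at the default, a send can be reported successful before every replica has the message, which is one common source of apparent data loss.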
Regards,
Pushkar
On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal
hanish.bansal.agar...@gmail.com wrote:
Hi All,
We have a Kafka cluster of 2 nodes (using the 0.8.0 final release),
Replication Factor: 2,
with message.send.max.retries configured to 10 (the default value for this
is 3).
But we are still facing data loss.
On Wed, Dec 18, 2013 at 12:44 PM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote:
You can try setting a higher value for message.send.max.retries
, 2013 at 12:16 AM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote:
I am not able to find run-simulator.sh in 0.8 even after building perf. If
this tool has been deprecated, what other alternatives are available now
for perf testing?
Regards,
Pushkar
thanks Jun.
On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao jun...@gmail.com wrote:
You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.
Thanks,
Jun
On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi
priyadarshi.push...@gmail.com wrote: