Hello Everyone,
Thank you for the help!
Preface: I've created producers/consumers before and they have worked. I
have also made consumers/producers using Java programs, but they have all
been local.
1) I have a ZooKeeper/Kafka server running on an EC2 instance called A
2) I started the
Two things:
1. The OOM happened on the consumer, right? So the memory that matters
is the RAM on the consumer machine, not on the Kafka cluster nodes.
2. If the consumers belong to the same consumer group, each will
consume a subset of the partitions and will only need to allocate
memory for
Thanks Natty.
Is there any config which I need to change on the client side as well?
Also, currently I am trying with only 1 consumer thread. Does the equation
change to (#partitions)*(fetchsize)*(#consumer_threads) in case I try to read
with 1000 threads from topic2 (1000 partitions)?
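A back-of-the-envelope sketch of that worst case, assuming roughly one
fetch.message.max.bytes-sized buffer per partition per fetcher thread (the
real accounting is more subtle, as noted elsewhere in this thread):

public class FetchMemoryEstimate {
    public static void main(String[] args) {
        // Assumption: each fetcher thread keeps ~one fetch-sized buffer per partition it owns.
        long partitions = 1000;            // topic2 in the question above
        long fetchBytes = 1024L * 1024L;   // 1 MB, the default fetch.message.max.bytes
        long fetcherThreads = 1;           // a single consumer thread
        long worstCase = partitions * fetchBytes * fetcherThreads;
        System.out.println("worst-case fetch buffers: ~" + worstCase / (1024 * 1024) + " MB");
    }
}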
Thanks Jun. I don't see any error code, and the fetch size is larger than
the largest single message. Actually, when I call
response.messageSet(topic, partition).toBuffer.size the value is the number
of messages I've produced to Kafka.
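For comparison, a minimal sketch of the 0.8 low-level (Simple) consumer fetch
path and where the per-partition error code shows up; the broker host, topic,
offset and fetch size below are placeholders rather than values from this thread:

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.common.ErrorMapping;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class FetchCheck {
    public static void main(String[] args) {
        String topic = "topic2";   // placeholder topic
        int partition = 0;
        SimpleConsumer consumer =
            new SimpleConsumer("broker-0", 9092, 100000, 64 * 1024, "fetch-check");
        FetchRequest req = new FetchRequestBuilder()
            .clientId("fetch-check")
            .addFetch(topic, partition, 0L, 1024 * 1024)   // offset 0, 1 MB fetch size
            .build();
        FetchResponse response = consumer.fetch(req);
        if (response.hasError()) {
            short code = response.errorCode(topic, partition);
            System.out.println("fetch error code: " + code
                + (code == ErrorMapping.OffsetOutOfRangeCode() ? " (offset out of range)" : ""));
        } else {
            int count = 0;
            for (MessageAndOffset messageAndOffset : response.messageSet(topic, partition)) {
                count++;   // count the messages returned by this single fetch
            }
            System.out.println("messages in this fetch: " + count);
        }
        consumer.close();
    }
}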
On Tue Jan 20 2015 at 12:31:53 AM Jun Rao
Yonghui, which version of Kafka are you using? And does your cluster only
have one (broker-0) server?
Guozhang
On Sat, Jan 17, 2015 at 11:53 PM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Hi,
our Kafka cluster shut down automatically today; here is the log.
I can't find any error log.
This is the second candidate for release of Apache Kafka 0.8.2.0. There have
been some changes since the 0.8.2 beta release, especially in the new Java
producer API and JMX MBean names. It would be great if people could test this
out thoroughly.
Release Notes for the 0.8.2.0 release
Hi Su,
How exactly did you start the Kafka server on instance A? Are you sure
the services on it are bound to a non-localhost IP? What does the
following command return from instance B:
telnet public.ip.of.A 9092
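If telnet is not installed on B, a rough Java equivalent of the same
reachability check (same placeholder hostname as above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Fails with an exception if A's broker port is unreachable from B.
            socket.connect(new InetSocketAddress("public.ip.of.A", 9092), 5000);
            System.out.println("port 9092 on A is reachable from B");
        }
    }
}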
-Jaikiran
On Tuesday 20 January 2015 07:16 AM, Su She wrote:
Hello Everyone,
Hi,
I use this tool:
Consumer Offset Checker
Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag,
Owner for the specified set of Topics and Consumer Group
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
To find the consumer group, in zkCli.sh:
[zk:
Hi Suhas,
Without seeing the actual output of the stacktrace, I'd suspect that
spark-submit is doing some classpath magic that is covering some
dependencies you may not have included. Depending on your use case, it
might be easier to deal with this by just having maven output a pre-built
The fetch.message.max.bytes setting is actually a client-side configuration. With
regard to increasing the number of threads, I think the calculation may be
a little more subtle than what you're proposing, and frankly, it's unlikely
that your servers can handle allocating 200MB x 1000 threads = 200GB of
Dillian,
Currently we do not have a script tool to list / verify all the brokers
directly. The best practice is to check the /brokers/ids path in ZK.
This situation could be improved, though; could you file a JIRA for adding an
admin tool for listing / verifying online brokers?
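Until such a tool exists, a hedged sketch of that check done from code,
reading /brokers/ids with the plain ZooKeeper Java client (the ZK address is
a placeholder):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListBrokers {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
            List<String> ids = zk.getChildren("/brokers/ids", false);
            for (String id : ids) {
                // Each child node holds the broker's registration info (host, port, ...).
                byte[] data = zk.getData("/brokers/ids/" + id, false, null);
                System.out.println("broker " + id + " -> " + new String(data, "UTF-8"));
            }
        } finally {
            zk.close();
        }
    }
}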
Guozhang
On
There is a property you can set via bin/kafka-console-consumer.sh to
commit offsets to ZK; you can use bin/kafka-console-consumer.sh --help to
list all the properties.
Guozhang
On Mon, Jan 19, 2015 at 5:15 PM, Sa Li sal...@gmail.com wrote:
Guozhang,
Currently we are in the stage to
Sa,
Did your consumer ever commit offsets to Kafka? If not then no
corresponding ZK path will be created.
Guozhang
On Mon, Jan 19, 2015 at 3:58 PM, Sa Li sal...@gmail.com wrote:
Hi,
I use this tool:
Consumer Offset Checker
Displays the: Consumer Group, Topic, Partitions, Offset,
Vish,
I am assuming that by delay queue support you mean something like:
http://activemq.apache.org/delay-and-schedule-message-delivery.html
Kafka uses a client-pull based consumption model, i.e. the consumer
determines when to fetch the next message, after it has, for example, waited
for some time
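To illustrate the pull model, a purely hypothetical consumer-side delay
pattern (not a Kafka feature): it assumes the producer prepends an 8-byte
deliver-after timestamp (epoch millis) to each payload, and the 0.8
high-level consumer simply waits before processing; topic, group and ZK
address are placeholders:

import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class DelayedConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "delayed-group");              // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("delayed-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("delayed-topic").get(0).iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            // Assumed convention: first 8 bytes of the payload are the deliver-after timestamp.
            long deliverAfter = ByteBuffer.wrap(msg.message()).getLong();
            long wait = deliverAfter - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);   // the client decides when to move on
            // ... process the remaining bytes of the payload here ...
        }
    }
}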
Thanks a lot Natty.
I am using this Ruby gem on the client side with all the default config:
https://github.com/joekiller/jruby-kafka/blob/master/lib/jruby-kafka/group.rb
and the value of fetch.message.max.bytes is set to 1 MB.
Currently I only have 3 nodes set up in the Kafka cluster (with 8 GB RAM)
Hi Jaikiran,
Thanks for the reply!
1) I started the Kafka server on instance A by simply downloading
Kafka_2.10-0.8.2-beta.tgz from the Kafka website, and using the scripts
mentioned here: http://kafka.apache.org/documentation.html#introduction.
This is the same way I downloaded Kafka on B, except I
Hi,
As a former DBA, I hear you on backups :)
Technically, you could copy all log.dir files somewhere safe
occasionally. I'm pretty sure we don't guarantee the consistency or
safety of this copy. You could find yourself with a corrupt backup
by copying files that are either in the middle of
Please subscribe me.
Thanks for reporting the issues in RC1. I will prepare RC2 and start a new
vote.
Jun
On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao j...@confluent.io wrote:
This is the first candidate for release of Apache Kafka 0.8.2.0. There
have been some changes since the 0.8.2 beta release, especially in the
Hi Pranay,
I think the JIRA you're referencing is a bit orthogonal to the OOME that
you're experiencing. Based on the stacktrace, it looks like your OOME is
coming from a consumer request, which is attempting to allocate 200MB.
There was a thread (relatively recently) that discussed what I think
This is a good point, even though you mentioned that you also have latency
issues locally. I just migrated a 3-node test cluster from m3.large
instances to c4.xlarge instances (3-node ZK migrated from m3.medium to
c4.large) in an EC2 placement group (better network IO and more consistent
Hi All,
I am having an issue when using Kafka with librdkafka. I've changed
message.max.bytes to 2 MB in my server.properties config file, which is the
size of my message. When I run the command line ./rdkafka_performance -C -t
test -p 0 -b computer49:9092, after consuming some messages the
Did you get any error code in the response? Also, make sure fetchSize is
larger than the largest single message.
Thanks,
Jun
On Sun, Jan 18, 2015 at 4:54 PM, Manu Zhang owenzhang1...@gmail.com wrote:
Hi all,
I'm using the Kafka low-level consumer API and find in the code below
Continuing this kafka-web-console thread, I followed this page:
http://mungeol-heo.blogspot.ca/2014/12/kafka-web-console.html
I ran the command:
play start -Dhttp.port=8080
It worked well for a while, but then I got this error:
at
Hi guys,
Ok, I’ve proved this and it was fine.
Thanks
On Jan 19, 2015, at 19:10, Joe Stein joe.st...@stealth.ly wrote:
If you increase the size of the messages for producing then you **MUST** also
change *replica.fetch.max.bytes* in the broker *server.properties*, otherwise
none of your
(duplicating the GitHub answer for reference)
Hi Eduardo,
the default maximum fetch size is 1 MB, which means your 2 MB messages
will not fit in the fetch request.
Try increasing it by appending -X fetch.message.max.bytes=400 to your
command line.
Regards,
Magnus
2015-01-19 17:52 GMT+01:00
If you increase the size of the messages for producing then you **MUST** also
change *replica.fetch.max.bytes* in the broker *server.properties*, otherwise
none of your replicas will be able to fetch from the leader and they will
all fall out of the ISR. You also then need to change your consumers
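For the consumer side of that change, a minimal sketch with the 0.8
high-level consumer; the addresses, group id, and the 2 MB value are
placeholders chosen to match whatever message.max.bytes the brokers now allow:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class LargeMessageConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "large-message-group");        // placeholder
        // Must be at least as large as the biggest message the brokers will accept.
        props.put("fetch.message.max.bytes", "2097152");      // 2 MB
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        connector.shutdown();
    }
}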
Hello, Jun
I ran this command:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 100 3000 -1 acks=-1 bootstrap.servers=10.100.98.100:9092,
10.100.98.101:9092,10.100.98.102:9092