Hello Apache Kafka community,
In Kafka 0.8.1.1, are Kafka metrics updated/tracked/marked by the simple
consumer implementation, or only by the high-level one?
Kind regards,
Stevo Slavic.
I was going to make a separate email thread for this question but this
thread's topic echoes what my own would have been.
How can I query a broker or zookeeper for the number of partitions in a
given topic? I'm trying to write a custom partitioner that sends a message
to every partition within a
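For what it's worth, one common way to answer the partition-count question in 0.8.x is to send a TopicMetadataRequest to any live broker via the SimpleConsumer. A rough Java sketch (broker host/port and topic name are placeholders, not from this thread):

```java
import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class PartitionCount {
    public static void main(String[] args) {
        // Any live broker can answer a metadata request for any topic.
        SimpleConsumer consumer =
            new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "partition-count-lookup");
        try {
            TopicMetadataRequest request =
                new TopicMetadataRequest(Collections.singletonList("my-topic"));
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata metadata : response.topicsMetadata()) {
                System.out.println(metadata.topic() + " has "
                    + metadata.partitionsMetadata().size() + " partitions");
            }
        } finally {
            consumer.close();
        }
    }
}
```

This is a sketch only; it needs a running 0.8.x broker to do anything.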
Sorry, I found the controller log. It shows entries like this:
[2015-02-16 11:10:03,094] DEBUG [Controller 0]: Removing replica 1 from ISR 2,0 for partition [wx_rtdc_PageViewData_bolt1,4]. (kafka.controller.KafkaController)
[2015-02-16 11:10:03,096] WARN [Controller 0]: Cannot remove replica 1
Alex,
You can get the partition from MessageAndMetadata, as the partition is
exposed via a constructor parameter.
On Fri, Feb 27, 2015 at 2:12 PM, Alex Melville amelvi...@g.hmc.edu wrote:
Tao and Gaurav,
After looking through the source code in Kafka 0.8.2.0, I don't see any
partition() function on
That's fine with me; you can open a separate thread. But the original
question remains: when the ConsumerConnector connects to a topic, will the
KafkaStream have all of the partition information for that corresponding
topic? Please confirm.
Thanks
On Fri, Feb 27, 2015 at 11:20 AM,
I am writing a custom producer that needs to know information about the
topic it's about to produce to. In particular, it needs to know the number
of partitions on the topic. Is there some utility method that returns such
data? I am using Scala v2.9.2 and Kafka 0.8.2.0.
Alex
Tao and Gaurav,
After looking through the source code in Kafka 0.8.2.0, I don't see any
partition() function on the MessageAndMetadata object. Here's the class's
source:
package kafka.message
import kafka.serializer.Decoder
import kafka.utils.Utils
case class MessageAndMetadata[K, V](topic:
Thanks, Joe Stein, for the reply. I would be glad to get some references to
Java code for the same.
Secondly, would it be a good approach to send complete files, rather than
the URLs of the files, on the Kafka queue?
Thanks,
Udbhav Agarwal
-Original Message-
From: Joe Stein
Hi,
I know that Netflix might be talking about Kafka on AWS at the March meetup,
but I wanted to bring up the topic anyway.
I'm sure that some people are running Kafka in AWS. Is anyone running Kafka
within Docker in production? How does that work?
For both of these, how do you persist data?
The metadata fetch only happens/blocks the first time you call send. After
the metadata is retrieved, it is cached in memory; it will not block again.
So yes, there is a possibility it can block. Of course, if the cluster is
down and the metadata was never fetched, then every send can block.
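As a sketch, the new-producer (0.8.2) settings that govern this behavior look roughly like the following; the values shown are illustrative, not recommendations:

```
# Upper bound on how long send() may block waiting for the initial metadata fetch
metadata.fetch.timeout.ms=5000
# How often cached metadata is refreshed in the background
metadata.max.age.ms=300000
```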
metadata is
I was actually referring to the metadata fetch. Sorry, I should have been
more descriptive. I know we can set metadata.fetch.timeout.ms
a lot lower, but it still blocks if it can't get the
metadata. And I believe the metadata fetch happens every time we call
send()?
It can be done, sure. We built a prototype a while back
https://github.com/stealthly/f2k though I can't say I have bumped into a
use case where the tradeoffs worked out. Chunking the file across
messages and reconstructing it is going to be an overhead, or you're going
to block on waiting to
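Purely as an illustration of the chunk-and-reassemble bookkeeping described above (the chunk size used by the caller is an arbitrary assumption; in practice it must stay below the broker's message.max.bytes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FileChunker {
    // Split a file's bytes into fixed-size chunks, each small enough
    // to fit in a single Kafka message.
    public static List<byte[]> chunk(byte[] file, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < file.length; offset += chunkSize) {
            int end = Math.min(file.length, offset + chunkSize);
            chunks.add(Arrays.copyOfRange(file, offset, end));
        }
        return chunks;
    }

    // Reassemble chunks (consumed in order) back into the original bytes.
    public static byte[] reassemble(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }
}
```

Note this sketch assumes chunks arrive in order from a single partition; out-of-order or multi-partition delivery would need sequence numbers in each message.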
Hi,
Can we send PDF, image, etc. files in a Kafka queue? Not the URL containing
the address of the PDF etc. files, but the actual files. I want to send PDF
etc. files from a Kafka producer to a Kafka consumer, where I want to put
the files into HDFS.
Thanks,
Udbhav Agarwal
That may be enough. What's the RequestQueueSize and RequestQueueTimeMs?
Thanks,
Jun
On Wed, Feb 25, 2015 at 10:24 PM, Zakee kzak...@netzero.net wrote:
Well, currently I have configured 14 threads for both I/O and network. Do
you think we should consider more?
Thanks
-Zakee
On Wed, Feb 25,
Kafka can accept any type of data; you just pass a byte[] to the producer
and get a byte[] back from the consumer. How you interpret it is entirely
up to your application.
But it does have limits on message size (see the message.max.bytes and
replica.fetch.max.bytes settings for brokers) and
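As a reference sketch, the broker-side size limits mentioned above look like this (the values shown are roughly the 0.8.x defaults, quoted from memory):

```
# Largest message the broker will accept (default ~1 MB)
message.max.bytes=1000000
# Must be >= message.max.bytes so followers can replicate the largest message
replica.fetch.max.bytes=1048576
```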
Akshat,
The producer batch.size is in bytes, and if your messages' average size is
310 bytes and your current number of messages per batch is 46, you
are getting close to the max batch size of 16384 bytes. Did you try
increasing the producer batch.size?
-Harsha
On Thu, Feb 26, 2015,
Thanks, Steven. We changed the code to ensure that the producer is only
created once and reused, so that the metadata fetch doesn't happen on every
send() call.
On 26 February 2015 at 12:44, Steven Wu stevenz...@gmail.com wrote:
metadata fetch only happens/blocks for the first time you call send.
Hi,
I am using the new Producer API in Kafka 0.8.2. I am writing messages to
Kafka that are ~310 bytes long with the same partition key to one single .
I'm mostly using the default Producer config, which sets the max batch size
to 16,384. However, looking at the JMX stats on the broker side, I
Oh, that makes a lot more sense! I assumed the batch size was in
terms of the number of messages, not the number of bytes, because it was so
small. What would be a reasonable value to use? Would 1-2 MB be too large
and bursty?
On Thu, Feb 26, 2015 at 10:07 AM, Harsha ka...@harsha.io wrote:
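For reference, a hedged sketch of the relevant new-producer (0.8.2) batching settings; the 1 MB figure below is only an illustrative starting point for the experiment discussed above, not a recommendation:

```
# Upper bound, in bytes, on how much the producer batches per partition
batch.size=1048576
# How long the producer waits to fill a batch before sending anyway
linger.ms=5
```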
Hi,
With 0.8.2 out I thought it might be useful for everyone to see which
version(s) of Kafka people are using.
Here's a quick poll:
http://blog.sematext.com/2015/02/23/kafka-poll-version-you-use/
We'll publish the results next week.
Thanks,
Otis
--
Monitoring * Alerting * Anomaly Detection *
Hello
After retrieving a Kafka stream or Kafka message, how do I get the
corresponding partition number to which it belongs? I am using Kafka
version 0.8.1.
More specifically, the kafka.consumer.KafkaStream and
kafka.message.MessageAndMetadata classes do not provide an API to retrieve
the partition number.
Right, you need to look into why the restarted broker is not synced up.
Any errors in the controller and state-change logs? Also, what version of
Kafka are you on?
Kafka are you on?
Thanks,
Jun
On Wed, Feb 25, 2015 at 5:46 PM, ZhuGe t...@outlook.com wrote:
we did not have this setting in the property file, so
The partition api is exposed to the consumer in 0.8.2.
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal gaurav130...@gmail.com
wrote:
After retrieving a kafka stream or kafka message how to get the
corresponding partition number to which it belongs ? I am using kafka
version
Hi,
Can you please let me know if we can send a file, such as a PDF, JPG, or
JPEG, as the content of a message that we send via Kafka?
Thanks,
Siddharth Ubale
Hi,
I am using kafka_2.9.2-0.8.1.1.
I intended to run a performance test using the run-simulator.sh script.
I started ZooKeeper and the Kafka server, and finally ran the command below:
/run-simulator.sh -kafkaServer=localhost -numTopic=10
-reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40
Hi Jun,
Kafka generates too many KPIs for me to monitor them all; so far I have
been monitoring a filtered list that I can understand.
Which keys are you talking about? Currently I am not monitoring any of the
below keys, but I will add only those that are useful.
All,
There exists code in the sample console consumer that ships with Kafka
that will remove consumer group IDs from ZooKeeper, for the case where
it's just a short-lived session using an auto-generated group ID. It's a
bit of a hack, but it works (keeps the number of group IDs from
This is the second candidate for the release of Apache Kafka 0.8.2.1. This
fixes 4 critical issues in 0.8.2.0.
Release Notes for the 0.8.2.1 release
https://people.apache.org/~junrao/kafka-0.8.2.1-candidate2/RELEASE_NOTES.html
*** Please download, test and vote by Monday, Mar 2, 3pm PT
Kafka's KEYS
We have one topic with 4 partitions, but sometimes we only get metadata for
2 partitions. Has anyone seen this kind of situation before? If some
partition has no leader at that moment, will it cause this problem? How can
a partition come to have no leader? If 6 brokers have some partitions of the
Sometimes an ephemeral ZK path does not go away after a consumer is
closed. You can check the log for each rebalance to see if it complains
about conflicting data at a ZK path. If all the complaints point to the
same consumer, bounce that consumer. Otherwise, you can try to remove the
ZK path
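For example, the stale path can be inspected and removed with the zookeeper-shell tool that ships with Kafka. The group name below is hypothetical, and the ZK connect string is assumed; only remove the path once every consumer in the group is verifiably stopped:

```
# Inspect the suspect group's ephemeral owner registrations
echo "ls /consumers/my-group/ids" | bin/zookeeper-shell.sh localhost:2181

# Recursively remove the stale group path (destructive; stop the group first)
echo "rmr /consumers/my-group" | bin/zookeeper-shell.sh localhost:2181
```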
Just to give you some more debugging context: we noticed that the consumers
path becomes empty after all the JVMs have exited because of this error.
So, when we restart, there are no visible entries in ZK.
On Thu, Feb 26, 2015 at 6:04 PM, Ashwin Jayaprakash
ashwin.jayaprak...@gmail.com wrote:
Hello, we have a set of JVMs that consume messages from Kafka topics. Each
JVM creates 4 ConsumerConnectors that are used by 4 separate threads.
These JVMs also create and use Curator's PathChildrenCache
to watch and keep a sub-tree of ZooKeeper in sync with other JVMs. This
Thanks a bunch for the detailed response and tips!! Looks like I have a
couple of knobs, one of which should work. I will be doing some runs to
figure out what works best for my use case.
Thanks again.
On Thu, Feb 26, 2015 at 9:03 AM, Jeff Wartes jwar...@whitepages.com wrote:
A note on
Gaurav,
You can get the partition number a message belongs to via
MessageAndMetadata.partition().
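For illustration, a minimal 0.8.2 high-level consumer sketch that reads the partition off each message; the topic, group ID, and ZooKeeper address are placeholder assumptions, and this needs a live cluster to run:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class PartitionAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "partition-demo");
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // One stream for the topic; each message carries its partition and offset.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        for (MessageAndMetadata<byte[], byte[]> mm : streams.get("my-topic").get(0)) {
            System.out.println("partition=" + mm.partition() + " offset=" + mm.offset());
        }
    }
}
```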
On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao j...@confluent.io wrote:
The partition api is exposed to the consumer in 0.8.2.
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal