Hi,
I am having a very difficult time trying to report Kafka 0.8 metrics to
Graphite. Nothing is listening on the port, and there is no data in Graphite. If
this method of Graphite reporting is known not to work, is there an
alternative to jmxtrans to get data to Graphite?
I am using the deb file to install
makes it hard to reason about what type of data is being sent to Kafka and
also makes it hard to share an implementation of the serializer. For
example, to support Avro, the serialization logic could be quite involved
since it might need to register the Avro schema in some remote registry and
I checked the max lag and it was 0.
I grepped the state-change logs for topic-partition [org.nginx,32] and
extracted some entries related to broker 24 and broker 29 (the controller
switched from broker 24 to 29):
- on broker 29 (current controller):
[2014-11-22 06:20:20,377] TRACE Controller 29 epoch 7
Hi all,
I'm using the kafka 0.8.0 release now. I often encounter an
OffsetOutOfRangeException when consuming messages via the simple consumer API,
but I'm sure that the consuming offset is smaller than the latest offset
obtained from an OffsetRequest.
Can it be caused by new messages being written to
Hi,
I have written a basic program to send String or byte[] messages from
producer to consumer using Java and Kafka 0.8.1.
It works perfectly, but I want to send a serialized object (a Java bean).
Is it possible to send a serialized object from producer to consumer?
If possible, please
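One common approach (a sketch only, not the only way) is to make the bean implement java.io.Serializable and convert it to a byte[] with ObjectOutputStream before handing it to the producer; the Person class below is a made-up example, and the same toBytes/fromBytes pair would run on the producer and consumer sides respectively:

```java
import java.io.*;

public class BeanSerDe {
    // Hypothetical bean; any Serializable class works the same way.
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Producer side: turn the bean into the byte[] the raw API expects.
    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Consumer side: rebuild the bean from the received bytes.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person("alice", 30);
        Person back = (Person) fromBytes(toBytes(p));
        System.out.println(back.name + ":" + back.age);
    }
}
```

Java serialization ties both sides to the same class definition; for cross-language or schema-evolving setups, a format like Avro or protobuf (discussed later in this thread) is usually preferred.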
Jmxtrans should connect to the jmxremote port.
Try running ps aux | grep kafka and check whether the process contains
-Dcom.sun.management.jmxremote.port.
If not, try editing kafka-server-start.sh to add export JMX_PORT=.
I was talking about the consumer config fetch.message.max.bytes:
https://kafka.apache.org/08/configuration.html
By default it is 1048576 bytes.
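Putting the settings from this exchange together (the 10485800 value comes from the message above; replica.fetch.max.bytes is an additional broker setting that must also be at least message.max.bytes for replication to keep up):

```properties
# consumer.properties -- max bytes fetched per partition per request
fetch.message.max.bytes=10485800

# server.properties (broker) -- largest message the broker will accept
message.max.bytes=10485800
replica.fetch.max.bytes=10485800
```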
On Mon, Dec 1, 2014, at 08:09 PM, Palur Sandeep wrote:
Yeah I did. I made the following changes to server.config:
message.max.bytes=10485800
Maybe also set:
-Dcom.sun.management.jmxremote.port=
?
On Dec 2, 2014, at 02:59, David Montgomery davidmontgom...@gmail.com wrote:
Hi,
I am having a very difficult time trying to report Kafka 0.8 metrics to
Graphite. Nothing is listening on the port, and there is no data in Graphite. If
Hi all,
I'm using the kafka 0.8.0 release now. I often encounter an
OffsetOutOfRangeException when consuming messages via the simple consumer API,
but I'm sure that the consuming offset is smaller than the latest offset
obtained from an OffsetRequest.
Can it be caused by new messages being written to
Thank you!
Chico
Hello,
In a multi-broker Kafka 0.8.1.1 setup, one broker crashed. I
restarted it after a noticeable delay, so it started catching up to the
leader very intensively. During the replication, I see that the disk load
on the ZK leader bursts abnormally, resulting in ZK performance
degradation.
You can check the latest/earliest offsets of a given topic by running
GetOffsetShell.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-GetOffsetShell
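A sketch of the invocation (this needs a running broker, so it is shown as a command fragment only; localhost:9092 and my-topic are placeholders):

```shell
# Latest offset per partition (--time -1) and earliest (--time -2).
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -1
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -2
```

Each line of output has the form topic:partition:offset, so comparing the two runs bounds the valid offset range for a partition.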
On Tue, Dec 2, 2014 at 2:05 PM, yuanjia8947 yuanjia8...@163.com wrote:
Hi all,
I'm using kafka 0.8.0 release now. And
Joel,
Thanks for the feedback.
Yes, the raw bytes interface is simpler than the generic API. However, it
just pushes the complexity of dealing with the objects to the application.
We also thought about the layered approach. However, this may confuse the
users since there is no single entry point
Re: pushing complexity of dealing with objects: we're talking about
just a call to a serialize method to convert the object to a byte
array right? Or is there more to it? (To me) that seems less
cumbersome than having to interact with parameterized types. Actually,
can you explain more clearly
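The two API shapes being debated can be illustrated roughly as follows (the interfaces here are simplified stand-ins invented for this sketch, not the actual Kafka producer API):

```java
import java.nio.charset.StandardCharsets;

public class SerializerShapes {
    // Shape 1: raw-bytes API -- the caller serializes explicitly.
    interface RawProducer { void send(String topic, byte[] value); }

    // Shape 2: parameterized API -- a serializer is plugged in once.
    interface Serializer<T> { byte[] serialize(T value); }

    static class TypedProducer<T> {
        private final RawProducer raw;
        private final Serializer<T> serializer;
        TypedProducer(RawProducer raw, Serializer<T> serializer) {
            this.raw = raw; this.serializer = serializer;
        }
        void send(String topic, T value) {
            raw.send(topic, serializer.serialize(value)); // "just a call to serialize"
        }
    }

    public static void main(String[] args) {
        RawProducer raw = (topic, value) ->
            System.out.println(topic + ":" + value.length + " bytes");

        // Raw-bytes style: serialization happens at the call site.
        raw.send("events", "hello".getBytes(StandardCharsets.UTF_8));

        // Typed style: serialization is hidden behind the configured serializer.
        TypedProducer<String> typed =
            new TypedProducer<>(raw, s -> s.getBytes(StandardCharsets.UTF_8));
        typed.send("events", "hello");
    }
}
```

Both paths produce the same bytes; the disagreement in the thread is about whether the serialize call belongs at the call site or behind a configured, organization-wide plugin.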
It's not clear to me from your initial email what exactly can't be done
with the raw accept-bytes API. Serialization libraries should be shareable
outside of kafka. I honestly like the simplicity of the raw bytes API and
feel like serialization should just remain outside of the base Kafka APIs.
Hello, while we do not currently use the Java API, we are writing a C#/.net
client (https://github.com/ntent-ad/kafka4net). FWIW, we also chose to keep the
API simpler accepting just byte arrays. We did not want to impose even a simple
interface onto users of the library, feeling that users
Hey Joel, you are right, we discussed this, but I think we didn't think
about it as deeply as we should have. I think our take was strongly shaped
by having a wrapper API at LinkedIn that DOES do the serialization
transparently so I think you are thinking of the producer as just an
implementation
Yuanjia,
I am not sure that pagecache can be the cause of this, could you attach
your full stack trace and use the GetOffset tool Manikumar mentioned to
make sure the offset does exist in the broker?
Guozhang
On Tue, Dec 2, 2014 at 7:50 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
You can
Ramesh,
Which producer are you using in 0.8.1? kafka.api.producer or
org.apache.kafka.clients.producer?
Guozhang
On Tue, Dec 2, 2014 at 2:12 AM, Ramesh K krame...@gmail.com wrote:
Hi,
I have written a basic program to send String or byte[] messages from
producer to consumer using Java
I'm not sure I agree with this. I feel that the need to have a consistent, well
documented, shared serialization approach at the organization level is
important no matter what. How you structure the API doesn't change that or make
it any easier or automatic than before. It is still possible for
Thanks for the follow-up Jay. I still don't quite see the issue here
but maybe I just need to process this a bit more. To me, the way to package
up the best practice and plug it in seems to be to expose a simple
low-level API and give people the option to plug in a (possibly
shared) standard serializer in
Hi guys,
I'm interested in the new Consumer API.
http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
I have a couple of questions.
1. In this doc it says the Kafka consumer will automatically do load balancing.
Is it based on throughput, or the same as what we have now, balancing the
Joel, Rajiv, Thunder,
The issue with a separate ser/deser library is that if it's not part of the
client API, (1) users may not use it or (2) different users may use it in
different ways. For example, you can imagine that two Avro implementations
have different ways of instantiation (since it's
1. In this doc it says the Kafka consumer will automatically do load balancing.
Is it based on throughput, or the same as what we have now, balancing the
cardinality among all consumers in the same ConsumerGroup? In a real case
different partitions could have different peak times.
Load balancing is still based on
Hi,
I have a light-load scenario, but I am starting off with Kafka because I
like how the messages are durable, etc.
If I have 4-5 topics, do I need to create the same number of consumers? I
am assuming each consumer runs in a long-running JVM process, correct?
Are there any consumer examples
Why can't the organization package the Avro implementation with a kafka
client and distribute that library though? The risk of different users
supplying the kafka client with different serializer/deserializer
implementations still exists.
On Tue, Dec 2, 2014 at 12:11 PM, Jun Rao jun...@gmail.com
Yeah totally, far from preventing it, making it easy to specify/encourage a
custom serializer across your org is exactly the kind of thing I was hoping
to make work well. If there is a config that gives the serializer you can
just default this to what you want people to use as some kind of
The issue with a separate ser/deser library is that if it's not part of the
client API, (1) users may not use it or (2) different users may use it in
different ways. For example, you can imagine that two Avro implementations
have different ways of instantiation (since it's not enforced by the
It also makes it possible to do validation on the server
side or make other tools that inspect or display messages (e.g. the various
command line tools) and do this in an easily pluggable way across tools.
I agree that it's valuable to have a standard way to plugin serialization
across many
Is there an easy way to reproduce the issues that you saw?
Thanks,
Jun
On Mon, Dec 1, 2014 at 6:31 AM, Karol Nowak gryw...@gmail.com wrote:
Hi,
I observed some error messages and exceptions while running partition
reassignment on a kafka 0.8.1.1 cluster. Being fairly new to this system I'm
Did you run the --verify option (
http://kafka.apache.org/documentation.html#basic_ops_restarting) to check
if the reassignment process completes? Also, what version of Kafka are you
using?
Thanks,
Jun
On Mon, Dec 1, 2014 at 7:16 PM, Andrew Jorgensen
ajorgen...@twitter.com.invalid wrote:
I
Thanks Neha, another question: if offsets are stored under group.id,
does it mean that in one group there should be at most one subscriber for
each topic partition?
Best,
Siyuan
On Tue, Dec 2, 2014 at 12:55 PM, Neha Narkhede neha.narkh...@gmail.com
wrote:
1. In this doc it says kafka consumer
I am using kafka 0.8.
Yes, I did run --verify, but got some weird output from it I had never seen
before that looked something like:
Status of partition reassignment:
ERROR: Assigned replicas (5,2) don't match the list of replicas for
reassignment (5) for partition [topic-1,248]
ERROR: Assigned
For (1), yes, but it's easier to make a config change than a code change.
If you are using a third-party library, you may not be able to make any
code changes.
For (2), it's just that if most consumers always do deserialization after
getting the raw bytes, perhaps it would be better to have these
Rajiv,
Yes, that's possible within an organization. However, if you want to share
that implementation with other organizations, they will have to make code
changes, instead of just a config change.
Thanks,
Jun
On Tue, Dec 2, 2014 at 1:06 PM, Rajiv Kurian ra...@signalfuse.com wrote:
Why can't
Has the message been successfully produced to the broker? You might need to
change producer settings as well; otherwise the message could have been
dropped.
--Jiangjie (Becket) Qin
On 12/1/14, 8:09 PM, Palur Sandeep psand...@hawk.iit.edu wrote:
Yeah I did. I made the following changes to server.config:
The offsets are keyed on (group, topic, partition), so if you have more than
one owner per partition, they will overwrite each other's offsets and lead
to incorrect state.
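The clobbering can be sketched with a plain map keyed the same way (a toy model for illustration, not Kafka's actual offset storage; the key encoding is made up):

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetKeyDemo {
    public static void main(String[] args) {
        // Offsets keyed on (group, topic, partition) -- one slot per key.
        Map<String, Long> offsets = new HashMap<>();
        String key = "my-group/my-topic/0"; // hypothetical key encoding

        offsets.put(key, 100L); // consumer A commits offset 100
        offsets.put(key, 42L);  // consumer B, same group+partition, commits 42

        // A's commit is gone: the last writer wins.
        System.out.println(offsets.get(key));
    }
}
```

With one owner per partition per group, each key has exactly one writer and this race cannot happen.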
On Tue, Dec 2, 2014 at 2:32 PM, hsy...@gmail.com hsy...@gmail.com wrote:
Thanks Neha, another question, so if offsets are stored
Will doing one broker at
a time by bringing the broker down, updating the code, and restarting it be
sufficient?
Yes this should work for the upgrade.
On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com wrote:
Hi,
We have a kafka cluster that runs Kafka 0.8.1 that we are considering
Yu,
Are you enabling message compression in 0.8.1 now? If you have already, then
upgrading to 0.8.2 will not change its behavior.
Guozhang
On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang yuyan...@gmail.com wrote:
Hi Neha,
Thanks for the reply! We know that Kafka 0.8.2 will be released soon. If
we
Rajiv,
That's probably a very special use case. Note that even in the new consumer
api w/o the generics, the client is only going to get the byte array back.
So, you won't be able to take advantage of reusing the ByteBuffer in the
underlying responses.
Thanks,
Jun
On Tue, Dec 2, 2014 at 5:26
Yeah I am kind of sad about that :(. I just mentioned it to show that there
are material use cases for applications where you expose the underlying
ByteBuffer (I know we were talking about byte arrays) instead of
serializing/deserializing objects - performance is a big one.
On Tue, Dec 2, 2014
Hi Guozhang,
My Kafka runs in a production environment where a large number of messages
are produced and consumed, so it is not easy to get the accurate offset
through the GetOffset tool when an OffsetOutOfRangeException happens. But in
my application, I have code comparing the consuming offset with the latest
Hello Everyone,
I would very much appreciate it if someone could provide me a real-world
example where it is more convenient to implement the serializers instead
of just making sure to provide byte arrays.
The code we came up with explicitly avoids the serializer API. I think
it is common
Hi,
You can make use of this documentation aimed at JMX and monitoring:
https://sematext.atlassian.net/wiki/display/PUBSPM/SPM+Monitor+-+Standalone
There is a section about Kafka and the information is not SPM-specific.
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log
In our case, we use protocol buffers for all messages, and these have
simple serialization/deserialization built into the protobuf libraries
(e.g. MyProtobufMessage.toByteArray()). Also, we often produce/consume
messages without conversion to/from protobuf Objects (e.g. in cases where
we are just
fwiw, we wrap the kafka server in our java service container framework.
This allows us to use the default GraphiteReporter class that is part of
the yammer metrics library (which is used by kafka directly). So it works
seamlessly. (We've since changed our use of GraphiteReporter to instead
send
Hi,
Thanks for the help. I found the issue. I was appending to the bottom
when I should have placed the line below at the top of the file:
echo 'KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false' |