Jason, Kyle,
Added comments to the jira. Let me know if they make sense. The dot
convention is a bit tricky to follow since we allow dots in topic and
clientId, etc. Also, we probably don't want to use a convention too
specific to Graphite since other systems may have other conventions.
Thanks,
Hi, Everyone,
We identified a blocker issue related to Yammer jmx reporter (
https://issues.apache.org/jira/browse/KAFKA-1902). We are addressing this
issue right now. Once it's resolved, we will roll out a new RC for voting.
Thanks,
Jun
On Mon, Jan 26, 2015 at 5:14 PM, Otis Gospodnetic
I am using 0.8.1. The source is here:
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala
Here is the definition of disconnect():
private def disconnect() = {
  if(blockingChannel.isConnected) {
    debug("Disconnecting from " + host + ":" + port)
    blockingChannel.disconnect()
  }
}
Hi,
I am new to this forum and I am not sure this is the correct mailing list for
questions. If not, please let me know and I will stop.
I am looking for help resolving a replication issue. Replication stopped working
a while back.
Kafka environment: Kafka 0.8.1.1, CentOS 6.5, 7 nodes
Hi Jason - can you describe how you verify that the metrics are not
coming through to the metrics registry? Looking at the metrics code
it seems that the mbeans are registered by the yammer jmx
reporter only after being added to the metrics registry.
Thanks,
Joel
On Tue, Jan 27, 2015 at
Ok,
It looks like the yammer MetricName is not being created correctly for the
sub metrics that include a topic. E.g. a metric with an mbeanName like:
kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=mytopic
appears to be malformed. A yammer MetricName has 4 fields that are
Which version of the broker are you using?
On Mon, Jan 26, 2015 at 10:27:14PM -0800, Sumit Rangwala wrote:
While running Kafka in production I found an issue where a topic wasn't
getting created even with auto topic creation enabled. I then went ahead and
created the topic manually (from the command
Is it actually getting double-counted? I tried reproducing this
locally but the BrokerTopicMetrics.Count lines up with the sum of the
PerTopic.Counts for various metrics.
On Tue, Jan 27, 2015 at 03:29:37AM -0500, Jason Rosenberg wrote:
Ok,
It looks like the yammer MetricName is not being
I'm a new user/admin to Kafka. I'm running a 3-node ZK ensemble and 6 brokers on
AWS.
The performance I'm seeing is shockingly bad. I need some advice!
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test2 5000 100 -1 acks=1 bootstrap.servers=5myloadbalancer:9092
Remember multiple people have reported this issue. Per topic metrics no
longer appear in graphite (or in any system modeled after the yammer
GraphiteReporter). They are not being seen as unique.
While these metrics are registered in the registry as separate ‘MetricName’
instances (varying only by
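The collision described above can be sketched in a few lines. This is a hypothetical simplification, not the actual yammer GraphiteReporter code: if a Graphite-style reporter derives the metric path only from the MetricName's group, type, scope and name (ignoring the full mBeanName), then per-topic metrics whose topic tag lives only in the mBeanName all map to the same path, and later registrations overwrite earlier ones.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GraphiteNameCollision {
    // Hypothetical helper: build a Graphite-style dotted path from the four
    // MetricName fields a reporter typically uses, ignoring the mBeanName.
    static String graphitePath(String group, String type, String scope, String name) {
        StringBuilder sb = new StringBuilder(group).append('.').append(type);
        if (scope != null) sb.append('.').append(scope);
        return sb.append('.').append(name).toString();
    }

    public static void main(String[] args) {
        Map<String, Long> seen = new LinkedHashMap<>();
        // Two per-topic metrics that differ only by a topic tag embedded in
        // the mBeanName: with a null scope they yield the same path, so the
        // second put overwrites the first.
        seen.put(graphitePath("kafka.server", "BrokerTopicMetrics", null, "BytesInPerSec"), 10L);
        seen.put(graphitePath("kafka.server", "BrokerTopicMetrics", null, "BytesInPerSec"), 20L);
        System.out.println(seen.size());
    }
}
```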
Hi,
I am new to Kafka. I have a use case in which my consumer can't use the auto
commit offset feature, so I have to go with manual commits. With the high-level
consumer I have the constraint that the consumer can commit only the current
offset, but in my case I will be committing some previous offset
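The use case above — committing an offset earlier than the last one consumed — can be sketched without any Kafka client API. This is a hypothetical, library-free illustration of the bookkeeping: with auto-commit disabled, the application keeps its own map of partition to next-offset-to-commit, which it can rewind before issuing the actual commit request.

```java
import java.util.HashMap;
import java.util.Map;

public class ManualOffsetTracker {
    // partition -> offset the application intends to commit next
    private final Map<Integer, Long> toCommit = new HashMap<>();

    // Record the offset reached by normal consumption.
    void record(int partition, long offset) {
        toCommit.put(partition, offset);
    }

    // Rewind the pending commit for a partition, e.g. after a processing
    // failure, so an earlier ("previous") offset gets committed instead.
    void rewind(int partition, long earlierOffset) {
        toCommit.merge(partition, earlierOffset, Math::min);
    }

    long pending(int partition) {
        return toCommit.getOrDefault(partition, 0L);
    }

    public static void main(String[] args) {
        ManualOffsetTracker t = new ManualOffsetTracker();
        t.record(0, 42L);
        t.rewind(0, 17L); // commit a previous offset instead of the latest
        System.out.println(t.pending(0));
    }
}
```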
Here is the relevant stack trace:
java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:127) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
~[na:1.7.0_55]
at
I am using 0.8.2-beta on the brokers and 0.8.1.1 for the clients (producer and
consumers). delete.topic.enable=true on all brokers. The replication factor
equals the number of brokers. I see this issue with just one single topic; all
other topics are fine (creation and deletion). Even after a day it is still
marked
Ewen, you are right, the patch was committed on Feb 20th last year. I will
leave a comment and close that ticket.
On Tue, Jan 27, 2015 at 7:24 PM, Ewen Cheslack-Postava e...@confluent.io
wrote:
This was fixed in commit 6ab9b1ecd8 for KAFKA-1235 and it looks like that
will only be included in
Steven,
You are right, I was wrong about the previous email: it will not set the
flag, the Sender thread will trigger Client.poll() but with Int.Max select
time, hence this should not be an issue. I am closing this discussion now.
Guozhang
On Mon, Jan 26, 2015 at 4:42 PM, Steven Wu
Do you still have the controller and state change logs from the time
you originally tried to delete the topic?
On Tue, Jan 27, 2015 at 03:11:48PM -0800, Sumit Rangwala wrote:
I am using 0.8.2-beta on the brokers and 0.8.1.1 for the clients (producer and
consumers). delete.topic.enable=true on all brokers.
I was running the performance command from a virtual box server, so that
seems like it was part of the problem. I'm getting better results running
this on a server on aws, but that's kind of expected. Can you look at
these results, and comment on the occasional warning I see? I appreciate
it!
This can happen as a result of unclean leader elections - there are
mbeans on the controller that give the unclean leader election rate -
or you can check the controller logs to determine if this happened.
On Tue, Jan 27, 2015 at 09:54:38PM -0800, Liju John wrote:
Hi,
I have a query regarding
Hi,
I have a query regarding partition offsets.
While running the Kafka cluster for some time, I noticed that the partition
offset keeps increasing, and at some point the offset decreased by some
number.
In what scenarios does the offset of a topic partition decrease?
The problem I am facing is
Hi Ajeet,
Which version of Kafka are you using? I remember the OffsetCommitRequest's
requestInfo should be a map of topicPartition -> OffsetAndMetadata, not
OffsetMetadataAndError.
Guozhang
On Tue, Jan 27, 2015 at 3:31 AM, ajeet singh ajeetpr.si...@gmail.com
wrote:
Hi ,
I am new to Kafka, I
Mahesh,
Could you reformat your attached logs?
Guozhang
On Mon, Jan 26, 2015 at 8:08 PM, Mahesh Kumar maheshsanka...@outlook.com
wrote:
Hi all, I am currently working on Logstash to MongoDB. Logstash's
input (producer) is Kafka and the output (consumer) is MongoDB. It worked
fine for
Rajiv,
Which version of Kafka are you using? I just checked SimpleConsumer's code,
and in its close() function, disconnect() is called, which will close the
socket.
Guozhang
On Mon, Jan 26, 2015 at 2:36 PM, Rajiv Kurian ra...@signalfuse.com wrote:
Meant to write a run loop.
void run() {
I have enabled yammer's ConsoleReporter and I am getting all the metrics
(including per-topic metrics).
Yammer's MetricName object implements equals/hashCode methods using
mBeanName. We are constructing a unique mBeanName for each metric, so we
are not missing/overwriting any metrics.
Current
Jason,
So, this sounds like a real issue. Perhaps we can fix it just by setting
the tag name as the scope. For example, for mbean kafka.server:type=
BrokerTopicMetrics,name=BytesInPerSec,topic=test, we can have
group: kafka.server
type: BrokerTopicMetrics
name: BytesInPerSec
scope: topic=test
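The mapping Jun proposes can be sketched as a small parser. This is a hypothetical illustration (not the actual patch for KAFKA-1902): split the mbeanName's domain off as the group, lift the `type` and `name` properties into their MetricName fields, and fold any remaining tags such as `topic=test` into the scope.

```java
public class MBeanToMetricName {
    public static void main(String[] args) {
        String mbean = "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=test";
        // The JMX domain before the colon becomes the group.
        String group = mbean.substring(0, mbean.indexOf(':'));
        String type = null, name = null;
        StringBuilder scope = new StringBuilder();
        for (String kv : mbean.substring(mbean.indexOf(':') + 1).split(",")) {
            String[] parts = kv.split("=", 2);
            switch (parts[0]) {
                case "type": type = parts[1]; break;
                case "name": name = parts[1]; break;
                default:
                    // Any remaining tag (e.g. topic=test) becomes the scope,
                    // keeping per-topic metrics distinct for reporters.
                    if (scope.length() > 0) scope.append(',');
                    scope.append(kv);
            }
        }
        System.out.println(group + " | " + type + " | " + name + " | " + scope);
    }
}
```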
Thanks for the quick response, Jun (and many thanks to Jason for confirming
and further investigating the issue). I've tested the patch, and it does
fix the fundamental issue, but I do have a few comments that I'll leave on
the ticket.
On Tue, Jan 27, 2015 at 9:19 AM, Jun Rao j...@confluent.io
I added a comment to the ticket. I think it works for getting the data
disambiguated (I didn't actually test end to end to Graphite).
However, the naming scheme is not ideal for how metric UIs typically
present the metric tree (e.g. JMX tag syntax doesn't really translate).
Jason
On Tue, Jan
Allen, which version of Kafka are you using? And if you have multiple
brokers, did a controller migration happen before?
Guozhang
On Fri, Jan 23, 2015 at 3:56 PM, Allen Wang aw...@netflix.com.invalid
wrote:
Hello,
We tried the ReassignPartitionsCommand to move partitions to new
Jason, Kyle,
I created an 0.8.2 blocker https://issues.apache.org/jira/browse/KAFKA-1902
and attached a patch there. Could you test it out and see if it fixes the
issue with the reporter? The patch adds tags as scope in MetricName.
Thanks,
Jun
On Tue, Jan 27, 2015 at 7:39 AM, Jun Rao
Thanks Rajiv, looking forward to your prototype.
Guozhang
On Mon, Jan 26, 2015 at 2:30 PM, Rajiv Kurian ra...@signalfuse.com wrote:
Hi Guozhang,
I am a bit busy at work. When I get the chance I'll definitely try to get a
proof of concept going. Not the kafka protocol, but just the buffering
Hi Nitin,
a. The follower replica will be kicked out of the ISR (i.e. causing the
partition to be under-replicated) when 1) it has lagged too far behind the
leader replica in terms of number of messages (controlled by the config
replica.lag.max.messages), or 2) it has not fetched from the leader for some
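The two ISR-eviction conditions above correspond to broker settings along these lines (a sketch for an 0.8.x broker; the values shown are the usual defaults, so treat them as illustrative):

```properties
# server.properties - settings governing ISR membership on 0.8.x brokers
replica.lag.max.messages=4000   # evicted from ISR when behind the leader by more messages
replica.lag.time.max.ms=10000   # evicted from ISR when no fetch arrives within this window
```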
Also, do you have delete.topic.enable=true on all brokers?
The automatic topic creation can fail if the default number of
replicas is greater than number of available brokers. Check the
default.replication.factor parameter.
Gwen
On Tue, Jan 27, 2015 at 12:29 AM, Joel Koshy jjkosh...@gmail.com
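Gwen's point can be summarized in a broker config fragment (a hypothetical sketch for a 3-broker cluster, not taken from the poster's environment):

```properties
# server.properties - auto-created topics inherit default.replication.factor;
# topic creation fails if it exceeds the number of available brokers.
auto.create.topics.enable=true
default.replication.factor=3   # must be <= number of live brokers
delete.topic.enable=true
```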
On my brokers I am seeing this error log message:
Closing socket for /X because of error (X is the ip address of a consumer)
2015-01-27_17:32:58.29890 java.io.IOException: Connection reset by peer
2015-01-27_17:32:58.29890 at
sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
Where are you running ProducerPerformance in relation to ZK and the Kafka
brokers? You should definitely see much higher performance than this.
A couple of other things I can think of that might be going wrong: Are all
your VMs in the same AZ? Are you storing Kafka data in EBS or local
ephemeral
This was fixed in commit 6ab9b1ecd8 for KAFKA-1235 and it looks like that
will only be included in 0.8.2.
Guozhang, it looks like you wrote the patch, Jun reviewed it, but the bug
is still open and there's a comment that moved it to 0.9 after the commit
was already made. Was the commit a mistake