Yes, Jiangjie, I do see a lot of these "Starting preferred replica leader
election for partitions" errors in the logs. I also see a lot of produce request
failure warnings with the NotLeaderForPartitionException.
I tried switching auto.leader.rebalance off (setting it to false). I am still
noticing the rebalance.
Yes, the rebalance should not happen in that case. That is a little bit
strange. Could you try to launch a clean Kafka cluster with
auto.leader.election disabled and try pushing data?
When leader migration occurs, a NotLeaderForPartitionException is expected.
Jiangjie (Becket) Qin
On 3/6/15, 3:14
Hi Kafka masters,
Wondering if there are any open source solutions to transfer messages received
from Kafka to Hadoop HDFS? Thanks.
regards,
Lin
Thanks, Jiangjie, I will try with a clean cluster again.
Thanks
Zakee
On Mar 6, 2015, at 3:51 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
Yes, the rebalance should not happen in that case. That is a little bit
strange. Could you try to launch a clean Kafka cluster with
Hi,
I think you can look at open file descriptors (network connections use
FDs). For example:
https://apps.sematext.com/spm-reports/s/IoQDvdT0Ig -- all good
https://apps.sematext.com/spm-reports/s/v5Hvwta7PP -- Otis restarting 2
consumers
lsof probably shows it, too.
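A quick way to get the same number from inside the JVM is to count the entries under /proc/self/fd. This is a Linux-only sketch (the class name is mine); on other platforms the directory does not exist and the method returns -1:

```java
import java.io.File;

// Count this JVM's open file descriptors by listing /proc/self/fd.
// Linux-specific; every open socket a producer/consumer holds shows
// up here, which is what the SPM charts above are plotting.
public class FdCount {
    public static int openFds() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds());
    }
}
```

Watching this value climb after repeated producer/consumer creation is a cheap leak check when you can't attach lsof.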
Otis
Hi Guozhang,
Thanks for confirming.
It should be straightforward to make subscribe(TopicFilter) and
subscribe(TopicFilter, Partition) work for added/removed topics, since this
is mostly regex matching against zookeeper metadata. But any thoughts on
how repartitioning would work? (we need to let
I think this is great. I assume the form this would take would be a library
that implements the JMS api that wraps the existing java producer and
consumer?
Our past experience has been that trying to maintain all this stuff
centrally is too hard and tends to stifle rather than support innovation.
Hi Tao,
Yes, your understanding is correct. We probably should update the document
to make it more clear. Could you open a ticket for it?
Jiangjie (Becket) Qin
On 3/6/15, 1:23 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
After reading the source code of AbstractFetcherManager I found out
Hi team,
After reading the source code of AbstractFetcherManager I found out that
the usage of num.consumer.fetchers may not match what is described in the
Kafka doc. My interpretation of the Kafka doc is that the number of
fetcher threads is controlled by the value of
property
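For what it's worth, the mapping in AbstractFetcherManager is a hash of (topic, partition) modulo the configured fetcher count, so the property is an upper bound on fetcher threads per source broker, not an exact total. A rough Java rendering of that logic (the class name here is mine, and this is a sketch of the Scala code, not a copy):

```java
// Sketch of how AbstractFetcherManager picks a fetcher thread for a
// partition: hash the (topic, partition) pair and take it modulo
// num.consumer.fetchers. Partitions that hash to the same id share a
// thread, so fewer threads than the configured value may be created.
public class FetcherIdSketch {
    public static int fetcherId(String topic, int partitionId, int numFetchers) {
        // Mask to keep the hash non-negative (Math.abs is unsafe for MIN_VALUE)
        return ((31 * topic.hashCode() + partitionId) & 0x7fffffff) % numFetchers;
    }
}
```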
Hi team,
I am having java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off latest trunk with
commit 8f0003f9b694b4da5fbd2f86db872d77a43eb63f
The way I bring it up is
bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
A bit more context: I turned on async in producer.properties
On Sat, Mar 7, 2015 at 2:09 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I am having java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off latest trunk with
commit
1. Partition/member changes are caught on the server side, which then notifies
consumers to re-balance.
2. Topic changes are caught on the client side through metadata requests; the
client then re-joins the group with the new topic list so the server can
re-balance.
Guozhang
On Fri, Mar 6, 2015 at 8:49
One of our staff has been terrible at adding finally clauses to
close kafka resources.
Does the kafka scala/Java client maintain a count or list of open
producers/consumers/client connections?
It doesn't keep track specifically, but there are open sockets that may
take a while to clean themselves up.
Note that if you use the async producer and don't close the producer
nicely, you may miss messages as the connection will close before all
messages are sent. Guess how we found out? :)
You could also take a thread dump to try to find them by their network
threads. For example this is how new producer network threads are named:
String ioThreadName = "kafka-producer-network-thread" + (clientId.length() > 0 ? " | " + clientId : "");
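So a programmatic leak check is to scan live threads for that name prefix. A sketch (the deliberately named thread in main stands in for an unclosed producer):

```java
// Count live threads whose name carries the new producer's network-thread
// prefix; each unclosed producer instance leaves one behind.
public class LeakScan {
    public static long countProducerThreads() {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("kafka-producer-network-thread"))
                .count();
    }

    public static void main(String[] args) {
        // Fake an unclosed producer by naming a thread the way the client does.
        Thread fake = new Thread(() -> {
            try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
        }, "kafka-producer-network-thread | demo-client");
        fake.setDaemon(true);
        fake.start();
        System.out.println("producer network threads: " + countProducerThreads());
        fake.interrupt();
    }
}
```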
On Fri, Mar 6, 2015 at 1:04 PM, Gwen Shapira
Thank you Jay for your note.
So a JMSAdaptor (or maybe MQKafkaBridge?) prototype it is, then! Will run a
feature-compatibility feasibility check.
Thanks
Rekha
On 3/6/15, 8:45 AM, Jay Kreps jay.kr...@gmail.com wrote:
I think this is great. I assume the form this would take would be a
library
Spencer,
Kafka (and its clients) handle failover automatically for you. When you
create a topic, you can select a replication factor. For a replication
factor n, each partition of the topic will be replicated to n different
brokers. At any given time, one of those brokers is considered the
Hi Spencer,
You can configure your producers with a list of brokers. You can add all of
them, but usually at least two of the brokers in your cluster.
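As a sketch, with the new Java producer that list goes in "bootstrap.servers" (the old Scala producer used "metadata.broker.list" for the same purpose); the host names below are placeholders:

```java
import java.util.Properties;

// Give the producer several bootstrap brokers so it can still fetch
// cluster metadata if one of them is down. These are only used for the
// initial connection; after that the client discovers the full cluster.
public class BrokerListSketch {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka01:9092,kafka02:9092,kafka03:9092");
        return props;
    }
}
```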
Kind Regards,
Daniel Moreno
On Mar 6, 2015, at 23:43, Spencer Owen
so...@netdocuments.commailto:so...@netdocuments.com wrote:
I've setup a kafka
I've set up a Kafka cluster with 3 nodes.
Which node should I push the data to? I would normally push to kafka01, but if
that node goes down, then the entire cluster goes down.
How have other people solved this? Maybe an nginx reverse proxy?
This presentation from a recent Kafka meetup in NYC describes different
approaches.
http://www.slideshare.net/gwenshap/kafka-hadoop-for-nyc-kafka-meetup?ref=http://ingest.tips/2014/10/16/notes-from-kafka-meetup/
Its companion blog post is this:
You can also try the approach described here
http://blog.cloudera.com/blog/2014/11/flafka-apache-flume-meets-apache-kafka-for-event-processing/
Kind Regards,
Daniel
On Mar 6, 2015, at 23:20, Lin Ma lin...@gmail.commailto:lin...@gmail.com
wrote:
Hi Kafka masters,
Wondering if any open
Try this. https://github.com/linkedin/camus
Aditya
From: Lin Ma [lin...@gmail.com]
Sent: Friday, March 06, 2015 8:19 PM
To: users@kafka.apache.org
Subject: Kafka to Hadoop HDFS
Hi Kafka masters,
Wondering if any open source solutions, to transfer
I think I worked out the root cause
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset)) should
be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
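That matches the exception: %d needs a numeric argument, so passing the offset holder object instead of its numeric element blows up at trace time. A minimal Java reproduction of the failure mode (class name mine):

```java
import java.util.IllegalFormatConversionException;

// %d with a non-numeric argument throws IllegalFormatConversionException,
// which is exactly what formatting the offset wrapper (rather than the
// number inside it) would do.
public class FormatDemo {
    public static boolean throwsOnNonNumeric() {
        try {
            String.format("Updating offset for %s to %d", "topic-0", new Object());
            return false;
        } catch (IllegalFormatConversionException e) {
            return true;
        }
    }
}
```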
On Sat, Mar 7, 2015 at 2:12 AM, tao xiao xiaotao...@gmail.com wrote:
A
Hi, James,
You also mentioned you want to implement another critical component for
monitoring the data consistency between your sources and targets and correcting
the inconsistent data.
Unfortunately, data comparison is not easy when some applications/replications
still change your source and
Hi, James,
uh… iOS Gmail app crashed. Let me resend the email to answer your concern.
First, I am not a Kafka user. Like you, I am trying to see if Kafka can be used
for replication-related tasks. If Kafka can provide unit of work, the design
will be much simpler. As Guozhang said, Kafka