Hi team,
I was running the mirror maker off the trunk code and got an IOException when
configuring the mirror maker to use KafkaCSVMetricsReporter as the metrics
reporter.
Here is the exception I got:
java.io.IOException: Unable to create /tmp/csv1/BytesPerSec.csv
at
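In case it helps others hitting this, the CSV reporter is driven by a handful of properties, and a common cause of this IOException is that the configured directory is missing, not writable, or still holds CSV files from a previous run. A sketch, with property names as they appear in the stock 0.8.x server.properties (double-check against your build):

```properties
# Enable the CSV metrics reporter and point it at a directory
# that exists and is empty before startup
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.csv.metrics.reporter.enabled=true
kafka.csv.metrics.dir=/tmp/csv1
kafka.metrics.polling.interval.secs=5
```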
Alex,
I got a similar error before due to incorrect network binding of my laptop's
wireless interface. You can try setting advertised.host.name to the Kafka
server's hostname in server.properties and run it again.
On Sun, Feb 8, 2015 at 8:38 AM, Alex Melville amelvi...@g.hmc.edu wrote:
Howdy
Hi all,
I have two mirror maker processes running on two different machines,
fetching messages from the same topic from one data center to another data
center. These two processes are assigned to the same consumer group. If I
want no data loss or data duplication even when one of the mirror maker
Hi,
I discovered that the new mirror maker implementation in trunk now accepts
only one consumer.config property instead of a list of them, which means
we can only supply one source per mirror maker process. Is there a reason for
this? If I have multiple source Kafka clusters do I need to set up multiple
-1,AC-2,AC-3,BC-1/2/3,BC-4/5/6 respectively;
With createMessageStreamsByFilter("*C", 3) a total of 3 threads will be
created, consuming AC-1/BC-1/BC-2, AC-2/BC-3/BC-4, and AC-3/BC-5/BC-6
respectively.
Guozhang
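The distribution described above can be reproduced with a per-topic range split: each topic's partitions are divided across the shared threads independently. A plain-Java sketch with no Kafka dependency (the class and method names are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RangeAssignSketch {
    // For each topic independently, deal its partitions out across nThreads,
    // giving the first (partitions % nThreads) threads one extra partition.
    static List<List<String>> assign(Map<String, Integer> topicPartitions, int nThreads) {
        List<List<String>> threads = new ArrayList<>();
        for (int i = 0; i < nThreads; i++) threads.add(new ArrayList<>());
        for (Map.Entry<String, Integer> e : topicPartitions.entrySet()) {
            int nParts = e.getValue();
            int base = nParts / nThreads, extra = nParts % nThreads;
            int p = 1; // partitions named topic-1 .. topic-n as in the example
            for (int t = 0; t < nThreads; t++) {
                int take = base + (t < extra ? 1 : 0);
                for (int k = 0; k < take; k++) threads.get(t).add(e.getKey() + "-" + p++);
            }
        }
        return threads;
    }

    public static void main(String[] args) {
        Map<String, Integer> topics = new LinkedHashMap<>();
        topics.put("AC", 3); // AC-1 .. AC-3
        topics.put("BC", 6); // BC-1 .. BC-6
        System.out.println(assign(topics, 3));
    }
}
```

Running it prints [[AC-1, BC-1, BC-2], [AC-2, BC-3, BC-4], [AC-3, BC-5, BC-6]], matching the assignment above.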
On Tue, Feb 10, 2015 at 8:37 AM, tao xiao xiaotao...@gmail.com wrote:
Guozhang
, 2015 at 6:24 PM, tao xiao xiaotao...@gmail.com wrote:
Thank you Guozhang for your detailed explanation. In your example
createMessageStreamsByFilter("*C", 3), since threads are shared among
topics there may be situations where all 3 threads get stuck with
topic AC e.g. topic
Hi team,
I was trying to migrate my consumer offset from kafka to zookeeper.
Here are the original settings of my consumer:
props.put("offsets.storage", "kafka");
props.put("dual.commit.enabled", "false");
Here are the steps:
1. set dual.commit.enabled=true
2. restart my consumer and monitor offset lag
.
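For anyone following along, a sketch of what the kafka-to-zookeeper direction could look like as consumer properties. This mirrors the offsets-migration procedure in the 0.8.2 docs, which documents the zookeeper-to-kafka direction; running it in reverse is my assumption, so verify before relying on it:

```properties
# Phase 1: keep reading offsets from Kafka but commit to both back ends,
# then do a rolling restart of all consumers in the group
offsets.storage=kafka
dual.commit.enabled=true

# Phase 2 (a second rolling restart): read offsets from ZooKeeper;
# dual commit can be switched off once the migration is done
offsets.storage=zookeeper
dual.commit.enabled=false
```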
On 2/12/15, 7:30 PM, tao xiao xiaotao...@gmail.com wrote:
I used the one shipped with 0.8.2. It is pretty straightforward to
reproduce the issue.
Here are the steps to reproduce:
1. I have a consumer using high level consumer API with initial settings
offsets.storage=kafka
.
-Todd
On Mon, Feb 16, 2015 at 12:27 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you Todd for your detailed explanation. Currently I export all
metrics to graphite using the reporter configuration. Is there a way I can
do a similar thing with the offset checker?
On Mon, Feb 16, 2015 at 4:21
the mirrormaker and then spin up a console consumer to read from
the source cluster, I get 0 messages consumed.
Alex
On Sun, Feb 15, 2015 at 3:00 AM, tao xiao xiaotao...@gmail.com wrote:
Alex,
Are you sure you have data continually being sent to the topic in source
cluster after you
Hi team,
I got NPE when running the latest mirror maker that is in trunk
[2015-01-23 18:55:20,229] INFO
[kafkatopic-1_LM-SHC-00950667-1422010513674-cb0bb562], exception during
rebalance (kafka.consumer.ZookeeperConsumerConnector)
java.lang.NullPointerException
at
It happens every time I shutdown the connector. It doesn't block the
shutdown process though
On Tue, Feb 10, 2015 at 1:09 AM, Guozhang Wang wangg...@gmail.com wrote:
Is this exception transient or consistent and blocking the shutdown
process?
On Mon, Feb 9, 2015 at 3:07 AM, tao xiao xiaotao
Hi team,
I am comparing the differences between
ConsumerConnector.createMessageStreams
and ConsumerConnector.createMessageStreamsByFilter. My understanding is
that createMessageStreams creates x number of threads (x is the number of
threads passed in to the method) dedicated to the specified
easiest to just monitor MaxLag as that reports the maximum
of all the lag metrics.
On Fri, Feb 13, 2015 at 05:03:28PM +0800, tao xiao wrote:
Hi team,
Is there a metric that shows the consumer lag of a particular consumer
group, similar to what the offset checker provides?
--
Regards
Hi team,
I got java.nio.channels.ClosedByInterruptException when
closing ConsumerConnector using kafka 0.8.2
Here is the exception
2015-02-09 19:04:19 INFO kafka.utils.Logging$class:68 -
[test12345_localhost], ZKConsumerConnector shutting down
2015-02-09 19:04:19 INFO
Hi team,
If I set offsets.storage=kafka can I still use auto.commit.enable to turn
off auto commit and auto.commit.interval.ms to control the commit interval? The
documentation says the above two properties are used to control offset
commits to zookeeper.
--
Regards,
Tao
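My reading of the consumer config table is that auto.commit.enable and auto.commit.interval.ms govern the high-level consumer's commit loop regardless of which back end offsets.storage selects, so a fragment like the following should behave as intended (worth verifying against the docs for your version):

```properties
offsets.storage=kafka
auto.commit.enable=false         # commit manually, e.g. via ConsumerConnector.commitOffsets()
# auto.commit.interval.ms=60000  # only relevant while auto commit is enabled
```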
You can get the partition number and offset of the message by
MessageAndMetadata.partition() and MessageAndMetadata.offset().
For your scenario you can turn off auto commit with auto.commit.enable=false and
then commit by yourself after finishing message consumption.
On Mon, Feb 16, 2015 at 1:40 PM,
to catch a broken consumer, as
well as an active consumer that is just falling behind.
-Todd
On Fri, Feb 13, 2015 at 9:34 PM, tao xiao xiaotao...@gmail.com wrote:
Thanks Joel. But I discover that both MaxLag and FetcherLagMetrics are
always
much smaller than the lag shown in offset
Hi,
To get it working you can turn off the CSV reporter.
On Thu, Feb 5, 2015 at 1:06 PM, Xinyi Su xiny...@gmail.com wrote:
Hi,
Today I updated the Kafka cluster from 0.8.2-beta to 0.8.2.0 and ran the Kafka
producer performance test.
The test cannot continue because of some exceptions being thrown
Hi team,
I have two consumer instances with the same group id connecting to two
different topics with 1 partition created for each. One consumer uses
partition.assignment.strategy=roundrobin and the other one uses default
assignment strategy. Both consumers have 1 thread spawned internally and
-localhost-1426605370072-904d6fba-0
On Tue, Mar 17, 2015 at 11:30 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have two consumer instances with the same group id connecting to two
different topics with 1 partition created for each. One consumer uses
partition.assignment.strategy=roundrobin
You can set the producer property retries to a value other than 0. Details can be found
here:
http://kafka.apache.org/documentation.html#newproducerconfigs
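As a concrete example of the linked new-producer configs, a retry setup might look like this (property names from the config table; the values are illustrative only):

```properties
retries=3             # resend on transient errors instead of failing the send immediately
retry.backoff.ms=100  # pause between attempts
acks=all              # wait for the full ISR before treating a send as successful
```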
On Fri, Mar 20, 2015 at 3:01 PM, Samuel Chase samebch...@gmail.com wrote:
Hello Everyone,
In the new Java Producer API, the Callback code in
here is the slide
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
On Sat, Mar 21, 2015 at 2:36 AM, Xiao lixiao1...@gmail.com wrote:
Hi, James,
Thank you for sharing it!
The links of videos and slides are the same. Could you check the link of
slides?
Hi,
I created a message stream in my consumer using connector
.createMessageStreamsByFilter(new Whitelist("mm-benchmark-test\\w*"), 5); I
have 5 topics in my cluster and each of the topics has only one partition.
My understanding of wildcard streams is that multiple streams are shared
between
different topic name in destination cluster. I mean, can I
have different topic names for the source and destination clusters for
mirroring? If yes, how can I map a source topic to a destination topic name?
SunilKalva
On Mon, Mar 9, 2015 at 6:41 AM, tao xiao xiaotao...@gmail.com wrote:
Ctrl+c
I ended up running kafka-reassign-partitions.sh to reassign partitions to
different nodes
On Tue, Mar 10, 2015 at 11:31 AM, sy.pan shengyi@gmail.com wrote:
Hi, tao xiao and Jiangjie Qin
I encounter with the same issue, my node had recovered from high load
problem (caused by other
from the other partitions?
Thanks,
-James
On Feb 11, 2015, at 8:13 AM, Guozhang Wang wangg...@gmail.com wrote:
The new consumer will be released in 0.9, which is targeted for end of
this
quarter.
On Tue, Feb 10, 2015 at 7:11 PM, tao xiao xiaotao...@gmail.com
wrote:
Do you
Hi team,
After reading the source code of AbstractFetcherManager I found out that
the usage of num.consumer.fetchers may not match what is described in the
Kafka doc. My interpretation of the Kafka doc is that the number of
fetcher threads is controlled by the value of
property
consumer to fetch the message on both ends to measure the latency.
Guozhang
On Wed, Mar 4, 2015 at 11:07 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
Is there a built-in metric that can measure the end to end latency in MM?
--
Regards,
Tao
--
-- Guozhang
--
Regards,
Tao
Hi team,
I am getting a java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off the latest trunk with
commit 8f0003f9b694b4da5fbd2f86db872d77a43eb63f.
The way I bring it up is:
bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
A bit more context: I turned on async in producer.properties
On Sat, Mar 7, 2015 at 2:09 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I am getting a java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off the latest trunk with
commit
I think I worked out the root cause
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset)) should
be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
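The conversion failure here can be reproduced without Kafka at all: %d rejects any argument that is not an integral type, which is exactly what happens when the list node rather than its numeric element is passed (plain Java sketch; the names are illustrative):

```java
import java.util.IllegalFormatConversionException;

public class FormatBugDemo {
    public static void main(String[] args) {
        Object node = new Object(); // stand-in for the offset list node
        long element = 42L;         // the numeric offset held inside the node

        boolean threw = false;
        try {
            // Passing the node itself, as in the buggy trace call:
            String.format("Updating offset for %s to %d.", "topic-0", node);
        } catch (IllegalFormatConversionException e) {
            threw = true;           // %d cannot format a plain Object
        }
        System.out.println(threw);  // true

        // Passing the numeric element works:
        System.out.println(String.format("Updating offset for %s to %d.", "topic-0", element));
    }
}
```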
On Sat, Mar 7, 2015 at 2:12 AM, tao xiao xiaotao...@gmail.com wrote
PM, tao xiao xiaotao...@gmail.com wrote:
I think I worked out the root cause
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset))
should
be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
On Sat, Mar 7, 2015
with
--whitelist you could already specify regex to do filtering.
On Thu, Mar 12, 2015 at 5:56 AM, tao xiao xiaotao...@gmail.com wrote:
Hi Guozhang,
I meant topicfilter, not topic-count. Sorry for the confusion.
What I want to achieve is to pass my own customized topicfilter
of
blacklist and whitelist I can easily achieve this by having something like
--whitelist topic.* --blacklist topic.1
On Thu, Mar 12, 2015 at 9:10 PM, tao xiao xiaotao...@gmail.com wrote:
something like dynamic filtering that can be updated at runtime or deny
all but allow a certain set of topics
.
Guozhang
On Thu, Mar 12, 2015 at 6:10 AM, tao xiao xiaotao...@gmail.com wrote:
something like dynamic filtering that can be updated at runtime or deny
all
but allow a certain set of topics that cannot be specified easily by
regex
On Thu, Mar 12, 2015 at 9:06 PM, Guozhang Wang wangg
Hi,
I have a use case where I need to consume a list of topics whose names
match the pattern topic.* except for one, topic.10. Is there a way
that I can combine the use of whitelist and blacklist so that I can achieve
something like accept all topics with regex topic.* but exclude
org.apache.kafka.clients.producer.Producer is the new api producer
On Tue, Mar 10, 2015 at 11:22 PM, Corey Nolet cjno...@gmail.com wrote:
Thanks Jiangjie! So what version is considered the new API? Is that the
javaapi in version 0.8.2?
On Mon, Mar 9, 2015 at 2:29 PM, Jiangjie Qin
I actually meant whether we can achieve this in mirror maker.
On Tue, Mar 10, 2015 at 10:52 PM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I have a use case where I need to consume a list of topics whose names
match the pattern topic.* except for one, topic.10. Is there a way
that I can
at 7:11 PM, tao xiao xiaotao...@gmail.com wrote:
Do you know when the new consumer API will be publicly available?
On Wed, Feb 11, 2015 at 10:43 AM, Guozhang Wang wangg...@gmail.com
wrote:
Yes, it can get stuck. For example, AC and BC are processed by two
different processes and AC
On Mar 11, 2015, at 5:00 PM, tao xiao xiaotao...@gmail.com wrote:
Fetcher threads are created on a per-broker basis; there is at least one fetcher
thread per broker. A fetcher thread sends a fetch
request to its broker to
ask for all partitions. So if A, B, C
from a topic
after you stop consuming from it?
Jiangjie (Becket) Qin
On 3/12/15, 8:05 AM, tao xiao xiaotao...@gmail.com wrote:
Yes, you are right. a dynamic topicfilter is more appropriate where I can
filter topics at runtime via some kind of interface e.g. JMX
On Thu, Mar 12, 2015 at 11:03
since the offsets will be
committed. If you change the filtering dynamically back to whitelist these
topics, you will lose the data that gets consumed during the period of the
blacklist.
Guozhang
On Thu, Mar 12, 2015 at 10:01 PM, tao xiao xiaotao...@gmail.com wrote:
Yes, that will work
will not achieve your goal, since it is still static.
Guozhang
On Thu, Mar 12, 2015 at 6:30 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you Guozhang for your advice. A dynamic topic filter is what I need
so that I can stop a topic consumption when I need to at runtime.
On Thu, Mar 12
11, 2015 at 11:59 PM, tao xiao xiaotao...@gmail.com wrote:
The topic list is not specified in consumer.properties and I don't think
there is any property in consumer config that allows us to specify what
topics we want to consume. Can you point me to the property if there is
any?
On Thu
The reason you need to use a.getBytes() is that the default serializer.class
is kafka.serializer.DefaultEncoder, which takes byte[] as input. An array's
hash code is not based on the equality of its elements, and
every time a new byte array is created, which is the case in your sample
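The array point is easy to see without Kafka: two byte[] with identical contents are still distinct objects, so equals() and hashCode() (which arrays inherit from Object) do not treat them as the same key. A plain-Java sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ByteArrayKeyDemo {
    public static void main(String[] args) {
        byte[] k1 = "a".getBytes(StandardCharsets.UTF_8);
        byte[] k2 = "a".getBytes(StandardCharsets.UTF_8);

        // Arrays inherit identity-based equals/hashCode from Object,
        // so equal contents do not mean equal keys:
        System.out.println(k1.equals(k2));         // false
        // Content equality has to be asked for explicitly:
        System.out.println(Arrays.equals(k1, k2)); // true
    }
}
```

So any partitioning or caching keyed on the raw byte[] object sees every getBytes() call as a brand-new key.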
Ctrl+c is clean shutdown. kill -9 is not
On Mon, Mar 9, 2015 at 2:32 AM, Alex Melville amelvi...@g.hmc.edu wrote:
What does a clean shutdown of the MM entail? So far I've just been using
Ctrl + C to send an interrupt to kill it.
Alex
On Sat, Mar 7, 2015 at 10:59 PM, Jiangjie Qin
:
Tao,
In MM people can pass in consumer configs, in which people can specify
consumption topics, either in regular topic list format or whitelist /
blacklist. So I think it already does what you need?
Guozhang
On Tue, Mar 10, 2015 at 10:09 PM, tao xiao xiaotao...@gmail.com wrote:
Thank
Did you stop mirror maker?
On Thu, Mar 12, 2015 at 8:27 AM, Saladi Naidu naidusp2...@yahoo.com.invalid
wrote:
We have 3 DC's and created 5 node Kafka cluster in each DC, connected
these 3 DC's using Mirror Maker for replication. We were conducting
performance testing using Kafka Producer
Hi community,
I wanted to know if the solution I supplied can fix the
IllegalMonitorStateException
issue. Our work is pending on this and we'd like to proceed ASAP. Sorry for
bothering.
On Mon, Mar 23, 2015 at 4:32 PM, tao xiao xiaotao...@gmail.com wrote:
I think I worked out the answer
. But I don't know if this is a necessary change just
because of the case you saw.
Jiangjie (Becket) Qin
On 3/24/15, 5:05 PM, tao xiao xiaotao...@gmail.com wrote:
The other question I have is the fact that consumer client is unaware of
the health status of underlying fetcher thread
) Qin
On 3/24/15, 4:31 PM, tao xiao xiaotao...@gmail.com wrote:
Hi community,
I wanted to know if the solution I supplied can fix the
IllegalMonitorStateException
issue. Our work is pending on this and we'd like to proceed ASAP. Sorry
for
bothering.
On Mon, Mar 23, 2015 at 4:32 PM, tao
pick them up while fetcher thread is down.
On Wed, Mar 25, 2015 at 8:00 AM, tao xiao xiaotao...@gmail.com wrote:
Thanks Jiangjie. Can I reuse KAFKA-1997 or should I create a new ticket?
On Wed, Mar 25, 2015 at 7:58 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Hi Xiao,
I think the fix
You can use kafka-console-consumer consuming the topic from the beginning
kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
--from-beginning
On Thu, Mar 26, 2015 at 12:17 AM, Victor L vlyamt...@gmail.com wrote:
Can someone let me know how to dump contents of topics?
I have
for /consumers/KafkaMaker/offsets/testtopic/0*. Complete log attached.
Regards,
Nitin Kumar Sharma.
On Thu, Mar 26, 2015 at 11:24 AM, tao xiao xiaotao...@gmail.com
wrote:
Both consumer-1 and consumer-2 are properties of source clusters
mirror
maker transfers data from
Hi,
I was running a mirror maker and got a
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker.
[2015-03-21 02:11:53,069] INFO Reconnect due to socket error:
java.io.EOFException: Received -1 when reading from
PM, Harsha ka...@harsha.io wrote:
you can increase num.replica.fetchers (by default it's 1) and also try
increasing replica.fetch.max.bytes
-Harsha
On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
Hi team,
I had a replica node that was shutdown improperly due to no disk space
left. I
Hi team,
I had a replica node that was shutdown improperly due to no disk space
left. I managed to clean up the disk and restarted the replica, but the
replica has never caught up with the leader since then, as shown below
Topic:test PartitionCount:1 ReplicationFactor:3 Configs:
Topic: test Partition: 0
:15 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
tests on the two brokers to verify how leader got elected. Here are the
steps:
1. started 2 brokers
2. created a topic with partition=1 and replication-factor
will not happen.
Jiangjie (Becket) Qin
On 3/2/15, 7:16 PM, tao xiao xiaotao...@gmail.com wrote:
Since I reused the same consumer group to consume the messages after step 6,
there was no data loss. But if I create a new consumer group,
for sure the new consumer will suffer data
You can set the consumer config auto.offset.reset=largest
Ref: http://kafka.apache.org/documentation.html#consumerconfigs
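For completeness, the fragment under discussion would look like this in consumer.properties (semantics per the 0.8.x consumer config table):

```properties
# applies when the group has no committed offset (or the offset is out of range)
auto.offset.reset=largest   # start from the tail; use smallest to replay from the head
```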
On Tue, Mar 3, 2015 at 8:30 PM, Achanta Vamsi Subhash
achanta.va...@flipkart.com wrote:
Hi,
We are using HighLevelConsumer and when a new subscription is added to the
Hi team,
I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
tests on the two brokers to verify how leader got elected. Here are the
steps:
1. started 2 brokers
2. created a topic with partition=1 and replication-factor=2. Now brokers 1
was elected as leader
3. sent 1000
need to know the total
number
of partitions before I call Producer.send().
Alex
On Thu, Feb 26, 2015 at 7:32 PM, tao xiao xiaotao...@gmail.com
wrote:
Gaurav,
You can get the partition number the message belongs to via
MessageAndMetadata.partition
Hi team,
Is there a built-in metric that can measure the end to end latency in MM?
--
Regards,
Tao
Thanks guys. With unclean.leader.election.enable set to false the issue is
fixed.
On Tue, Mar 3, 2015 at 2:50 PM, Gwen Shapira gshap...@cloudera.com wrote:
of course :)
unclean.leader.election.enable
On Mon, Mar 2, 2015 at 9:10 PM, tao xiao xiaotao...@gmail.com wrote:
How do I achieve point
Gaurav,
You can get the partition number the message belongs to via
MessageAndMetadata.partition()
On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao j...@confluent.io wrote:
The partition api is exposed to the consumer in 0.8.2.
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal
Both consumer-1 and consumer-2 are properties of the source clusters mirror
maker transfers data from. Mirror maker is designed to be able to consume
data from N sources (N >= 1) and transfer data to one destination cluster.
You are free to supply as many consumer properties as you want to instruct
, TimeUnit.MILLISECONDS)
}
On Mon, Mar 23, 2015 at 1:50 PM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I was running a mirror maker and got
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker.
[2015-03-21 02:11
LinkedIn has an excellent tool that monitors lag, data loss, data duplication,
etc. Here is the reference:
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
It is not open sourced though.
On Mon, Mar 23, 2015 at 3:26 PM, sunil kalva kalva.ka...@gmail.com wrote:
,
Nitin Kumar Sharma.
On Wed, Apr 8, 2015 at 10:48 PM, tao xiao xiaotao...@gmail.com wrote:
Metrics like BytesPerSec and FetchRequestRateAndTimeMs can help you check
whether
the consumer has problems processing messages
On Thu, Apr 9, 2015 at 2:40 AM, nitin sharma
kumarsharma.ni...@gmail.com
,
44.0213, 16683, 31716.7300
Regards,
Nitin Kumar Sharma.
On Mon, Apr 13, 2015 at 3:51 PM, tao xiao xiaotao...@gmail.com wrote:
num.consumer.fetchers means the max number of fetcher threads that can be
spawned. It doesn't necessarily mean you get as many fetcher threads
as
you specify.
Hi team,
I observed a java.lang.IllegalMonitorStateException thrown
from AbstractFetcherThread in mirror maker when it is trying to build the
fetch request. Below is the error:
[2015-04-23 16:16:02,049] ERROR
[ConsumerFetcherThread-group_id_localhost-1429830778627-4519368f-0-7],
Error due to
: 2015-04-29
12:55:56.313|1|11|Normal|-79.74165300042|42.1304580009
On Wed, Apr 29, 2015 at 10:11 AM, tao xiao xiaotao...@gmail.com wrote:
example.shutdown() in ConsumerGroupExample closes all consumer
connections
to Kafka. Remove this line and the consumer threads will run forever.
On Wed, Apr 29, 2015 at 9:42 PM, christopher palm cpa...@gmail.com wrote:
Hi All,
I am trying to get a multi threaded HL consumer working against a 2
Hi team,
I have a 12-node cluster that has 800 topics, each of which has only 1
partition. I observed that one of the nodes keeps generating
NotLeaderForPartitionException, which causes the node to be unresponsive to
all requests. Below is the exception:
[2015-05-07 04:16:01,014] ERROR
see this issue, can
you grep the controller log for topic partition in question and see if
there is anything interesting?
Thanks.
Jiangjie (Becket) Qin
On 5/14/15, 3:43 AM, tao xiao xiaotao...@gmail.com wrote:
Yes, it does exist in ZK and the node that had
Garry,
Do you mind sharing the source code you used for the profiling?
On Sun, May 17, 2015 at 4:59 PM, Garry Turkington
g.turking...@improvedigital.com wrote:
Hi Guozhang/Jay/Becket,
Thanks for the responses.
Regarding my point on performance dropping when the number of partitions
Yes, it does exist in ZK and the node that had the
NotLeaderForPartitionException
is the leader of the topic
On Thu, May 14, 2015 at 6:12 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Does this topic exist in Zookeeper?
On 5/12/15, 11:35 PM, tao xiao xiaotao...@gmail.com wrote:
Hi
Hi,
Any updates on this issue? I keep seeing this issue happening over and over
again
On Thu, May 7, 2015 at 7:28 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have a 12-node cluster that has 800 topics, each of which has only 1
partition. I observed that one of the nodes keeps
is safe.
On Wed, Apr 15, 2015 at 3:45 PM, Guozhang Wang wangg...@gmail.com wrote:
Hello Tao,
Do you think the solution to KAFKA-2056 will resolve this issue? It will be
included in 0.8.3 release.
Guozhang
On Wed, Apr 15, 2015 at 2:21 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I
Hi team,
I discovered an issue: when a high level consumer with roundrobin
assignment strategy consumes a topic that hasn't been created on the broker, an
NPE is thrown during the partition rebalancing phase. I use Kafka
0.8.2.1.
Here are the steps to reproduce:
1. create a high level consumer
fetch.message.max.bytes=5243880
Regards,
Nitin Kumar Sharma.
On Tue, Mar 31, 2015 at 12:36 PM, tao xiao xiaotao...@gmail.com wrote:
Can you attach your mirror maker log?
On Wed, Apr 1, 2015 at 12:28 AM, nitin sharma
kumarsharma.ni...@gmail.com
wrote:
i tried
migration by MirrorMaker?
Regards,
Nitin Kumar Sharma.
On Tue, Apr 7, 2015 at 10:10 PM, tao xiao xiaotao...@gmail.com wrote:
You may need to look into the consumer metrics and producer metrics to
identify the root cause. metrics in kafka.consumer and kafka.producer
categories will help
since the follower is still in ISR.
Yes, we will lose those 2000 messages.
Mayuresh
Sent from my iPhone
On May 20, 2015, at 8:31 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I know that if a broker is behind the leader by no more than
replica.lag.max.messages
Hi team,
I know that if a broker is behind the leader by no more than
replica.lag.max.messages
the broker is considered in sync with the leader. Considering a situation
where I have unclean.leader.election.enable=true set in brokers and the
follower is now 2000 messages behind (the default
Hi,
I have two mirror makers, A and B, both subscribed to the same whitelist.
During topic rebalancing, mirror maker A encountered a
ZkNoNodeException and then stopped all connections, but mirror maker B
didn't pick up the topics that were consumed by A and left some of the
topics
to leader, and once when
checking for acks. The first error is thrown if we detect a small ISR
before writing to the leader. The second if the ISR shrank after we
wrote to the leader but before we got enough acks.
Gwen
On Mon, Jun 8, 2015 at 2:51 AM, tao xiao xiaotao...@gmail.com wrote:
Hi
Hi team,
What is the difference between producer error NOT_ENOUGH_REPLICAS and
NOT_ENOUGH_REPLICAS_AFTER_APPEND? Does the latter one imply that the message
has been written to the leader log successfully? If I have retry turned on
in producer does it mean that duplicated messages may be written to
I use commit 9e894aa0173b14d64a900bcf780d6b7809368384 from trunk code
On Wed, 10 Jun 2015 at 01:09 Jiangjie Qin j...@linkedin.com.invalid wrote:
Which version of MM are you running?
On 6/9/15, 4:49 AM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I have two mirror makers A and B both
ErrorLoggingCallback needs some change, though. Can we only
store the value bytes when logAsString is set to true? That looks more
reasonable to me.
Jiangjie (Becket) Qin
On 6/21/15, 3:02 AM, tao xiao xiaotao...@gmail.com wrote:
Yes, I agree with that. It is even better if we can supply our own
Yes, you are right. Will update the patch
On Mon, Jun 22, 2015 at 12:16 PM Jiangjie Qin j...@linkedin.com.invalid
wrote:
Should we still store the value bytes when logAsString is set to TRUE and
only store the length when logAsString is set to FALSE.
On 6/21/15, 7:29 PM, tao xiao xiaotao
Patch updated. please review
On Mon, 22 Jun 2015 at 12:24 tao xiao xiaotao...@gmail.com wrote:
Yes, you are right. Will update the patch
On Mon, Jun 22, 2015 at 12:16 PM Jiangjie Qin j...@linkedin.com.invalid
wrote:
Should we still store the value bytes when logAsString is set to TRUE
Carl,
I doubt the change you proposed will have an at-least-once guarantee.
consumedOffset
is the next offset of the message that is being returned from
iterator.next(). For example the message returned is A with offset 1 and
then consumedOffset will be 2 set to currentTopicInfo. While the
Hi,
I have 3 high level consumers with the same group id. One of the consumers
goes down; I know rebalance will kick in on the remaining two consumers.
What happens if one of the remaining consumers is very slow during
rebalancing and hasn't released ownership of some of the topics? Will the
itself but just the length if people agree that
for MM we probably are not interested in its message value in callback.
Thoughts?
Guozhang
On Wed, Jun 17, 2015 at 1:06 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you for the reply.
Patch submitted https://issues.apache.org/jira
that if interested.
Thanks,
Jiangjie (Becket) Qin
On 6/13/15, 11:39 AM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I am using mirror maker in trunk to replicate data across two data centers.
While the destination broker was under heavy load and unresponsive, the
send
rate of mirror maker was very
Hi,
I am using mirror maker in trunk to replicate data across two data centers.
While the destination broker was under heavy load and unresponsive, the send
rate of mirror maker was very low and the available producer buffer was
quickly filled up. In the end mirror maker threw an OOME. Detailed
Hi,
Just wondering why setConsumerRebalanceListener is not exposed
in kafka.javaapi.consumer.ConsumerConnector? In the latest trunk code
setConsumerRebalanceListener is
in kafka.javaapi.consumer.ZookeeperConsumerConnector but not in
kafka.javaapi.consumer.ConsumerConnector which makes the method
That is so cool. Thank you
On Sun, 28 Jun 2015 at 04:29 Guozhang Wang wangg...@gmail.com wrote:
Tao, I have added you to the contributor list of Kafka so you can assign
tickets to yourself now.
I will review the patch soon.
Guozhang
On Thu, Jun 25, 2015 at 2:54 AM, tao xiao xiaotao