Event publishing is different from database replication. Kafka is used for
change publishing, or perhaps also for sending changes (recorded in files).
Thanks,
Xiao Li
On Mar 17, 2015, at 7:26 PM, Arya Ketan ketan.a...@gmail.com wrote:
AFAIK, LinkedIn uses Databus to do the same
people share it
with us? I believe it can help us a lot.
Thanks,
Xiao Li
On Mar 17, 2015, at 12:26 PM, James Cheng jch...@tivo.com wrote:
This is a great set of projects!
We should put this list of projects on a site somewhere so people can more
easily see and refer to it. These aren't
I think this is a usability issue. It might need an extra admin tool to verify
if all configuration settings are correct, even if the broker can return an
error message to the consumers.
Thanks,
Xiao Li
On Mar 17, 2015, at 5:18 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
The problem
Hi, James,
Thank you for sharing it!
The links for the videos and slides are the same. Could you check the link for
the slides?
Xiao Li
On Mar 20, 2015, at 11:30 AM, James Cheng jch...@tivo.com wrote:
For those who missed it:
The Kafka Audit tool was also presented at the 1/27 Kafka meetup
and memory resources.
Best wishes,
Xiao Li
On Mar 5, 2015, at 11:07 AM, James Cheng jch...@tivo.com wrote:
On Mar 5, 2015, at 12:59 AM, Xiao lixiao1...@gmail.com wrote:
Hi, James,
This design regarding the restart point has a few potential issues, I think.
- The restart point
of.
— The recovery points (offsets) in Kafka recovery-point file,
— The offsets and IDs of the last message in the partitions.
— Your local last published message IDs.
Best wishes,
Xiao Li
On Mar 5, 2015, at 11:07 AM, James Cheng jch...@tivo.com wrote:
On Mar 5, 2015, at 12:59 AM, Xiao lixiao1
Hi, Pete,
Thank you for sharing your experience with me!
sendfile and mmap are common system calls, but it sounds like we still need to
consider at least the file-system differences when deploying Kafka.
Cross-platform support is a headache. : )
Best wishes,
Xiao Li
On Mar 10, 2015
, the design proposal of “transactional messaging” misses a design change in
the Log recovery? Recovery checkpoints might be in the middle of multiple
in-flight transactions.
Thank you very much!
Xiao Li
On Mar 10, 2015, at 1:01 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
Hi Xiao,
For z
protocol when the transaction
is only against a single partition?
We do have monster transactions, which are normally caused by uncommitted batch
jobs. However, that should be very rare. Maybe monthly, quarterly or yearly.
Thank you very much!
Xiao Li
On Mar 11, 2015, at 9:06 AM, Guozhang
/presentation/pillai
Thanks,
Xiao Li
Hi, all,
In my previous note, the two check points per partition have to be stored in
different files. Otherwise, the files could be corrupted.
Thanks,
Xiao Li
On Mar 2, 2015, at 10:25 PM, Xiao lixiao1...@gmail.com wrote:
Hi, all,
I just started reading the source codes of Kafka
,
Xiao Li
On Mar 4, 2015, at 8:01 AM, Jay Kreps jay.kr...@gmail.com wrote:
Hey Xiao,
1. Nothing prevents applying transactions transactionally on the
destination side, though that is obviously more work. But I think the key
point here is that much of the time the replication is not Oracle
, you
can have multiple producers publish the messages at the same time. This could
improve your throughput, and your consumers can easily identify whether any
message was lost for any reason.
Best wishes,
Xiao Li
On Mar 4, 2015, at 4:59 PM, James Cheng jch...@tivo.com wrote:
Another thing
to the others too?
Night,
Xiao Li
On Mar 4, 2015, at 9:00 AM, Jay Kreps jay.kr...@gmail.com wrote:
Hey Xiao,
Yeah I agree that without fsync you will not get durability in the case of
a power outage or other correlated failure, and likewise without
replication you won't get durability in the case
above.
Best wishes,
Xiao Li
On Mar 3, 2015, at 4:23 PM, Xiao lixiao1...@gmail.com wrote:
Hey Josh,
If you put different tables into different partitions or topics, it might
break transaction ACID at the target side. This is risky for some use cases.
Besides
been implemented in IBM Q
Replication since 2001.
Thanks,
Xiao Li
On Mar 3, 2015, at 3:36 PM, Jay Kreps jay.kr...@gmail.com wrote:
Hey Josh,
As you say, ordering is per partition. Technically it is generally possible
to publish all changes to a database to a single partition--generally
. Unfortunately, based on my
understanding, Kafka is unable to do this because it does not fsync regularly,
in order to achieve better throughput.
Best wishes,
Xiao Li
On Mar 3, 2015, at 3:45 PM, Xiao lixiao1...@gmail.com wrote:
Hey Josh,
Transactions can be applied in parallel in the consumer
to fully understand Kafka source codes before
using it.
Best wishes,
Xiao Li
On Mar 4, 2015, at 5:18 AM, Josh Rader jrader...@gmail.com wrote:
Thanks everyone for your responses! These are great. It seems our case
matches closest to Jay's recommendations.
The one part that sounds a little
The LinkedIn Gobblin compaction tool uses Hive to perform the compaction. Does
that mean Lumos has been replaced?
Confused…
On Mar 17, 2015, at 10:00 PM, Xiao lixiao1...@gmail.com wrote:
Hi, all,
Do you know whether Linkedin plans to open source Lumos in the near future?
I found the answer
Hi guys and Jun,
We have a problem when adding a broker that broke down back to the cluster.
Hope you guys have some solution for it.
A cluster of 5 brokers (id=0~4) running Kafka 0.8.0 was used for log
aggregation. Because of some disk issues, a broker (id=1) went down.
We spent one week to
with the leader. Once it's fully
caught up, you can run the rebalance leader tool to move some leaders back
to the failed broker.
Thanks,
Jun
On Tue, Jan 21, 2014 at 8:24 PM, Xiao Bo xiaob...@gmail.com wrote:
Hi guys and Jun,
We have a problem when adding a broker that broke down back
Yes, the topic name is countinfo. Maybe the log generates a short
name automatically.
2014/1/22 Guozhang Wang wangg...@gmail.com
Hello,
What is your topic name? From the log it seems to be co, but from
list-topic it is countinfo.
Guozhang
On Tue, Jan 21, 2014 at 8:24 PM, Xiao Bo
the replication factor is 2 not 3.
Guozhang
On Tue, Jan 21, 2014 at 9:26 PM, Xiao Bo xiaob...@gmail.com wrote:
Yes, the topic name is countinfo. Maybe the log generates a short
name automatically.
2014/1/22 Guozhang Wang wangg...@gmail.com
Hello,
What is your topic name
[co,9] doesn't exist on 5 (kafka.server.KafkaApis)
[2014-01-21 18:00:00,924] WARN [Replica Manager on Broker 5]: While
recording the follower position, the partition [co,4] hasn't been created,
skip updating leader HW (kafka.server.ReplicaManager)
2014/1/22 Xiao Bo xiaob...@gmail.com
Thanks
Hi,
I have checked out the trunk code and tried to use Mirror Maker.
When I enabled the csv reporter in Mirror Maker consumer config
(--consumer.config=c1.properties)
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
Hi team,
I was running the mirror maker off the trunk code and got IOException when
configuring the mirror maker to use KafkaCSVMetricsReporter as the metric
reporter
Here is the exception I got
java.io.IOException: Unable to create /tmp/csv1/BytesPerSec.csv
at
Alex,
I got a similar error before due to incorrect network binding of my laptop's
wireless interface. You can try setting advertised.host.name to the Kafka
server's hostname in server.properties and run it again.
On Sun, Feb 8, 2015 at 8:38 AM, Alex Melville amelvi...@g.hmc.edu wrote:
Howdy
Hi all,
I have two mirror maker processes running on two different machines
fetching messages from same topic from one data center to another data
center. These two processes are assigned to the same consumer group. If I
want no data loss or data duplication even when one of the mirror maker
Hi,
I discovered that the new mirror maker implementation in trunk now only
accepts one consumer.config property instead of a list of them, which means
we can only supply one source per mirror maker process. Is there a reason for
it? If I have multiple source Kafka clusters, do I need to set up multiple
-1,AC-2,AC-3,BC-1/2/3,BC-4/5/6 respectively;
With createMessageStreamsByFilter("*C", 3) a total of 3 threads will be
created, and consuming AC-1/BC-1/BC-2, AC-2/BC-3/BC-4, AC-3/BC-5/BC-6
respectively.
Guozhang
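A minimal sketch of the wildcard-stream API discussed above, using the old
high-level consumer; the ZooKeeper address, group id, and the ".*C" pattern
(matching topics like AC and BC) are illustrative assumptions:

    import java.util.List;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.consumer.Whitelist;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class WildcardStreamsExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumed local ZooKeeper
            props.put("group.id", "wildcard-example");        // assumed group id

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One topic filter, 3 streams: the partitions of every matching topic
            // are spread across these 3 streams/threads, as described above.
            List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreamsByFilter(new Whitelist(".*C"), 3);

            // Each stream would normally be handed to its own worker thread.
            System.out.println("Created " + streams.size() + " streams");
        }
    }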
On Tue, Feb 10, 2015 at 8:37 AM, tao xiao xiaotao...@gmail.com wrote:
Guozhang
, 2015 at 6:24 PM, tao xiao xiaotao...@gmail.com wrote:
Thank you Guozhang for your detailed explanation. In your example
createMessageStreamsByFilter("*C", 3), since threads are shared among
topics there may be a situation where all 3 threads get stuck with
topic AC e.g. topic
Hi team,
I was trying to migrate my consumer offset from kafka to zookeeper.
Here is the original settings of my consumer
props.put("offsets.storage", "kafka");
props.put("dual.commit.enabled", "false");
Here are the steps:
1. set dual.commit.enabled=true
2. restart my consumer and monitor offset lag
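A minimal sketch of the two-phase configuration implied by the steps above,
assuming the documented dual-commit migration path; verify the exact rollout
against the Kafka documentation for your version:

    import java.util.Properties;

    public class OffsetMigrationConfigs {
        // Phase 1: keep Kafka-based offsets as the source of truth, but also
        // commit to ZooKeeper so both stores stay in sync.
        static Properties phaseOne() {
            Properties props = new Properties();
            props.put("offsets.storage", "kafka");
            props.put("dual.commit.enabled", "true");
            return props;
        }

        // Phase 2 (after a rolling restart of all consumers in the group):
        // switch the source of truth to ZooKeeper and drop the dual commit.
        static Properties phaseTwo() {
            Properties props = new Properties();
            props.put("offsets.storage", "zookeeper");
            props.put("dual.commit.enabled", "false");
            return props;
        }
    }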
.
On 2/12/15, 7:30 PM, tao xiao xiaotao...@gmail.com wrote:
I used the one shipped with 0.8.2. It is pretty straightforward to
reproduce the issue.
Here are the steps to reproduce:
1. I have a consumer using high level consumer API with initial settings
offsets.storage=kafka
.
-Todd
On Mon, Feb 16, 2015 at 12:27 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you Todd for your detailed explanation. Currently I export all
metrics to Graphite using the reporter configuration. Is there a way I can
do a similar thing with the offset checker?
On Mon, Feb 16, 2015 at 4:21
the mirrormaker and then spin up a console consumer to read from
the source cluster, I get 0 messages consumed.
Alex
On Sun, Feb 15, 2015 at 3:00 AM, tao xiao xiaotao...@gmail.com
wrote:
Alex,
Are you sure you have data continually being sent to the topic in source
cluster after you
Hi team,
I got NPE when running the latest mirror maker that is in trunk
[2015-01-23 18:55:20,229] INFO
[kafkatopic-1_LM-SHC-00950667-1422010513674-cb0bb562], exception during
rebalance (kafka.consumer.ZookeeperConsumerConnector)
java.lang.NullPointerException
at
It happens every time I shut down the connector. It doesn't block the
shutdown process though.
On Tue, Feb 10, 2015 at 1:09 AM, Guozhang Wang wangg...@gmail.com wrote:
Is this exception transient or consistent and blocking the shutdown
process?
On Mon, Feb 9, 2015 at 3:07 AM, tao xiao xiaotao
Hi team,
I am comparing the differences between
ConsumerConnector.createMessageStreams
and ConsumerConnector.createMessageStreamsByFilter. My understanding is
that createMessageStreams creates x number of threads (x is the number of
threads passed in to the method) dedicated to the specified
When I executed the delete command, the returned information is below:
It shows that kafka-topics.sh does not support the delete parameter.
My package was compiled by myself.
easiest to just monitor MaxLag as that reports the maximum
of all the lag metrics.
On Fri, Feb 13, 2015 at 05:03:28PM +0800, tao xiao wrote:
Hi team,
Is there a metric that shows the consumer lag of a particular consumer
group, similar to what the offset checker provides?
--
Regards
Hi team,
I got java.nio.channels.ClosedByInterruptException when
closing ConsumerConnector using kafka 0.8.2
Here is the exception
2015-02-09 19:04:19 INFO kafka.utils.Logging$class:68 -
[test12345_localhost], ZKConsumerConnector shutting down
2015-02-09 19:04:19 INFO
Hi team,
If I set offsets.storage=kafka can I still use auto.commit.enable to turn
off auto commit and auto.commit.interval.ms to control the commit interval? The
documentation mentions that the above two properties are used to
control offset commits to ZooKeeper.
--
Regards,
Tao
.
On 2/12/15, 7:30 PM, tao xiao xiaotao...@gmail.com wrote:
I used the one shipped with 0.8.2. It is pretty straightforward to
reproduce the issue.
Here are the steps to reproduce:
1. I have a consumer using high level consumer API with initial settings
offsets.storage=kafka
You can get the partition number and offset of the message via
MessageAndMetadata.partition() and MessageAndMetadata.offset().
For your scenario you can turn off auto commit with auto.commit.enable=false
and then commit by yourself after finishing message consumption.
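A minimal sketch of the manual-commit pattern described above, using the old
high-level consumer; the ZooKeeper address, group id, and topic name are
illustrative assumptions:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class ManualCommitConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumed ZooKeeper
            props.put("group.id", "manual-commit-example");   // assumed group id
            props.put("auto.commit.enable", "false");         // commit manually

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test", 1));

            ConsumerIterator<byte[], byte[]> it = streams.get("test").get(0).iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> msg = it.next();
                // Partition and offset of the message, as mentioned above.
                System.out.printf("partition=%d offset=%d%n", msg.partition(), msg.offset());
                // ... process the message, then commit once it is fully handled.
                connector.commitOffsets();
            }
        }
    }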
On Mon, Feb 16, 2015 at 1:40 PM,
to catch a broken consumer, as
well as an active consumer that is just falling behind.
-Todd
On Fri, Feb 13, 2015 at 9:34 PM, tao xiao xiaotao...@gmail.com wrote:
Thanks Joel. But I discovered that both MaxLag and FetcherLagMetrics are always
much smaller than the lag shown in offset
Hi,
In order to get it to work you can turn off the csv reporter.
On Thu, Feb 5, 2015 at 1:06 PM, Xinyi Su xiny...@gmail.com wrote:
Hi,
Today I updated the Kafka cluster from 0.8.2-beta to 0.8.2.0 and ran the Kafka
producer performance test.
The test cannot continue because of some exceptions thrown.
Hi team,
I have two consumer instances with the same group id connecting to two
different topics with 1 partition created for each. One consumer uses
partition.assignment.strategy=roundrobin and the other one uses default
assignment strategy. Both consumers have 1 thread spawned internally and
-localhost-1426605370072-904d6fba-0
On Tue, Mar 17, 2015 at 11:30 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have two consumer instances with the same group id connecting to two
different topics with 1 partition created for each. One consumer uses
partition.assignment.strategy=roundrobin
You can set the producer property retries to a value other than 0. Details can
be found here:
http://kafka.apache.org/documentation.html#newproducerconfigs
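A minimal sketch of a new-producer configuration with retries enabled; the
broker address, topic, and serializer choices are illustrative assumptions:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RetryingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("retries", "3"); // non-zero so transient send failures are retried

            Producer<byte[], byte[]> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("test", "hello".getBytes()),
                (metadata, exception) -> {
                    if (exception != null) {
                        // A send can still fail after all retries are exhausted.
                        exception.printStackTrace();
                    }
                });
            producer.close();
        }
    }

With a non-zero retries value, a failed send may be retried internally before
the callback fires, so the callback only sees the final outcome.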
On Fri, Mar 20, 2015 at 3:01 PM, Samuel Chase samebch...@gmail.com wrote:
Hello Everyone,
In the new Java Producer API, the Callback code in
here is the slide
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
On Sat, Mar 21, 2015 at 2:36 AM, Xiao lixiao1...@gmail.com wrote:
Hi, James,
Thank you for sharing it!
The links for the videos and slides are the same. Could you check the link for
the slides
Hi,
I created a message stream in my consumer using connector
.createMessageStreamsByFilter(new Whitelist("mm-benchmark-test\\w*"), 5); I
have 5 topics in my cluster and each of the topics has only one partition.
My understanding of wildcard stream is that multiple streams are shared
between
different topic name in destination cluster, I mean can I
have different topic names for the source and destination clusters for
mirroring? If yes, how can I map a source topic to a destination topic name?
SunilKalva
On Mon, Mar 9, 2015 at 6:41 AM, tao xiao xiaotao...@gmail.com wrote:
Ctrl+c
I ended up running kafka-reassign-partitions.sh to reassign partitions to
different nodes
On Tue, Mar 10, 2015 at 11:31 AM, sy.pan shengyi@gmail.com wrote:
Hi, tao xiao and Jiangjie Qin
I encountered the same issue; my node had recovered from a high load
problem (caused by other
from the other partitions?
Thanks,
-James
On Feb 11, 2015, at 8:13 AM, Guozhang Wang wangg...@gmail.com wrote:
The new consumer will be released in 0.9, which is targeted for end of
this
quarter.
On Tue, Feb 10, 2015 at 7:11 PM, tao xiao xiaotao...@gmail.com
wrote:
Do you
Hi team,
After reading the source code of AbstractFetcherManager I found out that
the usage of num.consumer.fetchers may not match what is described in the
Kafka doc. My interpretation of the Kafka doc is that the number of
fetcher threads is controlled by the value of
property
consumer to fetch the message on both ends to measure the latency.
Guozhang
On Wed, Mar 4, 2015 at 11:07 PM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
Is there a built-in metric that can measure the end to end latency in MM?
--
Regards,
Tao
--
-- Guozhang
--
Regards,
Tao
Hi team,
I am getting java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off the latest trunk with
commit 8f0003f9b694b4da5fbd2f86db872d77a43eb63f
The way I bring it up is:
bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
A bit more context: I turned on async in producer.properties
On Sat, Mar 7, 2015 at 2:09 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I am getting java.util.IllegalFormatConversionException when running
MirrorMaker with log level set to trace. The code is off the latest trunk with
commit
I think I worked out the root cause.
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset)) should
be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
On Sat, Mar 7, 2015 at 2:12 AM, tao xiao xiaotao...@gmail.com wrote
PM, tao xiao xiaotao...@gmail.com wrote:
I think I worked out the root cause.
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset))
should be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
On Sat, Mar 7, 2015
with
--whitelist you could already specify regex to do filtering.
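As an illustration of that point, a single Java regular expression can express
"everything matching topic.* except topic.10" using negative lookahead. The
pattern below is only a sketch, and whether the mirror maker whitelist accepts
this exact syntax is an assumption to verify:

    import java.util.regex.Pattern;

    public class TopicWhitelistRegexDemo {
        public static void main(String[] args) {
            // Assumed pattern: match topic.<anything> but not topic.10 exactly.
            Pattern p = Pattern.compile("topic\\.(?!10$).*");
            System.out.println(p.matcher("topic.1").matches());   // true
            System.out.println(p.matcher("topic.100").matches()); // true
            System.out.println(p.matcher("topic.10").matches());  // false
        }
    }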
On Thu, Mar 12, 2015 at 5:56 AM, tao xiao xiaotao...@gmail.com wrote:
Hi Guozhang,
It was meant to be topicfilter, not topic-count. Sorry for the confusion.
What I want to achieve is to pass my own customized topicfilter
of
blacklist and whitelist I can easily achieve this by having something like
--whitelist topic.* --blacklist topic.1
On Thu, Mar 12, 2015 at 9:10 PM, tao xiao xiaotao...@gmail.com wrote:
something like dynamic filtering that can be updated at runtime or deny
all but allow a certain set of topics
.
Guozhang
On Thu, Mar 12, 2015 at 6:10 AM, tao xiao xiaotao...@gmail.com wrote:
something like dynamic filtering that can be updated at runtime or deny
all
but allow a certain set of topics that cannot be specified easily by
regex
On Thu, Mar 12, 2015 at 9:06 PM, Guozhang Wang wangg
Hi,
I have a use case where I need to consume a list of topics whose names
match the pattern topic.*, except for one, topic.10. Is there a way
that I can combine the use of whitelist and blacklist so that I can achieve
something like accept all topics matching the regex topic.* but exclude
org.apache.kafka.clients.producer.Producer is the new api producer
On Tue, Mar 10, 2015 at 11:22 PM, Corey Nolet cjno...@gmail.com wrote:
Thanks Jiangjie! So what version is considered the new API? Is that the
javaapi in version 0.8.2?
On Mon, Mar 9, 2015 at 2:29 PM, Jiangjie Qin
I actually meant whether we can achieve this in mirror maker.
On Tue, Mar 10, 2015 at 10:52 PM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I have a use case where I need to consume a list of topics whose names
match the pattern topic.*, except for one, topic.10. Is there a way
that I can
at 7:11 PM, tao xiao xiaotao...@gmail.com wrote:
Do you know when the new consumer API will be publicly available?
On Wed, Feb 11, 2015 at 10:43 AM, Guozhang Wang wangg...@gmail.com
wrote:
Yes, it can get stuck. For example, AC and BC are processed by two
different processes and AC
On Mar 11, 2015, at 5:00 PM, tao xiao xiaotao...@gmail.com mailto:
xiaotao...@gmail.com wrote:
Fetcher threads are created on a per-broker basis; this ensures at least one
fetcher thread per broker. A fetcher thread sends the broker a fetch
request to
ask for all partitions. So if A, B, C
from a topic
after you stop consuming from it?
Jiangjie (Becket) Qin
On 3/12/15, 8:05 AM, tao xiao xiaotao...@gmail.com wrote:
Yes, you are right. A dynamic topicfilter is more appropriate, where I can
filter topics at runtime via some kind of interface, e.g. JMX
On Thu, Mar 12, 2015 at 11:03
since the offsets will be
committed. If you change the filtering dynamically back to whitelist these
topics, you will lose the data that gets consumed during the period of the
blacklist.
Guozhang
On Thu, Mar 12, 2015 at 10:01 PM, tao xiao xiaotao...@gmail.com wrote:
Yes, that will work
will not achieve your goal, since it is still static.
Guozhang
On Thu, Mar 12, 2015 at 6:30 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you Guozhang for your advice. A dynamic topic filter is what I need
so that I can stop a topic consumption when I need to at runtime.
On Thu, Mar 12
11, 2015 at 11:59 PM, tao xiao xiaotao...@gmail.com wrote:
The topic list is not specified in consumer.properties and I don't think
there is any property in consumer config that allows us to specify what
topics we want to consume. Can you point me to the property if there is
any?
On Thu
The reason you need to use a.getBytes() is that the default serializer.class
is kafka.serializer.DefaultEncoder, which takes byte[] as input. An array's
hash code is not based on the equality of its elements, so every newly
created byte array hashes differently, which is the case in your sample
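A small standalone illustration of the array hash-code behaviour mentioned
above (not code from the original thread):

    import java.util.Arrays;

    public class ArrayHashCodeDemo {
        public static void main(String[] args) {
            byte[] k1 = "a".getBytes();
            byte[] k2 = "a".getBytes();
            // Arrays inherit Object.hashCode(), which is identity-based, so two
            // arrays with identical contents generally report different hash codes.
            System.out.println(k1.hashCode() == k2.hashCode()); // false in practice
            System.out.println(Arrays.equals(k1, k2));          // true: same contents
        }
    }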
Ctrl+c is clean shutdown. kill -9 is not
On Mon, Mar 9, 2015 at 2:32 AM, Alex Melville amelvi...@g.hmc.edu wrote:
What does a clean shutdown of the MM entail? So far I've just been using
Ctrl + C to send an interrupt to kill it.
Alex
On Sat, Mar 7, 2015 at 10:59 PM, Jiangjie Qin
:
Tao,
In MM people can pass in consumer configs, in which people can specify
consumption topics, either in regular topic list format or whitelist /
blacklist. So I think it already does what you need?
Guozhang
On Tue, Mar 10, 2015 at 10:09 PM, tao xiao xiaotao...@gmail.com wrote:
Thank
Did you stop mirror maker?
On Thu, Mar 12, 2015 at 8:27 AM, Saladi Naidu naidusp2...@yahoo.com.invalid
wrote:
We have 3 DCs and created a 5-node Kafka cluster in each DC, connecting
these 3 DCs using Mirror Maker for replication. We were conducting
performance testing using the Kafka Producer
Hi community,
I wanted to know if the solution I supplied can fix the
IllegalMonitorStateException
issue. Our work is blocked on this and we'd like to proceed ASAP. Sorry for
bothering you.
On Mon, Mar 23, 2015 at 4:32 PM, tao xiao xiaotao...@gmail.com wrote:
I think I worked out the answer
. But I don't know if this is a necessary change just
because of the case you saw.
Jiangjie (Becket) Qin
On 3/24/15, 5:05 PM, tao xiao xiaotao...@gmail.com wrote:
The other question I have is that the consumer client is unaware of
the health status of the underlying fetcher thread
Thanks Jiangjie. Can I reuse KAFKA-1997 or should I create a new ticket?
On Wed, Mar 25, 2015 at 7:58 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Hi Xiao,
I think the fix for IllegalStateException is correct.
Can you also create a ticket and submit a patch?
Thanks.
Jiangjie (Becket
pick them up while fetcher thread is down.
On Wed, Mar 25, 2015 at 8:00 AM, tao xiao xiaotao...@gmail.com wrote:
Thanks Jiangjie. Can I reuse KAFKA-1997 or should I create a new ticket?
On Wed, Mar 25, 2015 at 7:58 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Hi Xiao,
I think the fix
You can use kafka-console-consumer to consume the topic from the beginning:
kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
--from-beginning
On Thu, Mar 26, 2015 at 12:17 AM, Victor L vlyamt...@gmail.com wrote:
Can someone let me know how to dump contents of topics?
I have
xiao xiaotao...@gmail.com wrote:
Do you have data being sent to testtopic? By default mirror maker only
consumes data being sent after it taps into the topic. You need to keep
sending data to the topic after the mirror maker connection is established.
If
you want to change the behavior you can
Hi,
I was running a mirror maker and got
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker.
[2015-03-21 02:11:53,069] INFO Reconnect due to socket error:
java.io.EOFException: Received -1 when reading from
PM, Harsha ka...@harsha.io wrote:
you can increase num.replica.fetchers (by default it is 1) and also try
increasing replica.fetch.max.bytes
-Harsha
On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
Hi team,
I had a replica node that was shut down improperly because it ran out of disk
space. I
Hi team,
I had a replica node that was shut down improperly because it ran out of disk
space. I managed to clean up the disk and restarted the replica, but the
replica has never caught up with the leader since then, as shown below:
Topic:test PartitionCount:1 ReplicationFactor:3 Configs:
Topic: test Partition: 0
:15 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
tests on the two brokers to verify how the leader got elected. Here are the
steps:
1. started 2 brokers
2. created a topic with partition=1 and replication-factor
will not happen.
Jiangjie (Becket) Qin
On 3/2/15, 7:16 PM, tao xiao xiaotao...@gmail.com wrote:
Since I reused the same consumer group to consume the messages after step
6
there was no data loss. But if I create a new consumer group,
for sure the new consumer will suffer data
You can set the consumer config auto.offset.reset=largest
Ref: http://kafka.apache.org/documentation.html#consumerconfigs
On Tue, Mar 3, 2015 at 8:30 PM, Achanta Vamsi Subhash
achanta.va...@flipkart.com wrote:
Hi,
We are using HighLevelConsumer and when a new subscription is added to the
Hi team,
I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
tests on the two brokers to verify how the leader got elected. Here are the
steps:
1. started 2 brokers
2. created a topic with partition=1 and replication-factor=2. Now broker 1
was elected as leader
3. sent 1000
need to know the total
number
of partitions before I call Producer.send().
Alex
On Thu, Feb 26, 2015 at 7:32 PM, tao xiao xiaotao...@gmail.com
wrote:
Gaurav,
You can get the partition number the message belongs to via
MessageAndMetadata.partition
Hi team,
Is there a built-in metric that can measure the end to end latency in MM?
--
Regards,
Tao
Thanks guys. With unclean.leader.election.enable set to false the issue is
fixed.
On Tue, Mar 3, 2015 at 2:50 PM, Gwen Shapira gshap...@cloudera.com wrote:
of course :)
unclean.leader.election.enable
On Mon, Mar 2, 2015 at 9:10 PM, tao xiao xiaotao...@gmail.com wrote:
How do I achieve point
Gaurav,
You can get the partition number the message belongs to via
MessageAndMetadata.partition()
On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao j...@confluent.io wrote:
The partition api is exposed to the consumer in 0.8.2.
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal
Both consumer-1 and consumer-2 are properties of the source clusters mirror
maker transfers data from. Mirror maker is designed to be able to consume
data from N sources (N >= 1) and transfer data to one destination cluster.
You are free to supply as many consumer properties as you want to instruct
, TimeUnit.MILLISECONDS)
}
On Mon, Mar 23, 2015 at 1:50 PM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I was running a mirror maker and got
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker.
[2015-03-21 02:11
LinkedIn has an excellent tool that monitors lag, data loss, data duplication,
etc. Here is the reference:
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
It is not open sourced though.
On Mon, Mar 23, 2015 at 3:26 PM, sunil kalva kalva.ka...@gmail.com wrote:
:43 PM, nitin sharma kumarsharma.ni...@gmail.com
wrote:
Hi Xiao,
I have finally got JMX monitoring enabled for my Kafka nodes in the test
environment and here is what I observed.
I was monitoring MBeans under the kafka.consumer domain of the JVM running the
Kafka Mirror Maker process
,
44.0213, 16683, 31716.7300
Regards,
Nitin Kumar Sharma.
On Mon, Apr 13, 2015 at 3:51 PM, tao xiao xiaotao...@gmail.com wrote:
num.consumer.fetchers means the max number of fetcher threads that can be
spawned. It doesn't necessarily mean you can get as many fetcher threads
as
you specify
Hi team,
I observed java.lang.IllegalMonitorStateException thrown
from AbstractFetcherThread in mirror maker when it is trying to build the
fetch request. Below is the error:
[2015-04-23 16:16:02,049] ERROR
[ConsumerFetcherThread-group_id_localhost-1429830778627-4519368f-0-7],
Error due to
Hi, Joong,
Please check the following two links:
-
https://cwiki.apache.org/confluence/display/KAFKA/KIP-3+-+Mirror+Maker+Enhancement
-
https://cwiki.apache.org/confluence/display/KAFKA/KIP-8+-+Add+a+flush+method+to+the+producer+API
They might help you understand the problem.
Cheers,
Xiao Li