I set the config num.replica.fetchers=6, which solved this.
Thanks,
Lax
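For anyone hitting the same replica lag, the fix above is a broker-side setting; a sketch of where it goes (6 is the value that worked here, and a sensible value depends on partition count and network):

```properties
# server.properties (each broker): extra replica fetcher threads increase
# parallelism when followers copy data from leaders, reducing follower lag.
num.replica.fetchers=6
```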
Date: Wed, 31 Dec 2014 15:51:57 -0800
Subject: Re: kafka 0.8.1.1 delay in replica data
From: j...@confluent.io
To: users@kafka.apache.org
Is the system time in sync between the two hosts?
Thanks,
Jun
On Tue, Dec 23,
As the Kafka docs say, "Only committed messages are ever given out to the
consumer."
If a follower does not copy the message in time but is still in the ISR,
will consumers consume the message from the leader broker?
Thanks,
Lax
Hi,
We use kafka_2.10-0.8.1.1 in our server. Today we found disk space alert.
We found that many Kafka data files have been deleted but are still held
open by Kafka, such as:
_yellowpageV2-0/68170670.log (deleted)
java 8446 root 724u REG 253,2 536937911
26087362
Hello,
I would like to write a multi-threaded consumer for the high-level
consumer in Kafka 0.8.1. I have found two ways that seem feasible
while keeping the guarantee that messages in a partition are processed
in order. I would appreciate any feedback this list has.
Option 1
- Create
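Independent of which option was being proposed, the ordering guarantee under discussion can be sketched generically. This is a toy model, not the Kafka 0.8.1 consumer API: one serial work queue per partition keeps per-partition order while partitions proceed in parallel.

```python
# Toy model of per-partition ordered processing with concurrent partitions.
import queue
import threading

NUM_PARTITIONS = 4
results = {p: [] for p in range(NUM_PARTITIONS)}
queues = {p: queue.Queue() for p in range(NUM_PARTITIONS)}

def worker(partition):
    # Draining a single queue serially preserves order within a partition.
    while True:
        msg = queues[partition].get()
        if msg is None:          # sentinel: shut down this worker
            break
        results[partition].append(msg)

threads = [threading.Thread(target=worker, args=(p,))
           for p in range(NUM_PARTITIONS)]
for t in threads:
    t.start()

# Simulate a fetcher handing messages to the per-partition queues.
for i in range(12):
    queues[i % NUM_PARTITIONS].put("msg-%d" % i)
for p in range(NUM_PARTITIONS):
    queues[p].put(None)
for t in threads:
    t.join()

print(results[0])  # → ['msg-0', 'msg-4', 'msg-8']
```

The key property: messages for one partition are only ever consumed by one thread, so intra-partition order survives while different partitions make progress concurrently.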
Hi, All
I am doing a performance test on our new Kafka production server, but after
sending some messages (even fake messages using bin/kafka-run-class.sh
org.apache.kafka.clients.tools.ProducerPerformance), it throws a connection
error and shuts down the brokers. After that, I see such
Hi,
Your disk is full. You should probably have something that checks/monitors
disk space and alerts you when it's full.
Maybe you can point Kafka to a different, larger disk or partition.
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Continuing this issue: when I restart the server, like
bin/kafka-server-start.sh config/server.properties
it fails to start, like
[2015-01-06 20:00:55,441] FATAL Fatal error during KafkaServerStable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
I'm keen to hear about how to work one's way out of a filled partition
since I've run into this many times after having tuned retention bytes or
retention (time?) incorrectly. The proper path to resolving this isn't
obvious based on my many harried searches through documentation.
I often end up
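One hedged way out of a full partition (a sketch assuming the 0.8.x topic-level overrides; flag names have varied across versions, and the topic name and ZooKeeper address below are placeholders): temporarily tighten retention on the largest topic so the cleaner frees old segments, then remove the override.

```shell
# Temporarily shrink retention on a topic so Kafka deletes old segments
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-big-topic --config retention.ms=3600000
# ...once disk frees up, remove the override to return to broker defaults
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-big-topic --deleteConfig retention.ms
```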
There are two parts to this
1) How to prevent Kafka from filling up disks, which
https://issues.apache.org/jira/browse/KAFKA-1489 is trying to deal with. (I
set the ticket to unassigned just now since I don't think anyone is working
on it and it was assigned by default; could be wrong though, so assign
The 0.8.2 branch is our release branch. We won't update any existing
released version and instead, will use a new version for every release. So
0.8.2-beta will be changed to 0.8.2-beta-2 and eventually just 0.8.2.
Thanks,
Jun
On Sun, Jan 4, 2015 at 5:12 PM, Shannon Lloyd shanl...@gmail.com
Thanks the reply, the disk is not full:
root@exemplary-birds:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 133G 3.4G 123G 3% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 32G 4.0K 32G 1% /dev
tmpfs 6.3G 764K 6.3G 1% /run
Hi, All
I am running performance test on kafka, the command
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 500 100 -1 acks=1 bootstrap.servers=
10.100.10.101:9092 buffer.memory=67108864 batch.size=8196
Since we send 50 billion messages to the brokers, it
Hi, All
I am testing and making changes to server.properties. I wonder whether I
also need to change the values in the consumer and producer properties.
Here is the consumer.properties:
zookeeper.connect=10.100.98.100:2181,10.100.98.101:2181,10.100.98.102:2181
# timeout in ms for connecting to
Hi
Having read a lot about Kafka and its use at LinkedIn, I'm still unsure
whether Kafka can be used, with some mindset change for sure, as a
general-purpose data store.
For example, would someone use Kafka to enforce a unique constraint?
A simple use case, in the case of LinkedIn, is unicity
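To make the unicity idea concrete, here is a toy Python model (illustrative only, no Kafka involved) of how a compacted, keyed log behaves like a last-write-wins key-value store, and why compaction alone does not enforce uniqueness at write time:

```python
# Toy model of log compaction: the latest record per key wins, which is
# what lets a compacted topic back a "current state" store. Note that a
# writer must still consult current state first if it wants to reject
# duplicates -- compaction itself happily accepts repeated keys.
def compact(log):
    """Collapse a [(key, value), ...] log to the latest value per key."""
    state = {}
    for key, value in log:
        state[key] = value  # later writes overwrite earlier ones
    return state

log = [("user:1", "alice@x.com"),
       ("user:2", "bob@x.com"),
       ("user:1", "alice@y.com")]  # duplicate key: accepted, not rejected
state = compact(log)
print(state["user:1"])  # → alice@y.com (latest write wins)
```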
Hi, guys,
Want to confirm whether mirrormaker supports different versions of
Kafka. For example, if DC1 uses Kafka 0.8 and DC2 uses Kafka 0.7, can DC1
mirror DC2's messages? It looks like this cannot work in my local test.
If it does not work, are there any candidate ways for replicating msgs
Hi, All
I am running a C# producer to send messages to kafka (3 nodes cluster), but
have such errors:
[2015-01-06 16:09:51,143] ERROR Closing socket for /10.100.70.128 because
of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
at
Hi, experts
Again, we are still having issues with losing data: we send 5000
records but only find 4500 records on the brokers. We did set
required.acks=-1 to make sure all brokers ack, but that only caused long
latency and did not cure the data loss.
thanks
On Mon, Jan 5, 2015 at 9:55 AM,
You should never be storing your log files in /tmp; please change that.
Ack = -1 is what you should be using if you want to guarantee messages are
saved. You should not be seeing high latencies (unless a few milliseconds
is high for you).
Are you using sync or async producer? What version of
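For the old (Scala) producer in 0.8.1, the sync + acks combination recommended above looks like this in producer.properties (property names from the 0.8 producer configs):

```properties
# producer.properties: block until all in-sync replicas acknowledge
request.required.acks=-1
# synchronous producer: send() waits for the broker's response
producer.type=sync
```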
BTW, I found that the /kafka/logs are also getting bigger and bigger, like
controller.log and state-change.log. Should I launch a cron job to clean
them up regularly, or is there a way to delete them automatically?
thanks
AL
On Tue, Jan 6, 2015 at 2:01 PM, Sa Li sal...@gmail.com wrote:
Hi, All
We fix the
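On the growing controller.log / state-change.log question: instead of a cron job, Kafka's own log4j.properties can cap them. A sketch, assuming the stock appender names (Kafka ships DailyRollingFileAppender by default; switching to a size-capped RollingFileAppender is a local change, shown here for the controller log only):

```properties
# log4j.properties: cap controller.log instead of rolling daily forever
log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.MaxFileSize=10MB
log4j.appender.controllerAppender.MaxBackupIndex=5
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```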
Hi, All
We fixed the problem; I'd like to share what the problem was in case
someone comes across similar issues. We added a data drive for each node
(/dev/sdb1), but specified the wrong path in server.properties, which means
the data was written to the wrong drive (/dev/sda2), quickly eating up all
the complete error message:
-su: cannot create temp file for here-document: No space left on device
OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory file:
/tmp/hsperfdata_root/19721
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
[2015-01-06
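The JVM hint in that error can be applied through KAFKA_OPTS before starting the broker (the path below is an example; point it at a volume with free space):

```shell
# Move the JVM temp dir off the full /tmp before starting Kafka
export KAFKA_OPTS="-Djava.io.tmpdir=/data/tmp"
bin/kafka-server-start.sh config/server.properties
```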
Hi,
I’m hitting a strange problem using 0.8.2-beta and just a single kafka broker
on CentOS 6.5.
A percentage of my topic create attempts are randomly failing and leaving the
new topic in a state in which it can not be used due to “partition doesn’t
exist” errors as seen in server.log below.
Log compaction, though, allows it to work as a data store quite well for
some use cases. It's exactly why I started looking hard at Kafka lately.
The general idea is quite simple. Rather than maintaining only recent
log entries in the log and throwing away old log segments we maintain
the most
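For reference, compaction is a per-topic setting (a sketch using the 0.8.1+ topic-level config; the topic name and ZooKeeper address are placeholders, and the broker must also have the log cleaner enabled):

```shell
# Topic-level: retain the latest record per key instead of deleting by time
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic state-topic \
  --partitions 1 --replication-factor 1 --config cleanup.policy=compact
# Broker-side (server.properties): the cleaner thread must be on
#   log.cleaner.enable=true
```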
Hi,
We are using mirrormaker to replicate data between two kafka clusters in
different datacenters, connected by fiber, 1G bandwidth, ~35ms latency.
We found that the MessageIn JMX metric for the broker in the destination
cluster is greater than for the broker in the source cluster.
Our configuration like
Have you tried using the built-in stress test scripts?
bin/kafka-producer-perf-test.sh
bin/kafka-consumer-perf-test.sh
Here's how I stress tested them -
nohup ${KAFKA_HOME}/bin/kafka-producer-perf-test.sh --broker-list
${KAFKA_SERVERS} --topic ${TOPIC_NAME} --new-producer --threads 16 --messages
Hello,
I have some of my own test cases very similar to the ones
in ProducerFailureHandlingTest.
I moved to the new producer against 0.8.1.1 and all of my test cases around
disconnects fail. Moving to 0.8.2-beta on the server side, things succeed.
Here is an example test:
- Healthy
Try doing .get() on the future returned by the new producer. It should
guarantee that the message has made it to Kafka.
Thanks,
Mayuresh
On Tue, Jan 6, 2015 at 4:21 PM, Sa Li sal...@gmail.com wrote:
Hi, experts
Again, we still having the issues of losing data, see we see data 5000
records, but
Did you set offsets.storage to kafka in the consumer of mirror maker?
Thanks,
Jun
On Mon, Jan 5, 2015 at 3:49 PM, svante karlsson s...@csi.se wrote:
I'm using 0.8.2-beta and I'm trying to push data with the mirrormaker tool
from several remote sites to two datacenters. I'm testing this from a
The consumer always fetches data from the leader broker.
Thanks,
Jun
On Tue, Jan 6, 2015 at 2:50 AM, chenlax lax...@hotmail.com wrote:
as the kafka doc Only committed messages are ever given out to the
consumer. .
if followers does not copy the message on time and followers are in ISR,
Do you mean that the Kafka broker still holds a file handle on a deleted
file? Do you see those files being deleted in the Kafka log4j log?
Thanks,
Jun
On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao zhaoyong...@gmail.com wrote:
Hi,
We use kafka_2.10-0.8.1.1 in our server. Today we found disk
Not sure exactly what's happening there. In any case, 0.7 is really old.
You probably want to upgrade to the latest 0.8 release.
Thanks,
Jun
On Tue, Jan 6, 2015 at 12:16 AM, tao li tust.05102...@gmail.com wrote:
Hi,
We are using mirrormaker to replicate data between two kafka clusters in
Ok, I could make the example work.
The problem was that on the wiki page there was no value for
zk.connectiontimeout.ms. I inserted a value for that property (value = 10)
and the example worked.
Also, I realized my mail client messed up the earlier message.
Here's the URL for the wiki page I
Only committed messages are made available to the consumer. The consumer
will always read from the leader. The message is said to be committed (the
high water mark is advanced) only when all the replicas in the ISR have
received it.
Thanks,
Mayuresh
On Tue, Jan 6, 2015 at 3:41 AM, chenlax
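The committed/high-watermark rule can be sketched as a toy Python model (the names LEO/HW below are illustrative, not Kafka internals): the high watermark is the minimum log-end-offset across the ISR, and consumers only ever see offsets below it.

```python
# Toy model: a message is "committed" once every in-sync replica has it,
# i.e. once the high watermark (HW) has advanced past its offset.
def high_watermark(isr_leos):
    """HW = smallest log-end-offset (LEO) among in-sync replicas."""
    return min(isr_leos.values())

def committed_messages(leader_log, isr_leos):
    """Consumers may read only offsets strictly below the HW."""
    return leader_log[:high_watermark(isr_leos)]

leader_log = ["m0", "m1", "m2", "m3", "m4"]
# follower-2 lags: it has replicated only the first 3 messages
isr_leos = {"leader": 5, "follower-1": 5, "follower-2": 3}
print(committed_messages(leader_log, isr_leos))  # → ['m0', 'm1', 'm2']
```

This is why a lagging follower that is still in the ISR holds back the HW: m3 and m4 exist on the leader but are invisible to consumers until follower-2 catches up.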
You can track the blockers here
https://issues.apache.org/jira/browse/KAFKA-1841?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.8.2%20AND%20resolution%20%3D%20Unresolved%20AND%20priority%20%3D%20Blocker%20ORDER%20BY%20key%20DESC.
We are waiting on follow up patches for 2 JIRAs which are
Paul,
That behavior is currently expected, see
https://issues.apache.org/jira/browse/KAFKA-1788. There are currently no
client-side timeouts in the new producer, so the message just sits there
forever waiting for the server to come back so it can try to send it.
If you already have tests for a