Hi, experts
Again, we are still having the issue of losing data: we send 5000
records, but only find 4500 records on the brokers. We did set required.acks=-1
to make sure all brokers ack, but that only causes long latency; it does not
cure the data loss.
thanks
On Mon, Jan 5, 2015 at 9:55 AM,
You should never store your log files in /tmp; please change that.
acks=-1 is what you should be using if you want to guarantee messages are
saved. You should not be seeing high latencies (unless a few milliseconds
is high for you).
Are you using the sync or async producer? What version of
Try calling .get() on the future returned by the new producer. That should
guarantee the message has made it to Kafka.
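As a minimal sketch of that pattern with the new (0.8.2+) Java producer: the broker address, topic name, and serializer settings below are illustrative placeholders, not values from this thread.

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSendSketch {
    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.100.70.128:9092"); // placeholder broker
        props.put("acks", "all"); // new-producer equivalent of required.acks=-1
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() is asynchronous; calling get() blocks until the broker
            // acknowledges the write, or throws if it failed. A get() that
            // returns normally means the message actually reached Kafka.
            RecordMetadata md = producer.send(
                new ProducerRecord<>("test-topic", "key", "value")).get();
            System.out.println("acked at offset " + md.offset());
        } finally {
            producer.close();
        }
    }
}
```

If you fire off send() calls without ever waiting on the returned futures, failed sends are silently dropped, which looks exactly like "5000 sent, 4500 stored".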
Thanks,
Mayuresh
On Tue, Jan 6, 2015 at 4:21 PM, Sa Li sal...@gmail.com wrote:
@Sa,
required.acks is a producer-side configuration. Setting it to -1 means
requiring an ack from all in-sync replicas, not just the leader.
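As a sketch, the relevant producer-side settings would look like the following (property names are per the 0.8.x-era producers; the broker address is just a placeholder):

```properties
# old (Scala) producer, 0.8.x: -1 waits for acks from all in-sync replicas
request.required.acks=-1
metadata.broker.list=10.100.70.128:9092

# new (Java) producer, 0.8.2+: the equivalent setting is
# acks=all
```

Note these go in the producer's config, not in the broker server.properties shown below.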
On Fri, Jan 2, 2015 at 1:51 PM, Sa Li sal...@gmail.com wrote:
Thanks a lot, Tim, this is the config of brokers
--
broker.id=1
port=9092
host.name=10.100.70.128
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
auto.leader.rebalance.enable=true
What's your configured required.acks? And are you also waiting for all
your messages to be acknowledged?
The new producer returns futures, but you still need to wait for those
futures to complete.
Tim
On Fri, Jan 2, 2015 at 9:54 AM, Sa Li sal...@gmail.com wrote:
Hi, all
We are