Ok, I've finally figured this out---it was a firewall/routing issue, but
between zookeeper and the broker on the same machine, not between my client
and the machine hosting zookeeper and the broker. Zookeeper uses the
external IP address it receives from the advertised.host.name to connect
back
Sorry for the late reply.
But I was using 0.8.2.
~
Sandeep
Hi,
I have a kafka cluster of three nodes.
I have constructed a topic with the following command:
bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 3 --topic testv1p3
So the topic testv1p3 has 3 partitions and replication factor is 1.
Here is the
Sandeep, you need to have multiple replicas. Having a single replica
means you have one copy of the data, and if that machine goes down there isn't
another replica that can take over and become the leader for that partition.
-Harsha
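For example, the topic could be recreated with a replication factor of 3 so each partition has a copy on every broker. A sketch using the same CLI as above; the topic name here is hypothetical:

```shell
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 3 --partitions 3 --topic testv3p3
```

With three replicas, losing one broker still leaves two in-sync copies, one of which can be elected leader.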
From: Sandeep Bishnoi
Thanks Bhavesh.
I understand that to get exactly once processing of a message requires
some de-duplication. What I'm saying, is that the current high level
consumer, with automatic offset commits enabled, gives neither at most
once nor at least once guarantees: A consumer group might get
Thank you for the reply.
I am using kafka_2.10-0.8.2.1, and I didn't change the log-related settings in Kafka.
bit1...@163.com
From: Joe Stein
Date: 2015-06-19 13:43
To: users
Subject: Re: Closing socket connection to /192.115.190.61.
(kafka.network.Processor)
What version of Kafka are you using? This
How are you tracking the offsets that you manually commit? I.e. where do
you get the metadata for the consumed messages?
On Thu, Jun 18, 2015 at 11:21 PM, noah iamn...@gmail.com wrote:
We are in a situation where we need at least once delivery. We have a
thread that pulls messages off the
Hi,
in older Kafka versions where offsets were stored in Zookeeper - I could
manually update the value of the Zookeeper's node:
/consumers/consumer_group_name/offsets/topic_name/partition_number/offset_value.
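For example, rewinding partition 0 could be done from the Zookeeper CLI; a sketch where the group name, topic name, and offset value are all hypothetical:

```shell
bin/zookeeper-shell.sh localhost:2181 set \
  /consumers/my_group/offsets/my_topic/0 42
```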
In 0.8.2.1 - there are no values in offsets anymore, but there is a new topic,
Hi Marina,
Check slide 32 in this presentation
http://www.slideshare.net/jjkoshy/offset-management-in-kafka.
Hope this helps.
Thanks,
Raja.
On Fri, Jun 19, 2015 at 9:43 AM, Marina ppi...@yahoo.com.invalid wrote:
Thanks, Stevo, for the quick reply,
Yes, I understand how to do this
Hi:
You've said this on the documentation page
http://kafka.apache.org/documentation.html#producerapi:
As of the 0.8.2 release we encourage all new development to use the new Java
producer. This client is production tested and generally both faster and more
fully featured than the previous Scala client.
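For reference, using the new Java producer looks roughly like this; a sketch assuming a broker on localhost:9092, a hypothetical topic name, and the kafka-clients dependency on the classpath:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and serializers are assumptions for this sketch.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}
```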
Thanks, Stevo, for the quick reply,
Yes, I understand how to do this programmatically - but I would like to be able
to do this manually from a command line, just as before I was able to do this
in the Zookeeper shell. I don't want to write and run a Java app just to set an
offset :)
[unless,
lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        List<String> collect = rdd.collect();
        for (String data : collect) {
            try {
                // save data in the log.txt file
                Path filePath = Paths
                        .get(rdd save file);
                if (!Files.exists(filePath)) {
Basically it boils down to the fact that distributed computers and their
networking are not reliable. [0] So, in order to ensure that messages do
in fact get across, there are cases where duplicates have to be sent.
Take for example this simple experiment, given three servers A, B, and C. A
sends a
Check that you get the data from the Kafka producer:
lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        List<String> collect = rdd.collect();
From my understanding of the code (admittedly very limited), the offset in
OffsetAndMetadata corresponds to the start of the message just obtained
from iterator.next(). So if you commit that, a restarted consumer should
get that message again. So it should actually continue at the previous
message
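That behavior can be pictured with plain arithmetic; a self-contained sketch (not Kafka code) that treats a partition as a list, where the committed offset is the position a restarted consumer resumes from:

```java
import java.util.List;

public class OffsetSemantics {
    /** Messages re-consumed after a restart, given the committed offset. */
    public static List<String> resume(List<String> partition, long committedOffset) {
        return partition.subList((int) committedOffset, partition.size());
    }

    public static void main(String[] args) {
        List<String> partition = List.of("m0", "m1", "m2");
        long lastConsumed = 1; // offset of the last message processed

        // Committing the last message's own offset replays it on restart...
        System.out.println(resume(partition, lastConsumed));     // prints [m1, m2]
        // ...while committing offset + 1 resumes at the next message.
        System.out.println(resume(partition, lastConsumed + 1)); // prints [m2]
    }
}
```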
Hello Marina,
There's Kafka API to fetch and commit offsets
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
- maybe it will work for you.
Kind regards,
Stevo Slavic.
On Fri, Jun 19, 2015 at 3:23 PM, Marina ppi...@yahoo.com.invalid wrote:
Hi,
Sounds good, thanks for the clarification.
Jan
On 17 June 2015 at 22:09, Jason Gustafson ja...@confluent.io wrote:
We have a couple open tickets to address these issues (see KAFKA-1894 and
KAFKA-2168). It's definitely something we want to fix.
On Wed, Jun 17, 2015 at 4:21 AM, Jan Stette
It is the value we get from calling MessageAndMetadata#offset() for the
last message processed. The MessageAndMetadata instance comes from the
ConsumerIterator.
On Fri, Jun 19, 2015 at 2:31 AM Carl Heymann ch.heym...@gmail.com wrote:
How are you tracking the offsets that you manually commit?
Hi,
We have been solving this very problem in Hermes. You can see what we came
up with by examining the classes located here:
https://github.com/allegro/hermes/tree/master/hermes-consumers/src/main/java/pl/allegro/tech/hermes/consumers/consumer/offset
We are quite sure this gives us at-least-once
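In general, at-least-once delivery implies consumers must tolerate duplicates. A minimal, self-contained sketch of ID-based de-duplication on the consumer side (the message IDs are hypothetical, and this is not the Hermes implementation):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Deduplicator {
    private final Set<String> seen = new HashSet<>();

    /** Returns true only the first time a given message id is observed. */
    public boolean accept(String messageId) {
        return seen.add(messageId);
    }

    /** Processes a redelivered stream, keeping only first occurrences. */
    public static List<String> process(List<String> ids) {
        Deduplicator dedup = new Deduplicator();
        List<String> delivered = new ArrayList<>();
        for (String id : ids) {
            if (dedup.accept(id)) {
                delivered.add(id);
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        // "m2" is redelivered, as at-least-once permits; it is kept only once.
        System.out.println(process(List.of("m1", "m2", "m2", "m3"))); // prints [m1, m2, m3]
    }
}
```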
Duplicate messages might be due to network issues, but it is worthwhile to
dig deeper.
It sounds like the problem happens when you have 3 partitions and 3
consumers. Based on my understanding (still learning), each consumer should
have its own partition to consume. Can you verify this while your
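The one-partition-per-consumer expectation can be sketched with simplified range-style assignment (a self-contained illustration, not Kafka's actual assignor):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeAssignment {
    /** Assigns partition ids 0..numPartitions-1 across consumers in contiguous ranges. */
    public static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> result = new ArrayList<>();
        int perConsumer = numPartitions / numConsumers;
        int extra = numPartitions % numConsumers; // leading consumers take one extra
        int next = 0;
        for (int c = 0; c < numConsumers; c++) {
            int count = perConsumer + (c < extra ? 1 : 0);
            List<Integer> range = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                range.add(next++);
            }
            result.add(range);
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 partitions over 3 consumers: each consumer owns exactly one.
        System.out.println(assign(3, 3)); // prints [[0], [1], [2]]
    }
}
```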
I don't think this got changed until after 0.8.2. I believe the change is
still in trunk and not a released version. We haven't even picked it up
internally at LinkedIn yet.
-Todd
On Fri, Jun 19, 2015 at 12:03 AM, bit1...@163.com bit1...@163.com wrote:
Thank you for the reply.
I am using
Yup, my bad. I really thought we got that one in, but it is on trunk, so 0.8.3
it is.
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
On Fri, Jun 19, 2015 at 8:52 AM, Todd Palino tpal...@gmail.com wrote:
I don't think this got changed until