The Go Kafka Client also supports offset storage in ZooKeeper and Kafka
https://github.com/stealthly/go_kafka_client/blob/master/docs/offset_storage.md
and has two other strategies for partition ownership with a consensus
server (it currently uses ZooKeeper; Consul support is planned for the
near future).
That's part of the new consumer API that hasn't been released yet. The API
happens to be included in the 0.8.2.* artifacts because it is under
development, but isn't yet released -- it hasn't been mentioned in the
release notes, nor is it in the official documentation:
I found two problems:
1. The producer wasn't passing in a partition key, so not all partitions
were getting data.
2. After fixing the producer, I could see all threads getting data
consistently, and the shutdown method was clearly killing the threads.
I have removed the shutdown, and with
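As a sketch of why a missing key skews partition assignment (this is the generic hash-partitioner behavior, not the thread's actual code):

```java
public class KeyPartitioning {
    // The usual hash partitioner: key hash modulo partition count.
    // With no key, the old producer instead picks one partition and
    // sticks to it until the next metadata refresh, which is why only
    // some partitions were getting data.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        for (String key : new String[] {"user-1", "user-2", "user-3"}) {
            System.out.println(key + " -> partition " + partitionFor(key, numPartitions));
        }
    }
}
```

With distinct keys, messages spread across all partitions deterministically.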
On Thu, Apr 30, 2015 at 2:15 AM, Nimi Wariboko Jr n...@channelmeter.com
wrote:
My mistake; it seems the Java drivers are a lot more advanced than
Shopify's Kafka driver (or I am missing something), and I haven't used
Kafka before.
With the Go driver it seems you have to manage offsets
Running a 1-broker system. I had some issues with the system but got it
working. I've deleted the topic I had trouble with and re-created it.
But describing it shows no leader, and neither producing nor consuming
works on it.
I create a brand new topic with a name I never used before and I get a
leader. I
I had 3 ZooKeeper nodes. I added 3 new ones and shut down the old 3.
The server.log shows "Closing socket connection" errors to the old IPs. I
rebooted the Kafka server entirely but it still somehow seems aware of
these servers.
Any ideas what's up?
Have you changed
zookeeper.connect=
in server.properties?
A better procedure for replacing ZooKeeper nodes would be to shut down one
and install the new one with the same IP. This can easily be done on a
running cluster.
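The broker setting in question, as a minimal sketch (hostnames are placeholders):

```properties
# server.properties: the broker's ZooKeeper connection string.
# Replacing ensemble nodes in place (same IPs) means this line never
# has to change while the cluster is running.
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```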
/svante
2015-04-30 20:08 GMT+02:00 Dillian Murphey
You need to first decide the conditions that need to be met for you to
scale to 50 consumers. These can be as simple as the consumer lag. Look at
the console offset checker tool and see if any of those numbers make sense.
Your existing consumers could also produce some metrics based on which
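The lag check mentioned above can be run with the offset checker tool that ships with 0.8.x; the group name and ZooKeeper address below are placeholders:

```shell
# Prints, per partition: committed offset, log end offset, and lag
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper localhost:2181 \
  --group my-consumer-group
```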
I am trying to reproduce this. But if I create a topic, then delete it,
then re-create it, no leader is getting assigned.
I can still produce/consume messages (via command line, basic testing).
Is there some additional cleanup I need to do?
Thanks for your time!
Not sure if this is the best way to do this, but my zookeeper.connect is
set to a DNS alias which points to a load balancer for the 3 ZooKeeper nodes.
I was trying this to see if I could keep the Kafka config dynamic and allow
me to change/scale whatever I wanted with ZooKeeper and not have to ever
2015-04-30 8:50 GMT+03:00 Ewen Cheslack-Postava e...@confluent.io:
They aren't going to get this anyway (as Jay pointed out) given the current
broker implementation
Is it also incorrect to assume atomicity even if all messages in the batch
go to the same partition?
Why do we think atomicity is expected, if the old API we are emulating here
lacks atomicity?
I don't remember emails to the mailing list saying: I expected this batch
to be atomic, but instead I got duplicates when retrying after a failed
batch send.
Maybe atomicity isn't as strong a requirement as
Which mirror maker version did you look at? The MirrorMaker in trunk
should not have data loss if you just use the default setting.
On 4/30/15, 7:53 PM, Joong Lee jo...@me.com wrote:
Hi,
We are exploring Kafka to keep two data centers (primary and DR) running
hosts of Elasticsearch nodes in
It'll be officially ready only in version 0.9.
Aditya
From: Mohit Gupta [success.mohit.gu...@gmail.com]
Sent: Thursday, April 30, 2015 8:58 PM
To: users@kafka.apache.org
Subject: Java Consumer API
Hello,
Kafka documentation (
Roshan,
If I understand correctly, you just want to make sure a number of messages
has been sent successfully. Using callback might be easier to do so.
public class MyCallback implements Callback {
    public Set<RecordMetadata> failedSend = new HashSet<>();
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null)
            failedSend.add(metadata);
    }
}
When we evaluated MirrorMaker last year we didn't find any risk of data
loss, only duplicate messages in the case of a network partition.
Did you discover data loss in your tests, or were you just looking at the
docs?
On Fri, 1 May 2015 at 4:31 pm Jiangjie Qin j...@linkedin.com.invalid
wrote:
Hello,
Kafka documentation ( http://kafka.apache.org/documentation.html#producerapi
) suggests using only Producer from kafka-clients ( 0.8.2.0 ) and to use
Consumer from the packaged scala client. I just want to check once if the
Consumer API from this client is ready for production use.
--
Hello Everyone,
I am quite excited about the recent example of replicating PostgreSQL
changes to Kafka. My view of the log compaction feature had always been
a very sceptical one, but now, with its great potential exposed to the
wide public, I think it's an awesome feature. Especially when
Hi,
We are exploring Kafka to keep two data centers (primary and DR) running hosts
of Elasticsearch nodes in sync. One key requirement is that we can't lose any
data. We POC'd use of MirrorMaker and felt it may not meet our data-loss
requirement.
I would like to ask the community if we should
Which Kafka version are you using?
On Thu, Apr 30, 2015 at 4:11 PM, Dillian Murphey crackshotm...@gmail.com
wrote:
Scenario with a 1-node broker and a 3-node ZooKeeper ensemble:
1) Create topic
2) Delete topic
3) Re-create with same name
I'm noticing this re-creation gives me Leader: none, and
Hey all,
I am attempting to create a topic which uses 8GB log segment sizes, like so:
./kafka-topics.sh --zookeeper localhost:2181 --create --topic perftest6p2r
--partitions 6 --replication-factor 2 --config max.message.bytes=655360
--config segment.bytes=8589934592
And am getting the following
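One likely cause (my assumption; the error text above is cut off): segment.bytes is a 32-bit int config, and 8 GB is larger than any int can hold, so validation rejects the value. A quick check:

```java
public class SegmentBytesCheck {
    public static void main(String[] args) {
        long requested = 8589934592L;    // the 8 GB segment.bytes value above
        long maxInt = Integer.MAX_VALUE; // 2147483647, ~2 GB, the int upper bound
        // Any segment.bytes above Integer.MAX_VALUE cannot be parsed as an int.
        System.out.println(requested > maxInt); // prints "true"
    }
}
```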
Scenario with a 1-node broker and a 3-node ZooKeeper ensemble:
1) Create topic
2) Delete topic
3) Re-create with same name
I'm noticing this re-creation gives me Leader: none and an empty Isr.
Any ideas what the deal is here?
I googled around, and not being an experienced Kafka admin, someone said
@Gwen, @Ewen,
While atomicity of a batch is nice to have, it is not essential. I don't
think users always expect such atomicity. Atomicity is not even guaranteed
in many un-batched systems let alone batched systems.
As long as the client gets informed about the ones that failed in the
batch...
With retries=1 do you still see the 3-second delay? The idea is that you can
change these properties to reduce the time to throw the exception to 1 second
or below. Does that help?
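For concreteness, a sketch of the producer settings being discussed (values are illustrative only, not recommendations; timeout.ms is the 0.8.2 new-producer request timeout):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("retries", "1");            // a single retry before the send fails
        props.put("retry.backoff.ms", "100"); // pause between retries
        props.put("timeout.ms", "1000");      // fail the request after ~1 second
        System.out.println(props.getProperty("timeout.ms")); // prints "1000"
    }
}
```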
Thanks
Zakee
On Apr 28, 2015, at 10:29 PM, Madhukar Bharti bhartimadhu...@gmail.com
wrote:
Hi Zakee,
I feel a need to respond to the Sqoop-killer comment :)
1) Note that most databases have a single transaction log per db and in
order to get the correct view of the DB, you need to read it in order
(otherwise transactions will get messed up). This means you are limited to
a single producer
Thank you,
It seems the following methods are not supported in KafkaConsumer. Do you
know when they will be supported?
public OffsetMetadata commit(Map<TopicPartition, Long> offsets, boolean
sync) {
    throw new UnsupportedOperationException();
}
Thanks & Regards,
On Wed, Apr 29,
My mistake; it seems the Java drivers are a lot more advanced than
Shopify's Kafka driver (or I am missing something), and I haven't used
Kafka before.
With the Go driver it seems you have to manage offsets and partitions
within the application code, while in the Scala driver it seems you have