Hey Apache Users,
I'm working on a web application that has a web service component, and a
background processor component. Both applications will send messages to
the same Kafka topic as an object is manipulated.
In some cases, a web service call in the service component will send a
message to K
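For illustration, a minimal producer sketch of that setup (the topic name "object-events", the String key/value types, and the broker address are assumptions, not from the original message); keying by object id keeps all events for one object on a single partition, which both components can rely on:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ObjectEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // Both the web service and the background processor would do the
            // equivalent of this: key by the object id (hypothetical key here)
            // so updates to the same object land on the same partition in order.
            producer.send(new ProducerRecord<>("object-events", "object-42", "updated"));
            producer.close();
        }
    }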
Kaufman, thanks for your clear and helpful explanation.
The article you linked about the new consumer client is definitely useful; I
understand it fully now.
On Mon, Sep 26, 2016 at 10:36 PM, Kaufman Ng wrote:
> Hi Zhuo,
>
> Since your code uses KafkaConsumer class, it's the "new consumer" i
I have a topic with 16 partitions.
I also have 24 consumer threads (8 per process per box) subscribed to that
same topic. This configuration ensures that there is plenty of room for 1:1
partition-to-consumer assignment, plus some standby consumers to take over
in case a process dies.
But during a
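For reference, a minimal sketch of one such process (group id, topic name, and broker address are placeholders). Each of the 8 threads owns its own KafkaConsumer in the same group, so the coordinator spreads the 16 partitions across the 24 group members and the surplus consumers sit idle until a rebalance:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerGroupWorker {
        public static void main(String[] args) {
            // 8 consumer threads per process; the same code runs on each of the 3 boxes.
            for (int i = 0; i < 8; i++) {
                new Thread(() -> {
                    Properties props = new Properties();
                    props.put("bootstrap.servers", "localhost:9092");
                    props.put("group.id", "my-consumer-group");  // same group everywhere
                    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
                    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
                    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
                    consumer.subscribe(Collections.singletonList("my-topic"));
                    while (true) {
                        ConsumerRecords<String, String> records = consumer.poll(100);
                        for (ConsumerRecord<String, String> record : records) {
                            // process record ...
                        }
                    }
                }).start();
            }
        }
    }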
Try running mirror maker from the other direction (i.e. from 0.8.2.1). I had
a similar issue, and that seemed to work.
-David
On 9/26/16, 5:19 PM, "Xavier Lange" wrote:
> I'm using bin/kafka-mirror-maker.sh for the first time and I need to take
> my "aws-cloudtrail" topic from a 0.8.2.1
I'm using bin/kafka-mirror-maker.sh for the first time and I need to take
my "aws-cloudtrail" topic from a 0.8.2.1 single broker and mirror it to a
0.10.0.0 cluster. I am running mirror maker from a host in the 0.10.0.0
cluster.
My consumer.properties file:
$ cat consumer.properties
zookeeper.con
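For context, a typical mirror-maker setup of this shape looks roughly like the following. This is only a sketch with placeholder hostnames and group name, not the poster's actual files: the old (0.8) consumer points at the source cluster's ZooKeeper, the producer points at the destination brokers.

    # consumer.properties -- old consumer, points at the 0.8.2.1 cluster's ZooKeeper
    zookeeper.connect=zk-old:2181
    group.id=cloudtrail-mirror

    # producer.properties -- points at the 0.10.0.0 destination brokers
    bootstrap.servers=broker-new-1:9092,broker-new-2:9092

    # invocation
    bin/kafka-mirror-maker.sh --consumer.config consumer.properties \
        --producer.config producer.properties --whitelist aws-cloudtrail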
Thanks. Got it, the same rules apply to the producer.
Thanks again.
On Mon, Sep 26, 2016 at 2:29 PM, Alexis Midon <
alexis.mi...@airbnb.com.invalid> wrote:
> the official recommendations are here
> http://kafka.apache.org/documentation.html#upgrade_10
>
>
> On Fri, Sep 23, 2016 at 7:48 PM Vadim Keylis
>
the official recommendations are here
http://kafka.apache.org/documentation.html#upgrade_10
On Fri, Sep 23, 2016 at 7:48 PM Vadim Keylis wrote:
> Hello, we have a producer written in C that sends data to
> Kafka using the 0.8 protocol. We now need to upgrade since the protocol has
> changed
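The gist of those recommendations is a rolling broker upgrade with the protocol and message-format versions pinned first, so older clients keep working throughout. A sketch of the relevant server.properties settings, assuming the cluster starts on 0.8.2 (exact version strings per the linked upgrade notes):

    # step 1: on each broker, before upgrading the binaries
    inter.broker.protocol.version=0.8.2
    log.message.format.version=0.8.2

    # step 2: once every broker runs the new version, bump the protocol
    # and do another rolling restart
    inter.broker.protocol.version=0.10.0

    # step 3: only after clients are upgraded, bump the message format too
    log.message.format.version=0.10.0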
Hello,
Has anyone used Confluent’s Schema Registry? If so, I’m curious to hear about
best practices for using it in a staging environment.
Do users typically copy schemas over to the staging environment from
production? Are developers allowed to create new schemas in the staging
environment?
Hello,
my first mailing here and I am pretty new to Kafka, so I am seeking your
professional help.
I have ZooKeeper and Kafka 2.11-0.9.0.0 running on my laptop with Fedora 24.
After some successful tests with the console producer and consumer, I started a
project in Eclipse working on:
- reading a c
Hi there,
Can anyone please help us? We are getting a SendFailedException when the Kafka
consumer starts, and it is not able to consume any messages.
Thanks
Achintya
Hi,
So, here’s the situation:
- for classic batching of writes to external systems, right now I simply hack
it. This specific case is writing records to an Accumulo database, and I simply
use the batch writer to batch writes, and it flushes every second or so. I've
added a shutdown hook to the
Hi Walter,
One thing I can think of is that, if you pass the serde object as part of
your topology definition instead of passing the serde class in the config,
then these serde objects will not be auto-configured, and hence in your
case the schema registry client will not be constructed and initi
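A sketch of the config-based alternative being described (the serde class name, application id, and URLs are placeholders): when the serde class is named in the config, Streams instantiates it and calls configure() on it with the full config map, including the schema registry URL, whereas a serde object handed directly into the topology has to be configure()d by hand.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class SerdeConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Register the serde *class* (placeholder name) rather than passing a
            // serde object into the topology; Streams will instantiate it and call
            // configure() with these properties, so a schema-registry-aware serde
            // can pick up the URL below.
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, "com.example.MyAvroSerde");
            props.put("schema.registry.url", "http://localhost:8081");
            // ... build and start the KafkaStreams instance with these props ...
        }
    }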
Yes, only Kafka Connect to HDFS implementations using the Avro or JSON converter
seem to be available, but there's a current issue that prevents using the
AvroConverter when your Kafka/Confluent version is < 3.0.0, so I was wondering if
there was any workaround available, like a simple ByteArray converter th
Guozhang,
It's a bit hacky, but I guess it will work fine since a range scan isn't
expensive in RocksDB.
Michael,
One reason is to be able to batch before sinking to an external system.
A sink call per record isn't very efficient.
This can be used just for the sink processor.
I feel I might be stealing th
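A rough sketch of that state-store batching pattern with the 0.10.0 Processor API (the store name, flush interval, and the sink call are placeholders; all() is used instead of a narrower range() for brevity, and the "sink-buffer" store must be added to the topology under that name):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class BatchingSinkProcessor implements Processor<String, String> {
        private KeyValueStore<String, String> buffer;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            this.buffer = (KeyValueStore<String, String>) context.getStateStore("sink-buffer");
            context.schedule(1000);  // punctuate roughly every second (stream-time based)
        }

        @Override
        public void process(String key, String value) {
            buffer.put(key, value);  // just buffer; no per-record sink call
        }

        @Override
        public void punctuate(long timestamp) {
            // Scan the buffered records, write them out in one batch, then clear them.
            List<String> flushed = new ArrayList<>();
            KeyValueIterator<String, String> it = buffer.all();
            while (it.hasNext()) {
                KeyValue<String, String> entry = it.next();
                // externalSink.write(entry.key, entry.value);  // hypothetical sink
                flushed.add(entry.key);
            }
            it.close();
            for (String key : flushed) {
                buffer.delete(key);
            }
        }

        @Override
        public void close() {}
    }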
Hi Zhuo,
Since your code uses the KafkaConsumer class, it's the "new consumer" in Kafka,
which uses a special Kafka topic to keep track of offsets (rather than
ZooKeeper). By default that topic is named "__consumer_offsets". You can
use the "kafka-consumer-groups" command to check where the offset is
The current converters want you to send Avro records with a "schema id"
prepended to the serialized Avro. You also need the schema registry running.
I'm guessing this is what Olivier is talking about.
I think it is possible to write your own converter that doesn't need this but
I haven't tri
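For anyone who wants to try it, a rough, untested sketch of such a pass-through converter (roughly what a ByteArray converter could look like; whether the HDFS connector can do anything useful with raw bytes downstream is a separate question):

    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaAndValue;
    import org.apache.kafka.connect.storage.Converter;

    public class PassThroughConverter implements Converter {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {}

        @Override
        public byte[] fromConnectData(String topic, Schema schema, Object value) {
            // Expects the Connect value to already be raw bytes; no schema id prefix.
            return (byte[]) value;
        }

        @Override
        public SchemaAndValue toConnectData(String topic, byte[] value) {
            // Hand the raw bytes to the connector as an optional-bytes value.
            return new SchemaAndValue(Schema.OPTIONAL_BYTES_SCHEMA, value);
        }
    }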
You'll need to do a rolling restart of your kafka nodes after changing the
zookeeper ensemble. There's no real way around that right now.
On Sun, Sep 25, 2016 at 6:41 PM, Ali Akhtar wrote:
> Perhaps if you add 1 node, take down existing node, etc?
>
> On Sun, Sep 25, 2016 at 10:37 PM, brenfield1
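Concretely, that means updating zookeeper.connect in each broker's server.properties to point at the new ensemble (hostnames below are placeholders) and then restarting the brokers one at a time, waiting for under-replicated partitions to clear before moving on to the next broker:

    zookeeper.connect=zk-new-1:2181,zk-new-2:2181,zk-new-3:2181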
Not sure what you are trying to do.
Insert data into Kafka? Get data from Kafka?
What about the JsonConverter?
On Fri, Sep 23, 2016 at 4:13 PM, Olivier Girardot <
o.girar...@lateral-thoughts.com> wrote:
> Hi everyone, is there any way to use a straightforward converter instead of
> the AvroConve
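For reference, switching a Connect worker to the built-in JSON converter is just a worker-config change (a sketch; set schemas.enable as appropriate for your data):

    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false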
Ara,
may I ask why you need to use micro-batching in the first place?
Reason why I am asking: Typically, when people talk about micro-batching,
they are referring to the way some originally batch-based stream processing
tools "bolt on" real-time processing by making their batch sizes really
small. H