Yes, that’s correct. It’s only for serializing the Java type ‘byte[]’.
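For context, assuming the serializer in question is Kafka’s built-in ByteArraySerializer (the thread is truncated, so this is an assumption), it is a pure pass-through for byte[] payloads; a minimal sketch, with an illustrative topic name:

```java
// Hedged sketch: assumes the class under discussion is Kafka's built-in
// ByteArraySerializer, which passes byte[] payloads through unchanged.
import java.util.Arrays;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ByteArrayPassThrough {
    public static void main(String[] args) {
        byte[] payload = {1, 2, 3};
        try (ByteArraySerializer serializer = new ByteArraySerializer()) {
            // Topic name is illustrative; ByteArraySerializer ignores it.
            byte[] wire = serializer.serialize("any-topic", payload);
            System.out.println(Arrays.equals(wire, payload)); // prints true
        }
    }
}
```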
On Thu, May 7, 2020, at 10:37, Pushkar Deole wrote:
> Thanks John... I got to finish the work in few days so need to get it
> quick, so looking for something ready. I will take a look at jackson json.
>
> By the way, what is
Sure, we agree to differ.
It is indeed the sign of a civilised mind to be able to entertain an idea
without accepting it.
HTH,
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
I agree with Steve.
Also, it’s worth reading Jay’s PR from last year regarding the Confluent
Community licence.
Regards,
On Sat, 9 May 2020 at 16:14, Steven Miller wrote:
> At the risk of starting a uh-huh-uhnt-uh battle, I would have to disagree.
> There are seriously good people at Confluent, many
At the risk of starting a uh-huh-uhnt-uh battle, I would have to disagree.
There are seriously good people at Confluent, many of whom have been pivotal to
Kafka’s development. You can decide whether or not Confluent makes sense for
you for other reasons, but I wouldn’t go for a blanket no, by
Hello, currently the CONTROLLED_SHUTDOWN API does not ensure that the current
leader sends all data to the followers in the ISR. This may cause data loss
if the client does not use acks=all.
I think the semantics are a little weak. I can accept data loss if the broker
crashes, but we should try our best to avoid it.
Hi Ali,
You may want to look at using the consumer pause / resume API. It’s a
mechanism that allows you to poll without retrieving new messages.
I employed this strategy to effectively handle highly variable workloads by
processing them in a background thread. First pause the consumer when
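The pattern described above might look roughly like this in outline. This is a sketch, not the poster’s actual code: the broker address, topic name, and group id are illustrative assumptions, and it requires a running broker.

```java
// Sketch of the pause/resume pattern: hand each batch to a background
// thread, pause the consumer so poll() fetches nothing new, and resume
// once the batch is done. Broker address, topic, and group id below are
// illustrative assumptions, not from the original thread.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption
        props.put("group.id", "slow-worker");             // assumption
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("work-topic")); // assumption
            Future<?> inFlight = null;
            while (true) {
                // Calling poll() while paused keeps the consumer live in the
                // group without fetching more records.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                if (!records.isEmpty()) {
                    consumer.pause(consumer.assignment());
                    inFlight = worker.submit(() -> process(records));
                }
                if (inFlight != null && inFlight.isDone()) {
                    consumer.resume(consumer.assignment());
                    inFlight = null;
                }
            }
        }
    }

    private static void process(ConsumerRecords<String, String> records) {
        // Long-running work goes here, off the polling thread.
    }
}
```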
Hi,
I would say stick to Apache Kafka.
These days every start-up sees an opportunity to grab a piece of the cake,
often ending up re-inventing the wheel to create an add-on that mimics what
already exists and charging licence fees.
One of the most pertinent exceptions is Apache HBase, which has been
Hi Lei,
This tutorial will introduce you to the Avro consumers.
https://docs.confluent.io/current/schema-registry/schema_registry_tutorial.html
In terms of going from Avro record to SQL, the JDBC sink generates SQL
based on the field names in the schema, and configured table names.
IIRC, the
Thanks Liam,
I want to achieve the following using Java code:
For each Avro-serialized record received:
1. Deserialize the record automatically by referencing the schema
registry
2. Convert the record to the SQL statement that needs to be executed, and
execute it
Seems the
I want to know if there’s any difference between Apache Kafka and the
open-sourced Confluent Kafka?
Thanks,
Lei
wangl...@geekplus.com.cn
Hi Lei,
You could use the Kafka Avro consumer to deserialise records using the
Schema Registry automatically.
Then write to the DB as you see fit.
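In outline, that suggestion could look like the following sketch using Confluent’s KafkaAvroDeserializer. The broker address, Schema Registry URL, topic, and group id are placeholders, and a running broker and registry are required:

```java
// Sketch: consume Avro records, letting Confluent's KafkaAvroDeserializer
// fetch schemas from the Schema Registry automatically. Addresses, topic,
// and group id below are placeholders, not values from the thread.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "avro-to-db");                // placeholder
        props.put("key.deserializer",
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("value.deserializer",
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder

        try (KafkaConsumer<GenericRecord, GenericRecord> consumer =
                new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // placeholder topic
            while (true) {
                for (ConsumerRecord<GenericRecord, GenericRecord> record :
                        consumer.poll(Duration.ofSeconds(1))) {
                    // Each field of the GenericRecord is available by name;
                    // from here you can build the SQL statement and write to
                    // the DB as you see fit.
                    System.out.println(record.value());
                }
            }
        }
    }
}
```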
Cheers,
Liam Clarke-Hutchinson
On Sat, 9 May 2020, 2:38 pm wangl...@geekplus.com.cn, <
wangl...@geekplus.com.cn> wrote:
>
> Using debezium to