When specific.avro.reader is set to true, the deserializer tries to create an
instance of the generated class. The class name is formed by reading the writer
schema from the schema registry and concatenating the namespace and record name.
It then tries to create that instance, and the class is not found in the
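The class-name formation described above can be sketched in plain Java. This is a minimal sketch, not the real deserializer's code: `classNameFor`/`classFor` are illustrative names, and the namespace and record name are assumed to have already been read from the writer schema.

```java
// Sketch of how a specific-record deserializer derives and loads the
// target class from the writer schema's namespace and record name.
public class SpecificClassLookup {

    // Concatenate namespace and record name, as described above.
    public static String classNameFor(String namespace, String recordName) {
        return namespace + "." + recordName;
    }

    // Try to load the generated class; a missing class here is the
    // "class not found" failure mode described in the message.
    public static Class<?> classFor(String namespace, String recordName) {
        String name = classNameFor(namespace, recordName);
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("Class not found: " + name, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(classNameFor("com.example.avro", "User"));
        // java.lang.String stands in for a generated SpecificRecord class:
        System.out.println(classFor("java.lang", "String").getName());
    }
}
```

If the generated class is not on the consumer's classpath, the lookup above is exactly where the failure surfaces.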
Thanks Manoj. It makes sense to use the Consumer itself for fetching metadata.
On Wed, May 6, 2020 at 12:39 AM wrote:
Glad it works for you.
The Kafka Admin API runs on ZooKeeper, and sometimes you don't have access to
the ZooKeeper host/port. I don't know how you are managing your Kafka/ZK
cluster in your scenario, but for security purposes ZooKeeper access is
limited to the Kafka cluster only.
From: SenthilKumar K
Thanks Manoj. It works for me.
It looks to me like KafkaAdminClient (as a singleton instance) is faster than
the Consumer.partitionsFor() API. In terms of performance, which one is better
for fetching the metadata of a given topic? Thanks!
On Wed, May 6, 2020 at 12:26 AM wrote:
I think you can filter the list of partitions returned by
KafkaConsumer.partitionsFor() by checking the PartitionInfo: if
PartitionInfo.leader() is set, include that partition in the list.
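The filtering idea above can be sketched as below. Note the types here are simplified stand-ins, not the real kafka-clients classes (the real PartitionInfo and Node live in org.apache.kafka.common); only the filtering logic is the point.

```java
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-ins for the kafka-clients types, to illustrate
// filtering a partition list by the leader() check suggested above.
public class LeaderFilter {

    record Node(int id) {}
    record PartitionInfo(String topic, int partition, Node leader) {}

    // Keep only partitions that currently have a leader.
    public static List<PartitionInfo> withLeader(List<PartitionInfo> parts) {
        return parts.stream()
                .filter(p -> p.leader() != null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        var parts = List.of(
                new PartitionInfo("t", 0, new Node(1)),
                new PartitionInfo("t", 1, null)); // leaderless partition
        System.out.println(withLeader(parts).size()); // prints 1
    }
}
```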
On 5/5/20, 11:44 AM, "SenthilKumar K" wrote:
Hi Team, We are using the KafkaConsumer.partitionsFor() API to find the list of
available partitions. After fetching the list of partitions, we use the
Consumer.offsetsForTimes() API to find the offsets for a given timestamp.
The Consumer.partitionsFor() API simply returns all partitions, including
the
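Per the Kafka Javadoc, Consumer.offsetsForTimes() resolves, for each partition, the earliest offset whose record timestamp is at or after the target timestamp. Below is a toy model of that lookup semantics only; it is not the client implementation, and `RecordMeta` is an illustrative stand-in, not a Kafka class.

```java
import java.util.List;
import java.util.OptionalLong;

// Toy model of what Consumer.offsetsForTimes() resolves per partition:
// the earliest offset whose record timestamp is >= the target timestamp.
public class OffsetForTime {

    record RecordMeta(long offset, long timestamp) {}

    // records must be in offset order with non-decreasing timestamps.
    public static OptionalLong offsetForTime(List<RecordMeta> records, long target) {
        return records.stream()
                .filter(r -> r.timestamp() >= target)
                .mapToLong(RecordMeta::offset)
                .findFirst();
    }

    public static void main(String[] args) {
        var log = List.of(new RecordMeta(0, 100L),
                          new RecordMeta(1, 150L),
                          new RecordMeta(2, 200L));
        System.out.println(offsetForTime(log, 150L)); // prints OptionalLong[1]
    }
}
```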
Error: java.lang.IncompatibleClassChangeError: class
com.typesafe.scalalogging.BaseLogger can not implement
com.typesafe.scalalogging.Logger
I am configuring the InfluxDB sink, and I am getting the above error. I am
using kafka-connect-influxdb-1.2.0.jar; the other supported jars are
kcql-2.4.0.jar
Hi All,
Currently, I'm working on a use case wherein I have to deserialize an Avro
object and convert it to some other Avro format. Below is the flow:
DB -> Source Topic (Avro format) -> Stream Processor -> Target Topic (Avro
as nested object).
When I deserialize the message from the Source
Hi,
What I can see from the configurations:
> log.dir = /tmp/kafka-logs (default)
> log.dirs = /var/kafkadata/data01/data
From the documentation, log.dir is only used if log.dirs is not set, so
/var/kafkadata/data01/data is the folder used for logs.
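For reference, a minimal server.properties sketch of the precedence described above (the path is taken from the quoted configuration):

```properties
# log.dirs takes precedence; log.dir is only a fallback when log.dirs is unset
log.dirs=/var/kafkadata/data01/data
```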
Regards
On Tue., May 5, 2020 at
Hi guys, still following your discussion even if it's out of my reach.
Just been noticing that you use /tmp/ for your logs, dunno if it's a good
idea :o https://issues.apache.org/jira/browse/KAFKA-3925
On Mon., May 4, 2020 at 19:40, JP MB wrote:
> Here are the startup logs from a deployment
Thanks John... appreciate your inputs and suggestions. I have been assigned
to this task (of persisting the cache) recently and wasn't involved in the
original design and architecture, and I agree with all the issues you have
highlighted.
However, at this point, I don't think the application can be