Hi AJ,
No, there is no public schema registry; you will need to deploy and
maintain your own.
Cheers,
Liam Clarke-Hutchinson
On Thu, May 21, 2020 at 11:56 AM AJ Chen wrote:
> I use avro for kafka message. When producing avro message, it fails to
> access schema registry,
> ERROR
Hi,
You want metadata.max.age.ms which, as you noticed, defaults to 5 minutes
:)
https://kafka.apache.org/documentation/#metadata.max.age.ms
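For reference, a minimal sketch of how this might be lowered on the consumer side (a plain config map as it would be passed to a client constructor; the broker address, group id, and the 30-second value are just examples):

```python
# Sketch of consumer settings overriding metadata.max.age.ms.
# The default is 300000 ms (5 minutes); lowering it makes the client
# refresh cluster metadata -- and so notice new partitions -- sooner.
consumer_config = {
    "bootstrap.servers": "localhost:9092",   # placeholder address
    "group.id": "example-group",             # hypothetical group id
    "metadata.max.age.ms": 30000,            # refresh metadata every 30 s
}

print(consumer_config["metadata.max.age.ms"])
```

Note the trade-off: a shorter interval means new partitions are picked up faster, at the cost of more frequent metadata requests to the brokers.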
Cheers,
Liam Clarke-Hutchinson
On Thu, May 21, 2020 at 1:06 PM Kafka Shil wrote:
> I was running a test where kafka consumer was reading data from
I was running a test where a Kafka consumer was reading data from multiple
partitions of a topic. While the process was running, I added more
partitions. It took around 5 minutes for the consumer thread to read data
from the new partition. I have found this configuration "
I use avro for kafka message. When producing avro message, it fails to
access schema registry,
ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to
send HTTP request to endpoint:
http://localhost:8081/subjects/avro_emp-value/versions
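For comparison, a minimal sketch of the settings an Avro producer typically needs in order to reach the registry (a plain config map, not a full client; the broker address is a placeholder, and the endpoint shown is derived from the subject name in the error above):

```python
# Sketch: the Avro serializer needs to know where the registry lives.
# If this URL is wrong or the registry is not running, the HTTP POST to
# /subjects/<topic>-value/versions fails as in the error above.
producer_config = {
    "bootstrap.servers": "localhost:9092",        # placeholder broker
    "schema.registry.url": "http://localhost:8081",
}

subject = "avro_emp-value"  # default subject name is <topic>-value
endpoint = (producer_config["schema.registry.url"]
            + "/subjects/" + subject + "/versions")
print(endpoint)
```

A quick sanity check is to curl that endpoint directly from the producer host to confirm the registry is actually reachable at the configured URL.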
When using confluent schema registry,
Hi Robin
I had gone through the link you provided; it is not helpful in my case.
Apart from this, I am not getting why the tasks are divided in the below
pattern when they are first registered, and whether this is the expected
behavior. Is there any parameter which we can pass in the worker property file
The issue description formatted on stackoverflow :
https://stackoverflow.com/questions/61919200/field-does-not-exist-on-transformations-to-extract-key-with-debezium
Hi, I am trying to create a Debezium MySQL connector with a transformation
to extract the key.
Before key transformations : create source connector mysql with(
"connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
"database.hostname" = 'mysql',
"tasks.max" = '1',
Thanks for the clarification. If this is an actual problem that you're
encountering and need a solution to, then, since the task allocation is not
deterministic, it sounds like you need to deploy separate worker clusters
based on the workload patterns that you are seeing and the machine resources
Hi Robin
Replying to your query, i.e.
"One thing I'd ask at this point is though if it makes any difference where
the tasks execute?"
It actually makes a difference to us: we have 16 connectors and, as I stated
in the task division earlier, the first 8 connectors' tasks are assigned to
the first worker process and
It turns out that Kafka ACLs support a wildcard principal; I missed this in
the documentation.
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test3,
patternType=LITERAL)`:
(principal=User:*, host=*, operation=ALL, permissionType=ALLOW)
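For the record, an ACL like the one shown can be created with the kafka-acls tool. A sketch that only assembles the command line rather than running it (the bootstrap address is a placeholder; in a real shell, quote User:* so the shell does not glob-expand it):

```python
# Sketch: build (but do not execute) the kafka-acls invocation that
# would produce the ACL entry shown above.
cmd = [
    "kafka-acls.sh",
    "--bootstrap-server", "localhost:9092",  # placeholder address
    "--add",
    "--allow-principal", "User:*",           # wildcard principal
    "--operation", "All",
    "--topic", "test3",
]

print(" ".join(cmd))
```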
It is good now.
Hi Liam,
Thank you for the clarification.
My use of words was a bit confusing. Let me rephrase it. :)
I believe each partition has a leader. This is liable to change in case of
any broker going down. What I am interested in is getting logs from the
current leader of any partition. For that, I
OK, I understand better now.
You can read more about the guts of the rebalancing protocol that Kafka
Connect uses as of Apache Kafka 2.3 and onwards here:
https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
One thing I'd ask at this point is though if it makes any
Hey Guys,
One of the values for the SFTP configuration is key.schema.
I am sending it through Postman as a JSON request.
So how can I give the schema details, because it has double quotes around
all the keys and values?
Could anyone explain?
If I give it like this, this exception comes up:
"key.schema":
Hi Robin
Thanks for your reply.
We have two workers on different IPs. The example I gave you was just an
example. We are using Kafka version 2.3.1.
Let me tell you again with a simple example.
Suppose we have two EC2 nodes, N1 and N2, with worker processes W1 and W2
running in
Hey guys
I changed the properties in the SFTP CSV source and it is working fine.
Now I set schema generation enabled to true, so it is adding the schema
data to every record in the topic.
So when I set that generation to false, it asks for key.schema and
value.schema, but both will be JSON and I
So you're running two workers on the same machine (10.0.0.4), is
that correct? Normally you'd run one worker per machine unless there was a
particular reason otherwise.
What version of Apache Kafka are you using?
I'm not clear from your question if the distribution of tasks is
presenting a problem
Seems you have already been added.
On 5/19/20 7:58 PM, Jiamei Xie wrote:
> Hi,
>
>
>
> Please add my JIRA ID into the contributors list of Apache Kafka.
>
>
>
> Here is my JIRA profile:
>
>
>
> Username: adally
>
> Full name: jiamei xie
>
> Best Wishes,
> Jiamei
>