Thank you. And I do not need to use compact; it just so happened that the topic
I was given to use for the POC was originally defined as compact, and the many
different errors I was getting can now be attributed to what was wrong in my
connector configuration.
Thanks. I tried those options in many combinations, but kept getting the
same error. I am asking this to get a better understanding.
So, I used the same connector configuration as below. I created the topic
with cleanup.policy=compact and kept getting the error below, so I changed
only
The options I reported are related to the Kafka broker configuration, I guess, so
they should be set at the broker level and not in the connector config.
By the way, without more information on your configuration and why you need
compaction, there is not much we can do.
--
Andrea Cosentino
This seems related more to the Kafka Connect configuration than to the connector
itself. I guess you'll need to tune the options related to this, like:
offset.flush.timeout.ms
offset.flush.interval.ms
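These are worker-level settings, so they would go in the Kafka Connect worker properties file rather than in the connector JSON. A minimal sketch (the file name and the values are illustrative assumptions, not recommendations):

```properties
# connect-distributed.properties (worker level, not connector level)

# How long the worker waits for outstanding offset data to flush
# before cancelling the commit attempt (default 5000 ms)
offset.flush.timeout.ms=10000

# How often the worker tries to commit offsets for its tasks
# (default 60000 ms)
offset.flush.interval.ms=60000
```

The worker must be restarted for changes to these properties to take effect.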
--
Andrea Cosentino
--
Apache Camel PMC Chair
Apache Karaf Committer
aws-s3 connector - not aws2-s3.
On 1/4/21, 5:19 PM, "Andrea Cosentino" wrote:
Is this with the aws-s3 connector or aws2-s3?
On Mon, Jan 4, 2021, 23:05 Arundhati Bhende
wrote:
> Hi, I am testing the connector with different cleanup policies for the
> Topic.
>
> If the topic cleanup.policy is set to "delete", the connector works
> correctly and I am able to access the
Hi, I am testing the connector with different cleanup policies for the Topic.
If the topic cleanup.policy is set to "delete", the connector works correctly
and I am able to access the message in the topic
If the topic cleanup.policy is set to "compact", the Connect task fails with
the error below
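For reference, the two topic setups being compared can be reproduced with the standard Kafka CLI tools; this is a sketch, and the topic name and broker address are assumptions:

```shell
# Create the topic with the "delete" cleanup policy (the working case)
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic poc-topic --partitions 1 --replication-factor 1 \
  --config cleanup.policy=delete

# Switch the same topic to "compact" (the failing case)
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name poc-topic \
  --add-config cleanup.policy=compact
```

Note that a compacted topic requires every record to have a non-null key, which is one common source of failures when a producer or connector that worked against a delete-policy topic is pointed at a compact one.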