Here is the log on the app side when the worker tries to start the failing
task.
{
  "timestamp": "2020-05-04T23:22:37,759Z",
  "level": "ERROR",
  "thread": "pool-7-thread-3",
  "logger": "org.apache.kafka.connect.runtime.Worker",
  "timestamp": "2020-05-04T16:22:37,759",
  "message": "Failed to
Hi community,
I have a Connect cluster deployed in a 'cloud-like' environment where an
instance can die at any time but a new instance is automatically re-spawned
immediately after (within the min...). This obviously leads to an eager
rebalance. I am on Kafka 2.1.1 for both client and server.
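(For context: on 2.1.1 every worker failure triggers a full, eager rebalance; incremental cooperative rebalancing only arrived with Kafka 2.3 via KIP-415. A sketch of the distributed-worker settings that govern how quickly the group reacts to a dead worker — the property names are real worker configs, but the values below are illustrative assumptions, not tuned recommendations:)

```properties
# Sketch only: illustrative values for a distributed Connect worker.
# How often the worker heartbeats to the group coordinator.
heartbeat.interval.ms=3000
# The coordinator declares the worker dead after this long without a heartbeat.
session.timeout.ms=10000
# Maximum time workers have to rejoin the group once a rebalance starts.
rebalance.timeout.ms=60000
```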
What
Thanks John... what parameters would affect the latency if a GlobalKTable
is used, and are there any configurations that could be tuned to minimize
the latency of syncing with the input topic?
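(A sketch of the Streams settings that influence how quickly the global store catches up with its input topic — `poll.ms` and the `global.consumer.` prefix are real StreamsConfig mechanisms, but the values here are illustrative assumptions:)

```properties
# Sketch only: illustrative values, not tuned recommendations.
# How long the global stream thread blocks on each poll of the input topic.
poll.ms=100
# The consumer feeding the global store is tuned via the global.consumer. prefix:
# return fetches as soon as any data is available instead of waiting to batch.
global.consumer.fetch.max.wait.ms=100
global.consumer.fetch.min.bytes=1
```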
On Mon, May 4, 2020 at 10:20 PM John Roesler wrote:
> Hello Pushkar,
>
> Yes, that’s correct. The
Here are the startup logs from a deployment where we lost 15 messages in
topic-p:
https://gist.github.com/josebrandao13/81271140e59e28eda7aaa777d2d3b02c
State of the .timeindex files before the deployment:
* Partitions with messages: timestamp mismatch
* Partitions without messages: permission denied
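(In case it helps anyone reproduce this: recent Kafka versions ship a tool that can dump a time index directly. The command below is a sketch; the file path is a made-up example, not one from our deployment:)

```shell
# Sketch: inspect a .timeindex segment with the dump-log tool shipped in bin/.
# The path below is a hypothetical example.
bin/kafka-dump-log.sh --files /var/lib/kafka/data/topic-p-0/00000000000000000000.timeindex
```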
Hello Pushkar,
Yes, that’s correct. The operation you describe is currently not supported. If
you want to keep the structure you described in place, I’d suggest using an
external database for the admin objects. I’ll give another idea below.
With your current architecture, I’m a little
Hi guys,
I'm going to get back to this today; I have mixed feelings about the
volumes being the cause. This volume switching has been around for quite
some time, in a lot of clusters, and we only started noticing this problem
when we updated some of them. Also, this only happens in *a few* of those