For sink connectors, I believe you can scale up the tasks to match the
partitions on the topic. But I don't believe this is the case for source
connectors; the number of partitions on the topic you're producing to has
nothing to do with the number of connector tasks. It really depends on the
Hello
Confirmed. The partition is the minimum level of granularity, so having more
consumers than partitions of a topic within the same consumer group is
useless; with P partitions, maximum parallelism is reached using P consumers.
Regards,
Sébastien.
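To make that concrete, a minimal sketch (assuming a local broker and a hypothetical topic name) that compares a connector's tasks.max against the topic's partition count using the Kafka Admin API:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class SinkParallelismCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (Admin admin = Admin.create(props)) {
            TopicDescription topic = admin.describeTopics(Collections.singletonList("my-topic"))
                    .all().get().get("my-topic"); // "my-topic" is hypothetical
            int partitions = topic.partitions().size();
            int tasksMax = 8; // hypothetical tasks.max from a sink connector config
            // A sink task is a consumer in the connector's group, so effective
            // parallelism is capped by the partition count:
            int effectiveTasks = Math.min(tasksMax, partitions);
            System.out.printf("partitions=%d, tasks.max=%d, effective tasks=%d%n",
                    partitions, tasksMax, effectiveTasks);
        }
    }
}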
Subject: [EXTERNAL] Regarding Kafka connect task to partition relationship for
both source and sink connectors
Hi everyone,
From my understanding, if a topic has n partitions, we can create up to n
tasks for both the source and sink connectors to achieve the maximum
parallelism. Adding more tasks would not be beneficial, as they would remain
idle and be limited to the number of partitions of the
Hi,
I'm using Kafka Connect, passing data with an Avro schema.
By default I get a schema with millisecond time precision for datetime2
columns.
Do you also support microsecond time precision?
Thanks
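For what it's worth, Avro itself does define a microsecond-precision logical type; whether a given connector emits it depends on the connector, but here is a minimal sketch with the plain Avro Java library showing the two logical types side by side:

import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

public class TimestampSchemas {
    public static void main(String[] args) {
        // Avro's built-in timestamp logical types, both backed by a long:
        Schema millis = LogicalTypes.timestampMillis()
                .addToSchema(Schema.create(Schema.Type.LONG));
        Schema micros = LogicalTypes.timestampMicros()
                .addToSchema(Schema.create(Schema.Type.LONG));
        System.out.println(millis); // {"type":"long","logicalType":"timestamp-millis"}
        System.out.println(micros); // {"type":"long","logicalType":"timestamp-micros"}
    }
}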
Hey Greg,
Thinking more, I do like the idea of a source-side equivalent of the
ErrantRecordReporter interface!
However, I also suspect we may have to reason more carefully about what
users could do with this kind of information in a DLQ topic. Yes, it's an
option to reset the connector (or a
Hey Chris,
That's a cool idea! That can certainly be applied for failures other
than poll(), and could be useful when combined with the Offsets
modification API.
Perhaps failures inside of poll() can be handled by an extra
mechanism, similar to the ErrantRecordReporter, which allows reporting
Hi Greg,
This was my understanding as well--if we can't turn a record into a byte
array on the source side, it's difficult to know exactly what to write to a
DLQ topic.
One idea I've toyed with recently is that we could write the source
partition and offset for the failed record (assuming,
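For contrast, the sink-side mechanism this thread wants a source-side analogue of already exists (KIP-610); a minimal sketch of a sink task using it, with process() standing in for the real delivery logic:

import java.util.Collection;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class DlqAwareSinkTask extends SinkTask {
    @Override
    public void put(Collection<SinkRecord> records) {
        // Null when no DLQ is configured (errors.deadletterqueue.topic.name unset).
        ErrantRecordReporter reporter = context.errantRecordReporter();
        for (SinkRecord record : records) {
            try {
                process(record); // hypothetical per-record delivery logic
            } catch (Exception e) {
                if (reporter != null) {
                    reporter.report(record, e); // record is routed to the DLQ topic
                } else {
                    throw new ConnectException("Failed to process record", e);
                }
            }
        }
    }

    protected abstract void process(SinkRecord record);
}

This works on the sink side precisely because the framework still holds the original consumed bytes; the source-side difficulty described above is that no such serialized form exists yet.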
Hi Yeikel,
Thanks for your question. It certainly isn't clear from the original
KIP-298, the attached discussion, or the follow-up KIP-610 as to why
the situation is asymmetric.
The reason as I understand it is: Source connectors are responsible
for importing data to Kafka. If an error occurs
Hi all,
Sink connectors support Dead Letter Queues [1], but Source connectors don't
seem to.
What is the reason that we decided to do that?
In my data pipeline, I'd like to apply some transformations to the messages
before they reach the sink, but that leaves me vulnerable to failures as I need to
a breaking change for Kafka
Connect in https://github.com/apache/kafka/pull/9669. Before that change,
the SourceTask::stop method [1] would be invoked on a separate thread from
the one that did the actual data processing for the task (polling the task
for records, transforming and converting those
Hello everyone,
Is there any mechanism to force Kafka Connect to ingest at a given rate per
second as opposed to tasks?
I am operating in a shared environment where the ingestion rate needs to be as
low as possible (for example, 5 requests/second as an upper limit), and as far
as I can
Subject: Re: The Plan To Introduce Virtual Threads To Kafka Connect
Hi Boyee,
Thanks for the suggestion, virtual threads look like they may
https://issues.apache.org/jira/browse/KAFKA-14606 .
Thanks!
Greg Harris
Kafka Connect, as a thread-intensive kind of program, can benefit a lot from
the usage of virtual threads.
From JDK 21, released last month, virtual threads are a formal feature of the JDK.
I would like to ask if any plans exist to bring virtual threads into Kafka
Connect.
Thank you.
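For context, this is what the JDK 21 API looks like; a minimal standalone sketch, not anything from Kafka Connect itself:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // One cheap virtual thread per task, instead of a pooled platform thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                final int id = i;
                executor.submit(() -> {
                    Thread.sleep(10); // virtual threads unmount while blocked
                    return id;
                });
            }
        } // close() waits for submitted tasks to finish
    }
}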
        String credentialAuthorization =
                headers.getHeaderString(HttpHeaders.AUTHORIZATION);
        // Copy the incoming Authorization header onto the forwarded request.
        if (credentialAuthorization != null) {
            req.header(HttpHeaders.AUTHORIZATION, credentialAuthorization);
        }
    }
}
This is of course risky and it would be significantly more convenient if this
functionality is
Hi Yeikel,
Neat question! And thanks for the link to the RestClient code; very helpful.
I don't believe there's a way to configure Kafka Connect to add these
headers to forwarded requests right now. You may be able to do some kind of
out-of-band proxy magic to intercept forwarded requests
Hello everyone,
I'm currently running Kafka Connect behind a firewall that mandates the
inclusion of a specific header. This situation becomes particularly challenging
when forwarding requests among multiple workers, as it appears that only the
Authorization header is included in the request
> s to workers with whom it cannot communicate?
This happens via the group rebalance process where each Kafka Connect
worker communicates with the Kafka broker that has been chosen as the group
co-ordinator for the Kafka Connect cluster. The assignment is indeed
computed by the leader Connect worker but it is disseminated to the other
Connect workers via the group coordinator [
as that would render it useless as you mentioned
Thank you for taking the time
Hi Yeikel,
Heartbeats and group coordination in Kafka Connect do occur through Kafka,
but a Kafka Connect cluster where all workers cannot communicate with
each other won't work very well. You'll be able to create / update / delete
connectors by making requests to any workers that can communicate
those REST requests will fail. I'm referring to REST requests like
CREATE / UPDATE / DELETE.
Hope this helps a little.
Thanks,
-Nikhil
Hello everyone, I'm currently designing a new Kafka Connect cluster, and I'm
trying to understand how connectivity functions among workers. In my setup, I
have a single Kafka Connect cluster connected to the same Kafka topics and
Kafka cluster. However, the workers are deployed in geographically
every 100 ms
<https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/source/JdbcSourceTask.java#L427>.
Then I call a DEL on this connector, and the stop is not processed until
the next loop in the `poll()`.
Your initial diagnosis is 100% correct
kafka-connect-jdbc received a patch to improve this
behavior when no data is being emitted:
https://github.com/confluentinc/kafka-connect-jdbc/pull/947 but I'm
not sure if that is relevant to your situation.
Thanks!
Greg
On Mon, Aug 21, 2023 at 6:53 AM Robson Hermes wrote:
> No, it stops the
You have to remove connectors first using the delete API
and then stop the connector.
On Thu, 17 Aug 2023 at 2:51 AM, Robson Hermes wrote:
> Hello
> I'm using kafka connect 7.4.0 to read data from Postgres views and write to
> other Postgres tables, so using JDBC source and sink connectors.
Hello Greg (sorry about the duplicate e-mail, forgot to cc users mailing
list)
Thanks a lot for your detailed reply. I'm using JDBC Source connectors from
kafka-connect-jdbc <https://github.com/confluentinc/kafka-connect-jdbc>.
Indeed the `poll()` implementation is blocked, so it only pro
Hi Robson,
Thank you for the detailed bug report.
I believe the behavior that you're describing is caused by this flaw:
https://issues.apache.org/jira/browse/KAFKA-15090 which is still under
discussion. Since the above flaw was introduced in 3.0, source
connectors need to return from poll()
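A sketch of the pattern implied here (not the kafka-connect-jdbc fix itself): poll() blocks only briefly per call, assuming an in-memory queue fed by some producer thread, so the worker can process a stop request between iterations:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public abstract class CooperativeSourceTask extends SourceTask {
    private final LinkedBlockingQueue<SourceRecord> queue = new LinkedBlockingQueue<>();

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // Wait at most 100 ms instead of blocking indefinitely.
        SourceRecord first = queue.poll(100, TimeUnit.MILLISECONDS);
        if (first == null) {
            return null; // nothing available; the framework calls poll() again
        }
        List<SourceRecord> batch = new ArrayList<>();
        batch.add(first);
        queue.drainTo(batch); // grab whatever else is ready, without blocking
        return batch;
    }
}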
Hello
I'm using kafka connect 7.4.0 to read data from Postgres views and write to
other Postgres tables, so using JDBC source and sink connectors.
All works well, but whenever I stop the source connectors via the REST API:
DELETE http://kafka-connect:8083/connectors/connector_name_here
Subject: Re: Kafka Connect Rest Extension Question
Hello Yang Hyung Wook,
In your post I do not see anything obviously wrong, so you may need to do some
more debugging.
1. https://docs.oracle.com/javase/tutorial/deployment/jar/view.html
2. Do you see either of these errors
https://github.com/apache/kafka/blob/3.5.0/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java#L267-L269
for filesystem-specific problems?
3. This log line
https
https://stackoverflow.com/questions/76797743/how-can-i-solve-connectrestextension-error
There is an issue, described at this link, where the ConnectRestExtension
implementation is not registered. I've done everything the official KIP
documentation says, but can you tell me why it doesn't work?
양형욱
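For anyone hitting the same thing: registration happens via the ServiceLoader, so the plugin JAR must contain a META-INF/services/org.apache.kafka.connect.rest.ConnectRestExtension file naming the implementation class, the JAR must be on plugin.path, and rest.extension.classes must be set on the worker. A minimal sketch (class and filter names are made up):

import java.io.IOException;
import java.util.Map;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import org.apache.kafka.connect.rest.ConnectRestExtension;
import org.apache.kafka.connect.rest.ConnectRestExtensionContext;

public class MyRestExtension implements ConnectRestExtension {

    // Trivial JAX-RS filter, just so there is something to register.
    public static class NoopFilter implements ContainerRequestFilter {
        @Override
        public void filter(ContainerRequestContext requestContext) throws IOException {
            // no-op
        }
    }

    @Override
    public void register(ConnectRestExtensionContext restPluginContext) {
        restPluginContext.configurable().register(new NoopFilter());
    }

    @Override public void configure(Map<String, ?> configs) {}
    @Override public void close() {}
    @Override public String version() { return "1.0"; }
}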
ent and broker
internals has to say! Going to be following this thread.
[1] -
https://github.com/apache/kafka/blob/513e1c641d63c5e15144f9fcdafa1b56c5e5ba09/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ExactlyOnceWorkerSourceTask.java#L357
Cheers,
Chris
On Thu, Jun 8, 2023 at 7:53
Hi,
I'm investigating possibilities of exactly-once semantics for Debezium [1]
Kafka Connect source connectors, which implement change data capture for
various databases. Debezium has two phases, an initial snapshot phase and a
streaming phase. The initial snapshot phase loads existing data from
Hi all,
I'm testing Apache Kafka Connect for a project and I found that the main
process listens on two different ports: the one providing the REST API, 8083
by default, and a different unprivileged port that changes its number on each
restart. For instance, this is a fragment of the output from netstat
Hey Jorge,
I looked into it, and can reproduce the second LISTEN port in a
vanilla Kafka Connect cluster without any connectors running.
Using jstack, I see that there are two threads that appear to be
waiting in the corresponding accept methods:
"RMI TCP Accept-0" #15 daemon prio=5
Hello,
can someone please give me a hint on how to execute two lines of code upon
Kafka Connect startup, like:
final JaegerTracer tracer = Configuration.fromEnv().getTracer();
GlobalTracer.register(tracer);
I implemented it using a custom (fake) connector, but there is much overhead,
because you
    }
    @Override
    public ConfigData get(String path, Set<String> keys) {
        return null;
    }
    @Override
    public void close() {}
}
And setting these environment variables in Kafka Connect:
- CONNECT_CONFIG_PROVIDERS=tracing
- CONNECT_CONFIG_PROVIDERS_TRACING_CLASS=org.example.TracingConfigProvider
Best regards,
Jan
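Putting the pieces together, a minimal sketch of the trick being described: Kafka Connect instantiates every configured ConfigProvider at worker startup, so its constructor can serve as a startup hook (the tracer calls are left as comments since they depend on the Jaeger client):

import java.util.Map;
import java.util.Set;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class TracingConfigProvider implements ConfigProvider {

    public TracingConfigProvider() {
        // Startup code would go here, e.g.:
        // GlobalTracer.register(Configuration.fromEnv().getTracer());
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public ConfigData get(String path) {
        return new ConfigData(Map.of()); // resolves nothing; exists only for the side effect
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        return new ConfigData(Map.of());
    }

    @Override
    public void close() {}
}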
> the signature.
> Looking forward to your reply.
> Thank you,
> Xiaoxia
Hi Nitty,
Sorry, I should have clarified. The reason I'm thinking about shutdown here
is that, when exactly-once support is enabled on a Kafka Connect cluster
and a new set of task configurations is generated for a connector, the
Connect framework makes an effort to shut down all the old task
2023-03-12 11:32:45,224 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
2023-03-12 11:32:45,225 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Task threw an
uncaught and unrecoverable exception. Task is being killed and will not
recover until manually restarted
(org.apache.kafka.connect.runtime.WorkerTask)
[task-thread-json-sftp-source-connector-0]
One more scenario: when I call the commit I saw the below
connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
pendingState=None, txnLastUpdateTimestamp=1678620463834)
Then, before changing the state to Abort, I dropped the next file and I
don't see any issues. The previous transaction as well as the current
transaction are committed.
Thank you for your support.
Thanks,
Nitty
Hi Nitty,
> I called commitTransaction when I reach the first error record, but
> commit is not happening for me. Kafka connect tries to abort the
> transaction automatically
This is really interesting--are you certain that your task never invoked
TransactionContext::abortTransaction in this c
Hi Chris,
We have a use case to commit previous successful records and stop the
processing of the current file and move on with the next file. To achieve
that I called commitTransaction when I reach the first error record, but
the commit is not happening for me. Kafka Connect tries to abort
CompleteAbort.
So for my next transaction I am getting InvalidProducerEpochException and
then the task stopped after that. I tried calling the abort after sending the
last record to the topic, then the transacti
wrong here.
Please advise.
Thanks,
Nitty
On Tue 7 Mar 2023 at 2:21 p.m., Chris Egerton wrote:
> Hi Nitty,
> We've recently added some do
documentation/#connect_exactlyoncesourceconnectors .
To quote a relevant passage from those docs:
> In order for a source connector to take advantage of this support, it
> must be able to provide meaningful source offsets for each record that it
> emits, and resume consumption from the external system at the exact
> position corresponding to any of those offsets without dropping or
> duplicating messages.
So, as long as your source connector is able to use the Kafka Connect
framework's offsets API correctly, it shouldn't be necessary to make any
other code changes to the connector.
To enable exactly-once support for source connectors on your Connect
cluster, see
ine its own transaction boundaries. In this case, it sounds
like that may be what you want; I just want to make sure to call out that in
either case, you should not be directly instantiating a producer in your
connector code, but let the Kafka Connect runtime do that for you, and just
worry about retu
to call commit
in some cases. Is it a valid use case in terms of Kafka Connect?
Another question: should I use a transactional producer instead of
creating an object of TransactionContext? Below is the connector
configuration I am using.
exactly.once.support: "required"
transactio
Hi Team,
I am trying to implement exactly-once behavior in our source connector. Is
there any sample source connector implementation available to have a look
at?
Regards,
Nitty
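Not aware of a canonical sample, but for connector-defined transaction boundaries the task-side API is small; a minimal sketch (readNextBatch() is hypothetical), assuming the connector is configured with transaction.boundary=connector:

import java.util.List;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;
import org.apache.kafka.connect.source.TransactionContext;

public abstract class FileBatchSourceTask extends SourceTask {

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // Non-null only when the connector defines its own transaction boundaries.
        TransactionContext txn = context.transactionContext();
        List<SourceRecord> batch = readNextBatch(); // hypothetical file-reading logic
        if (txn != null && !batch.isEmpty()) {
            // Commit the open transaction once the last record of this batch
            // has been dispatched; abortTransaction(...) is the counterpart.
            txn.commitTransaction(batch.get(batch.size() - 1));
        }
        return batch;
    }

    protected abstract List<SourceRecord> readNextBatch();
}

Note the producer itself is created and managed by the runtime, as mentioned above; the task only signals boundaries through TransactionContext.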
Hi All,
I am working on a Debezium POC.
We have a ZooKeeper, Kafka broker and Kafka Connect service.
As per the logs the Debezium connector is working fine, but Kafka topics
are not created automatically (auto topic creation is enabled), except a few
default topics, e.g. app_webapp.persona
Ah, it definitely seems like KIP-710 will address the issue we've been bitten
by most. We'll eagerly await the kafka-3.5.0 release and then see if enabling
'dedicated.mode.enable.internal.rest' is possible with Strimzi.
Thanks for the help and patience! :-)
w this can be implemented within Kafka
Connect itself so that it works as expected for all users?
I have not looked into solutions in enough depth to recommend one. If I
had, the PR would be open :)
> We tried adding tasks to trigger a propagation of the task configs
(increased from 36 to 40 t
d9b54/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1569-L1572
Only restarting the workers seemed to unblock the propagation of the new task
config for the new mirrored topic.
Hopefully this can help us narrow things down a bit...
In th
be implemented within Kafka Connect
itself so that it works as expected for all users?
Thanks!
The DistributedHerder is comparing th
to retry the operation.
This is a recursive loop, but isn't obvious because it's inside a callback.
This is the condition which is causing the issue:
https://github.com/apache/kafka/blob/6e2b86597d9cd7c8b2019cffb895522deb63c93a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1918-L1931
Another thought... if an API exists to list all connectors in such a state,
then at least some monitoring/alerting could be put in place, right?
So I've been looking into the codebase to familiarize myself with it. I'm
operating on the assumption that the connectors in question get stuck in an
inconsistent state which causes them to prune the new task configs from those
which are "broadcast" to the workers. I see on
Severity: important
Description:
A possible security vulnerability has been identified in Apache Kafka
Connect. This requires access to a Kafka Connect worker,
and the ability to create/modify connectors on it with an arbitrary
Kafka client SASL JAAS config and a SASL-based security protocol
It seems we're not the first to notice that the issue isn't limited to
connectors who selectively propagate properties to the task configs. FWIW, the
kafka-connect-s3 connector also does not seem to prune any configs from the
tasks:
https://github.com/confluentinc/kafka-connect-storage-clou
this issue with a number of 3rd-party connectors not
provided as part of the Kafka project as well, e.g.:
- Confluent's kafka-connect-s3 connector
(https://github.com/confluentinc/kafka-connect-storage-cloud)
- Aerospike's connector:
(https://docs.aerospike.com/connect/kafka/to-asdb/from-kafka-to-asd
rs at a higher level, when a worker is deciding whether to
write new task configs at all.
The relevant code is here:
https://github.com/apache/kafka/blob/6e2b86597d9cd7c8b2019cffb895522deb63c93a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1918-L1931
In that snippet, new task configs generated by the connector are only
written to the config topic if they differ from the current contents of the
config topic. And this comparison is done o
into the Kafka Connect codebase to better understand
how config.storage.topic is consumed.
In the interest of brevity I won't repeat that entire thread of discussion here.
However, I was wondering if anyone knows whether the JavaDoc suggestion on
ClusterConfigState.inconsistentConnectors() is actually
: column "created_at" is of type timestamp without time zone but expression is
of type bigint
Hint: You will need to rewrite or cast the expression.
Position: 52
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresq
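That error usually means the record still carries a raw epoch value while the target column is a SQL timestamp. Assuming that's the case here, one common remedy is the TimestampConverter SMT; a sketch of what it does to a record:

import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.TimestampConverter;

public class TimestampConverterDemo {
    public static void main(String[] args) {
        // created_at as raw epoch-millis (int64), which a JDBC sink writes as bigint.
        Schema valueSchema = SchemaBuilder.struct()
                .field("created_at", Schema.INT64_SCHEMA)
                .build();
        Struct value = new Struct(valueSchema).put("created_at", 1678620463834L);
        SinkRecord record = new SinkRecord("t", 0, null, null, valueSchema, value, 0);

        // Equivalent to configuring on the sink connector:
        //   "transforms": "ts",
        //   "transforms.ts.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
        //   "transforms.ts.field": "created_at",
        //   "transforms.ts.target.type": "Timestamp"
        TimestampConverter.Value<SinkRecord> smt = new TimestampConverter.Value<>();
        smt.configure(Map.of("field", "created_at", "target.type", "Timestamp"));

        SinkRecord converted = smt.apply(record);
        // created_at is now a Connect Timestamp (java.util.Date), which JDBC
        // sinks map to a SQL timestamp column instead of bigint.
        System.out.println(((Struct) converted.value()).get("created_at"));
    }
}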
Hi,
I am trying to set up active <> active MM2 via a Kafka Connect distributed
cluster. It seems not possible because of limitations like the
*bootstrap.servers* property.
And also as per this KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP382:MirrorMa
Hi,
What version of Kafka Connect are you running? This sounds like a bug that
was fixed a few releases ago.
Cheers,
Chris
On Wed, Oct 12, 2022, 21:27 Hemanth Savasere
wrote:
> We have stumbled upon an issue on a running cluster with multiple
> source/sink connectors:
> 1
to be stuck forever, which in turn made the start method of the connector
hang forever.
3. After some time, the entire Kafka Connect cluster was unavailable and
the REST API was not responding giving {"error_code":500,"message":"Request
timed out"} for most
Subject: Apache Kafka Connect
Hi All,
I have a scenario where I want to send data from Elasticsearch to MongoDB
through Kafka, and while researching I came across Kafka Connect.
Through Kafka Connect, is it possible to have Elasticsearch as a source
connector
Hi Namita,
For moving data from Elasticsearch to Kafka you need an Elasticsearch source
connector. I believe this is not an officially supported connector; you may
have to rely on a community-developed connector, where you may not get
instant support.
https://github.com/DarioBalinzo/kafka-connect-elasticsearch