https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
On Fri, May 28, 2021, 08:04, Ran Lupovich wrote:
> While your DB consumer is running you get the access to the partition
> ${partition} @ offset
While your DB consumer is running you get the access to the partition
${partition} @ offset ${offset}
https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
When setting your second consumer for real time, just set it to start from that
point.
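The seek() call from the javadoc linked above is how the real-time consumer can be started from the recorded offset. A minimal Java sketch, assuming a hypothetical topic "my-topic", partition 0, broker at localhost:9092, and a savedOffset value recorded by the batch consumer:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "realtime-consumer");       // hypothetical group id
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        long savedOffset = 42L; // the ${offset} recorded by the hourly DB consumer

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manually assign the partition, then jump to the recorded offset;
            // the next poll() starts reading from exactly that position.
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, savedOffset);
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofSeconds(1));
            records.forEach(r ->
                System.out.printf("%d: %s%n", r.offset(), r.value()));
        }
    }
}
```

Note that seek() only takes effect for partitions assigned via assign() (or after a rebalance completes when using subscribe()).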
On Fri, May 28
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=https://localhost:9200
type.name=kafka-connect
https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
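Per the security page linked above, the connector does take truststore settings under the `elastic.https.ssl.*` prefix. A sketch of the sink config with those properties added (the paths, usernames, and passwords are placeholders, not values from this thread):

```properties
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=https://localhost:9200
type.name=kafka-connect
# Credentials and truststore for the https endpoint (placeholder values)
connection.username=elastic
connection.password=changeme
elastic.security.protocol=SSL
elastic.https.ssl.truststore.location=/path/to/truststore.jks
elastic.https.ssl.truststore.password=changeme
```

The truststore JKS must contain the certificate of the CA that signed the Elasticsearch node's certificate, as described later in this thread.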
On Fri, May 28, 2021, 07:00, sunil chaudhari <
sunilmchaudhar...@gmail.com>:
> The configurations doesnt have provision for the truststore. Thats my
> concern.
>
>
> On Thu, 27 May 2021 at 10:47 PM, Ran
The configuration doesn't have a provision for the truststore. That's my
concern.
On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich wrote:
> For https connections you need to set truststore configuration parameters,
> giving it a JKS with a password; the JKS needs to contain the certificate of
> the CA
Done, added you to Confluence and Jira so you should be able to self-assign
tickets and create KIPs if necessary.
Welcome to Kafka :)
On Thu, May 27, 2021 at 4:28 PM Norbert Wojciechowski <
wojciechowski.norbert.git...@gmail.com> wrote:
> Hello,
>
> Can I please be assigned to Kafka contributor
Hello,
Can I please be assigned to Kafka contributor list on Confluence/Jira, so I
can start contributing to Kafka and be able to work on issues?
My Jira username is: erzbnif
Thanks,
Norbert
I'm trying to figure out how to programmatically read a consumer group's
offsets for a topic.
What I'm trying to do is read the offsets of our DB consumers that run once an
hour and batch-load all new messages. I then would have another consumer that
monitors the offsets that have been consumed and
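One way to read a group's committed offsets programmatically is the AdminClient's listConsumerGroupOffsets() call. A sketch, assuming a broker at localhost:9092 and a hypothetical group id "db-batch-consumer" for the hourly DB loader:

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsets {
    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch the committed offset for every partition the group has
            // committed to; "db-batch-consumer" is a hypothetical group id.
            Map<TopicPartition, OffsetAndMetadata> offsets = admin
                .listConsumerGroupOffsets("db-batch-consumer")
                .partitionsToOffsetAndMetadata()
                .get();
            offsets.forEach((tp, om) ->
                System.out.printf("%s-%d @ %d%n",
                    tp.topic(), tp.partition(), om.offset()));
        }
    }
}
```

The monitoring consumer could compare these committed offsets against the partitions' end offsets (consumer.endOffsets()) to compute lag.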
The main purpose of the /tmp directory is to temporarily store files when
installing an OS or software. If any files in the /tmp directory have
not been accessed for a while, they will be automatically deleted from
the system.
On Thu, May 27, 2021, 19:04, Ran Lupovich wrote:
For https connections you need to set truststore configuration parameters,
giving it a JKS with a password; the JKS needs to contain the certificate of
the CA that is signing your certificates.
On Thu, May 27, 2021, 19:55, sunil chaudhari <
sunilmchaudhar...@gmail.com>:
> Hi Ran,
> That
Hi Ran,
That problem is solved already.
If you read the complete thread, you will see that the last problem is about
the https connection.
On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich wrote:
> Try setting es.port = "9200" without quotes?
>
> On Thu, May 27, 2021, 04:21, sunil chaudhari <
>
Seems your log dir is sending your data to the /tmp folder. If I am not
mistaken, this directory automatically removes files from itself, causing
Kafka's internal log-deletion procedure to fail and shut down the broker on a
file-not-found error.
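The quickstart configs that ship with Kafka point log.dirs at /tmp/kafka-logs, which tmp cleaners (tmpwatch, systemd-tmpfiles) can prune underneath the broker. A sketch of the relevant server.properties change; the target path is an assumption, any persistent directory owned by the Kafka user works:

```properties
# server.properties: keep Kafka data out of /tmp so OS tmp-cleanup cannot
# delete segment files and break the broker's own log-retention deletion.
log.dirs=/var/lib/kafka/data
```

After changing log.dirs, the broker treats the new directory as fresh storage; existing data must be moved there before restarting if it is to be kept.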
On Thu, May 27, 2021, 17:52, Neeraj Gulia <
Hi team,
Our Kafka goes down almost once or twice a month due to log file
deletion failure.
There is a single-node Kafka broker running in our system, and it goes down
every time it tries to delete the log files as cleanup and fails.
Sharing the Error Logs, we need a robust solution for
Try setting es.port = "9200" without quotes?
On Thu, May 27, 2021, 04:21, sunil chaudhari <
sunilmchaudhar...@gmail.com>:
> Hello team,
> Can anyone help me with this issue?
>
>
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
>
>
> Regards,
> Sunil.
>
Hi,
I am trying to understand a few things:
In a normal consume-process-produce topology, the consumer polls records,
processes each one, and then hands them to a producer to produce on the
destination topic. In this case,
is the 'produce' a synchronous call, i.e. does it happen in the same consumer
thread, or