cc user@f.a.o

Hi Siva,

I am not aware of a Flink MongoDB connector in Apache Flink, Apache Bahir,
or on flink-packages.org. I assume that you are doing idempotent upserts and
hence do not require a transactional sink to achieve end-to-end exactly-once
results.

To build one yourself, implement
org.apache.flink.streaming.api.functions.sink.SinkFunction (or better, extend
org.apache.flink.streaming.api.functions.sink.RichSinkFunction). Roughly
speaking, you would instantiate the MongoDB client in the "open" method and
write records via that client in the "invoke" method. Usually, such sinks use
some kind of batching to increase write performance.
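
For illustration, here is a rough sketch of such a sink in Scala. It assumes
records arrive as JSON strings that carry an "_id" field to upsert on, and that
the MongoDB Java driver (mongodb-driver-sync) is on the classpath; the class
name MongoUpsertSink and its constructor parameters are made up for the
example:

import com.mongodb.client.{MongoClient, MongoClients, MongoCollection}
import com.mongodb.client.model.{Filters, ReplaceOptions}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
import org.bson.Document

// Hypothetical sink: upserts JSON strings into MongoDB, keyed by "_id".
class MongoUpsertSink(uri: String, database: String, collection: String)
    extends RichSinkFunction[String] {

  @transient private var client: MongoClient = _
  @transient private var coll: MongoCollection[Document] = _

  // Create the MongoDB client once per parallel sink instance.
  override def open(parameters: Configuration): Unit = {
    client = MongoClients.create(uri)
    coll = client.getDatabase(database).getCollection(collection)
  }

  // Idempotent upsert: replace the document with the same "_id", or insert it.
  // A production sink would usually buffer records and flush them in batches
  // (e.g. via bulkWrite) to increase write throughput.
  override def invoke(value: String): Unit = {
    val doc = Document.parse(value)
    coll.replaceOne(
      Filters.`eq`("_id", doc.get("_id")),
      doc,
      new ReplaceOptions().upsert(true))
  }

  override def close(): Unit = {
    if (client != null) client.close()
  }
}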

I suggest you also have a look at the source code of the Elasticsearch or
Cassandra sink.
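
Wiring the sketch above into your job could then look roughly like the
following; the Kafka consumer setup, topic name, and connection strings are
placeholders, and it assumes the flink-connector-kafka dependency is available:

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

object KafkaToMongoJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Placeholder Kafka consumer configuration.
    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092")
    props.setProperty("group.id", "mongo-sink-example")

    val stream = env.addSource(
      new FlinkKafkaConsumer[String]("input-topic", new SimpleStringSchema(), props))

    // Attach the hypothetical MongoDB sink sketched above.
    stream.addSink(new MongoUpsertSink("mongodb://localhost:27017", "mydb", "mycoll"))

    env.execute("Kafka to MongoDB")
  }
}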

Best,

Konstantin

On Sat, Mar 28, 2020 at 1:47 PM Sivapragash Krishnan <
sivapragas...@gmail.com> wrote:

> Hi
>
> I'm working on creating a streaming pipeline which streams data from Kafka
> and stores it in MongoDB using Flink Scala.
>
> I'm able to successfully stream data from Kafka using Flink Scala. I'm not
> finding any support for storing the data in MongoDB; could you please help
> me with a code snippet to store data in MongoDB?
>
> Thanks
> Siva
>


-- 

Konstantin Knauf | Head of Product

+49 160 91394525


