Pretty much anything you can write to from a Hadoop MapReduce program can be a Flink destination. Just plug in the OutputFormat and go.
Re: output semantics, your mileage may vary. Flink should do fine for at-least-once.

On Friday, March 11, 2016, Josh <jof...@gmail.com> wrote:
> Hi all,
>
> I want to use an external data store (DynamoDB) as a sink with Flink. It
> looks like there's no connector for Dynamo at the moment, so I have two
> questions:
>
> 1. Is it easy to write my own sink for Flink and are there any docs around
> how to do this?
> 2. If I do this, will I still be able to have Flink's processing
> guarantees? I.e. can I be sure that every tuple has contributed to the
> DynamoDB state either at-least-once or exactly-once?
>
> Thanks for any advice,
> Josh
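For the "write my own sink" question: a custom sink is typically a class extending Flink's RichSinkFunction, where open() creates the client, invoke() handles each record, and close() flushes. Below is a minimal, self-contained sketch of just the batching core such a sink might use; the class name, the BATCH_SIZE choice, and the pluggable writer are illustrative assumptions, not Flink or AWS API — a real implementation would call the DynamoDB BatchWriteItem API (which caps batches at 25 items) from inside a RichSinkFunction.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the buffering core a DynamoDB sink might use. In a real Flink
// job this would live inside a class extending RichSinkFunction<T>:
// open() creates the AWS client, invoke() calls add(), close() calls flush().
// The actual write is a pluggable Consumer here so the sketch stays
// self-contained; a real sink would issue a DynamoDB BatchWriteItem request.
public class BufferingSink {
    static final int BATCH_SIZE = 25; // BatchWriteItem's per-request limit

    private final List<String> buffer = new ArrayList<>();
    private final Consumer<List<String>> writer;

    public BufferingSink(Consumer<List<String>> writer) {
        this.writer = writer;
    }

    // Would be called from SinkFunction.invoke() for each record.
    public void add(String record) {
        buffer.add(record);
        if (buffer.size() >= BATCH_SIZE) {
            flush();
        }
    }

    // Flushing the remainder in close() (and on checkpoint, if the sink
    // also implements CheckpointedFunction) is what yields at-least-once:
    // after a failure, records since the last checkpoint are replayed and
    // written again, so duplicates are possible but no record is lost.
    public void flush() {
        if (!buffer.isEmpty()) {
            writer.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

Because replays can rewrite the same record, idempotent writes (e.g. keying items by a deterministic primary key so a duplicate put overwrites rather than duplicates) are what get you from at-least-once toward effectively-once with DynamoDB.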