Hi Tomas,

Triggering a batch DataSet job from a DataStream program for each input
record doesn't sound like a good idea to me.
You would have to make sure that the cluster always has sufficient
resources for these jobs, and you would have to handle their failures yourself.

It would be preferable to do all of the data processing in a single DataStream
job. You mentioned that the challenge is joining the data from the files with a
JDBC database.
I see two ways to do that in a DataStream program (minimal sketches for both
follow below):
- Replicate the JDBC table in a stateful operator. This requires publishing
updates to the database table as a stream that the Flink program can consume.
- Query the JDBC table with an AsyncFunction. This operator executes multiple
calls to an external service concurrently, which improves latency and
throughput, and it ensures that checkpoints and watermarks are handled
correctly.
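
To make the first option more concrete, here is a minimal sketch, not code from
your job: the table's change stream (e.g., captured via Kafka) is connected to
the record stream, and the latest row per key is kept in Flink keyed state, so
the join happens entirely inside the streaming job. Records and table rows are
modeled as (key, value) string tuples for brevity; all names are placeholders.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Joins a record stream with a replicated table: records and table updates
// are both (key, value) pairs; the output is (key, record payload, table row).
public class ReplicatedTableJoin extends
        RichCoFlatMapFunction<Tuple2<String, String>, Tuple2<String, String>,
                Tuple3<String, String, String>> {

    // Latest table row per join key, kept in Flink keyed state.
    private transient ValueState<String> tableRow;

    @Override
    public void open(Configuration parameters) {
        tableRow = getRuntimeContext().getState(
                new ValueStateDescriptor<>("jdbc-table-row", String.class));
    }

    @Override
    public void flatMap1(Tuple2<String, String> record,
            Collector<Tuple3<String, String, String>> out) throws Exception {
        // Record side: join against the replicated row, if one has arrived.
        String row = tableRow.value();
        if (row != null) {
            out.collect(Tuple3.of(record.f0, record.f1, row));
        }
        // else: decide whether to drop, buffer, or emit the record unmatched.
    }

    @Override
    public void flatMap2(Tuple2<String, String> update,
            Collector<Tuple3<String, String, String>> out) throws Exception {
        // Table side: every database update replaces the cached row for its key.
        tableRow.update(update.f1);
    }
}

// Wiring: key both streams by the join key and connect them, e.g.
//   records.keyBy(r -> r.f0)
//       .connect(tableUpdates.keyBy(u -> u.f0))
//       .flatMap(new ReplicatedTableJoin());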
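
For the second option, here is a minimal sketch of a RichAsyncFunction that
looks up the JDBC table. The connection string, table and column names, and
pool size are assumptions; plain JDBC is blocking, so the query runs on a
private executor to keep asyncInvoke() non-blocking. The ResultFuture-based
signature shown here is the one of recent Flink releases; older releases use
an AsyncCollector instead.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Enriches a stream of keys with the matching row from a JDBC table.
public class JdbcLookup extends RichAsyncFunction<String, Tuple2<String, String>> {

    private transient Connection connection;
    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Hypothetical connection string; a real job would use a connection
        // pool or an asynchronous database client instead of one shared connection.
        connection = DriverManager.getConnection(
                "jdbc:postgresql://db-host/mydb", "user", "secret");
        executor = Executors.newFixedThreadPool(10);
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<Tuple2<String, String>> resultFuture) {
        CompletableFuture.supplyAsync(() -> {
            // Blocking JDBC call, executed off the operator's main thread.
            try (PreparedStatement stmt = connection.prepareStatement(
                    "SELECT val FROM lookup_table WHERE id = ?")) {
                stmt.setString(1, key);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("val") : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, executor).thenAccept(value ->
                resultFuture.complete(Collections.singleton(Tuple2.of(key, value))));
    }

    @Override
    public void close() throws Exception {
        if (connection != null) connection.close();
        if (executor != null) executor.shutdown();
    }
}

// Wiring: at most 100 concurrent requests, 5 second timeout per request.
//   DataStream<Tuple2<String, String>> enriched = AsyncDataStream.unorderedWait(
//       keys, new JdbcLookup(), 5, TimeUnit.SECONDS, 100);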

Best, Fabian

2017-10-30 19:11 GMT+01:00 Tomas Mazukna <tomas.mazu...@gmail.com>:

> Trying to figure out the best design in Flink.
> Reading from a kafka topic which has messages with pointers to files to be
> processed.
> I am thinking to somehow kick off a batch job per file... unless there is
> an easy way to get a separate dataset per file.
> I can do almost all of this in the stream: parse the file with a flat map ->
> explode its contents into multiple data elements -> map, etc...
> One of these steps would be to grab another dataset from a JDBC source and
> join it with the stream's contents...
> I think I am mixing the two concepts here and the right approach would be
> to kick off this mini batch job per file,
> where I have the file dataset + JDBC dataset to join with.
>
> So how would I go about kicking off a batch job from a streaming job?
>
> Thanks,
> Tomas
>
