I have read the documentation and various blog posts that say it is
possible to load data into a DataSet and use that data to bootstrap the
state of a streaming application.

The documentation says: "...you can read a batch of data from any store,
preprocess it, and write the result to a savepoint that you use to
bootstrap the state of a streaming application." (source:
https://ci.apache.org/projects/flink/flink-docs-master/dev/libs/state_processor_api.html)
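
To show what I mean, here is my reading of the batch half of that
sentence, adapted from the snippets on that docs page. It is only a
sketch: the Account POJO, the "accounts" uid, the savepoint path, and
the seed record are placeholders of mine, not anything the docs
guarantee.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class WriteBootstrapSavepoint {

    // Placeholder POJO, meant to be shared with the streaming job.
    public static class Account {
        public Integer id;
        public Double amount;
    }

    // Copies each record's amount into keyed ValueState.
    public static class AccountBootstrapper
            extends KeyedStateBootstrapFunction<Integer, Account> {

        private transient ValueState<Double> state;

        @Override
        public void open(Configuration parameters) {
            state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("total", Types.DOUBLE));
        }

        @Override
        public void processElement(Account account, Context ctx) throws Exception {
            state.update(account.amount);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();

        // Any batch source would do (CSV, JDBC, ...); one literal record here.
        Account seed = new Account();
        seed.id = 1;
        seed.amount = 100.0;
        DataSet<Account> accounts = bEnv.fromElements(seed);

        BootstrapTransformation<Account> transformation = OperatorTransformation
            .bootstrapWith(accounts)
            .keyBy(acc -> acc.id)
            .transform(new AccountBootstrapper());

        Savepoint
            .create(new MemoryStateBackend(), 128)    // 128 = max parallelism
            .withOperator("accounts", transformation) // must match .uid() in the streaming job
            .write("file:///tmp/bootstrap-savepoint");

        bEnv.execute("write-bootstrap-savepoint");
    }
}

This batch half I can more or less follow; it is the hand-off from the
written savepoint to the streaming job that I cannot get working.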

Another blog states, "You can create both Batch and Stream environment in a
single job." (source:
https://www.kharekartik.dev/2019/12/14/bootstrap-your-flink-jobs/)
I want to try this approach, but I cannot find any complete, working
examples online.

I have failed on numerous attempts.
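
Concretely, this is the shape of what I have been trying, based on my
(possibly wrong) reading of that blog: both environments created in a
single main(), the batch part writing the savepoint and the streaming
part defining the topology. The socket source, the identity map, and
the "accounts" uid are placeholders tied to my sketch above.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BootstrapThenStream {

    public static void main(String[] args) throws Exception {
        // Batch part first: build the bootstrap transformation and write
        // the savepoint exactly as in my earlier sketch, then run it:
        ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
        // Savepoint.create(new MemoryStateBackend(), 128)
        //     .withOperator("accounts", transformation)
        //     .write("file:///tmp/bootstrap-savepoint");
        // bEnv.execute("write-bootstrap-savepoint");

        // Stream part: defined in the same main(). The stateful operator
        // carries the same uid the savepoint was written with.
        StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        sEnv.socketTextStream("localhost", 9999)  // placeholder source
            .keyBy(line -> line)
            .map(line -> line)                    // placeholder stateful op
            .returns(Types.STRING)                // explicit type hint for the lambda
            .uid("accounts")                      // must match withOperator("accounts", ...)
            .print();
        sEnv.execute("streaming-job");
    }
}

My understanding is that state only actually comes back if the
streaming job is started from the savepoint, e.g. with flink run -s
file:///tmp/bootstrap-savepoint <jar>, and I do not see how the
single-main() version above connects the two, which is what questions
2 and 3 below are about.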

I have a few questions:

1) Is there a working example that demonstrates this feature?
2) How do you launch a batch and a stream environment from a single job?
3) Or does this require two separate jobs?

Can anybody please help?
