Hi,
The job is not explicitly stopped; bringing down the cluster will also bring
down the job. (Which is maybe not the nicest way of doing things, but it works.)
Sources can trigger pipeline termination by returning from their run() method.
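To illustrate that pattern, here is a rough, self-contained sketch. The Collector interface and BoundedSourceSketch class below are simplified stand-ins, not actual Flink API: a SourceFunction-style run() emits a fixed number of records and then returns, which in a real Flink job signals end-of-input so downstream operators drain and the job finishes on its own.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Flink's SourceFunction contract (hypothetical types).
public class BoundedSourceSketch {
    interface Collector { void collect(long value); }

    static volatile boolean running = true;

    // Emits at most maxRecords and then returns. In a real Flink
    // SourceFunction, returning from run() ends the source, so the
    // pipeline terminates without an external cancel.
    static void run(Collector ctx, long maxRecords) {
        long count = 0;
        while (running && count < maxRecords) {
            ctx.collect(count++);
        }
        // Falling out of run() is what triggers pipeline termination.
    }

    // Counterpart to SourceFunction#cancel(): flips the flag so run() exits.
    static void cancel() { running = false; }

    public static void main(String[] args) {
        List<Long> out = new ArrayList<>();
        run(out::add, 5);
        System.out.println(out);  // [0, 1, 2, 3, 4]
    }
}
```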
Best,
Aljoscha
> On 7. Feb 2018, at 21:15, Thomas
Thanks! It would indeed be nice to have this as a framework that makes tests
fun and easy to write ;-)
Looking at SavepointMigrationTestBase, I see that the cluster is eventually
stopped in teardown, but I can't find where the individual job is
terminated after the expected results are in. Also,
There is StatefulJobSavepointMigrationITCase, which executes a proper unbounded
pipeline on a locally started cluster and "listens" for some criteria via
accumulators before cancelling the job and shutting down the cluster. The
communication with the cluster is quite custom here, but I would
Hi Ken,
Thanks! I would expect more folks to run into this and am hence surprised
not to find this in LocalStreamEnvironment. Is there a reason for that?
In the specific case, we have an unbounded source (Kinesis), but for
testing we would like to make it bounded. Hence the earlier question
whether
Hi Thomas,
Normally the streaming job will terminate when the sources are exhausted and
all records have been processed.
I assume you have some unbounded source(s), thus this doesn’t work for your
case.
We’d run into a similar situation with a streaming job that has iterations.
Our solution
Hi,
I'm looking for an example of an integration test that runs a streaming job
and terminates when the expected result becomes available. I could think of
two approaches:
1. Modified version of LocalStreamEnvironment that executes the job
asynchronously and polls for the result, or
2. Source that
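Approach 1 above can be sketched roughly as follows, using plain Java threads as a stand-in for an asynchronous job submission. The PollForResult class and its runAndPoll() helper are hypothetical names, not Flink API; an ExecutorService plays the role of the asynchronously executing environment, and shutting it down plays the role of bringing down the cluster (and the job with it).

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of approach 1: run the "job" asynchronously, poll until the
// expected result shows up or a timeout expires, then tear everything down.
public class PollForResult {
    public static int runAndPoll(Callable<Integer> job, int expected, long timeoutMs)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicReference<Integer> result = new AtomicReference<>();
        pool.submit(() -> { result.set(job.call()); return null; });
        long deadline = System.currentTimeMillis() + timeoutMs;
        try {
            while (System.currentTimeMillis() < deadline) {
                Integer r = result.get();
                if (r != null && r == expected) {
                    return r;  // expected result is in: stop the test here
                }
                Thread.sleep(10);  // poll interval
            }
            throw new TimeoutException("expected result did not arrive in time");
        } finally {
            // Stand-in for stopping the cluster, which also stops the job.
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAndPoll(() -> 42, 42, 5_000));  // 42
    }
}
```

The teardown in the finally block mirrors the behavior described earlier in the thread: the job is never stopped individually, only together with the cluster.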