Moving the post-execution logic out of the flow itself is an option in some
cases (a sketch of that approach follows the list below), but it has a few
issues:

- The environment that launches the flow may not have the same permissions
(or even network access!) as the environment the flow executes in, which
can make this tricky.
- It introduces a single point of failure for the post-execution step (e.g.
the process that started the flow dies while the flow is still running).
- It makes it difficult for the post-execution step to rely on any state
generated by the flow.
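
For reference, the approach being weighed here (and suggested below) looks
roughly like this with the Beam Java SDK. A minimal sketch: it assumes
`pipeline` has already been constructed, and writeEtlStatus() is a
hypothetical stand-in for the post-execution step, not a Beam API.

    import org.apache.beam.sdk.PipelineResult;

    // Run the pipeline and block the launching program until it finishes,
    // then act on the terminal state from the launcher itself.
    PipelineResult result = pipeline.run();
    PipelineResult.State state = result.waitUntilFinish();

    if (state == PipelineResult.State.DONE) {
      writeEtlStatus("SUCCESS");   // hypothetical post-execution step
    } else {
      writeEtlStatus("FAILED: " + state);
    }

All three issues above apply to this shape: the launcher needs its own
credentials and network path, it must stay alive for the whole run, and it
only sees what the job exposes externally.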


On Fri, Aug 25, 2017 at 11:46 AM, Chamikara Jayalath <[email protected]>
wrote:

> Can you do this from the program that runs the Beam job, after the job is
> complete (you might have to use a blocking runner or poll for the status
> of the job)?
>
> - Cham
>
> On Fri, Aug 25, 2017 at 8:44 AM Steve Niemitz <[email protected]> wrote:
>
> > I also have a similar use case (but with BigTable) that I felt I had to
> > hack up to make work.  It'd be great to hear whether there is already a
> > way to do something like this, or whether there are plans to add one.
> >
> > On Fri, Aug 25, 2017 at 9:46 AM, Chaim Turkel <[email protected]> wrote:
> >
> > > Hi,
> > >   I have a few pipelines that are ETLs from different systems to
> > > BigQuery.
> > > I would like to write the status of the ETL after all records have
> > > been written to BigQuery.
> > > The problem is that writing to BigQuery is a sink, and you cannot have
> > > any other steps after the sink.
> > > I tried a side output, but it fires with no correlation to the
> > > BigQuery write, so I don't know whether it succeeded or failed.
> > >
> > >
> > > Any ideas?
> > > Chaim
> > >
> >
>
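
For readers finding this thread later: Beam's Java SDK subsequently added a
Wait.on transform that allows exactly this kind of gating inside the
pipeline. A minimal sketch, assuming streaming inserts (getFailedInserts()
is only populated for that write method, and the signals WriteResult
exposes vary by SDK version); statusRows (a PCollection<TableRow> carrying
the status record) and the table names are hypothetical:

    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
    import org.apache.beam.sdk.transforms.Wait;

    // Write the ETL output and keep a handle on the result.
    WriteResult writeResult = rows.apply(
        BigQueryIO.writeTableRows().to("my-project:dataset.etl_table"));

    // Wait.on holds statusRows back until the signal collection's windows
    // are complete, so the status row is only written after the
    // corresponding BigQuery writes have finished.
    statusRows
        .apply(Wait.on(writeResult.getFailedInserts()))
        .apply(BigQueryIO.writeTableRows().to("my-project:dataset.etl_status"));

This keeps the post-write step inside the flow, avoiding the launcher-side
issues raised at the top of the thread.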
