Re: [VOTE] Release version 0.1.0-incubating

2016-06-08 Thread Jean-Baptiste Onofré

+1 (binding)

- all files have "incubating" in their name
- signatures check out (and the KEYS file is present)
- DISCLAIMER exists
- LICENSE and NOTICE are good
- no unexpected binaries in the source
- all ASF-licensed files have ASF headers
- source distribution is available and its content is good
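For anyone repeating the signature and checksum checks above, here is a minimal sketch. The gpg steps are commented out because they need the downloaded artifact, its .asc file, the project KEYS file, and network access; the artifact name is taken from link [1] in the vote email below, and the checksum step is demonstrated on a stand-in file.

```shell
# Hedged sketch of the release-verification checks; not an official script.
ARTIFACT=beam-parent-0.1.0-incubating-source-release.zip

# Signature check (requires the real artifact and the project's KEYS file):
#   gpg --import KEYS
#   gpg --verify "$ARTIFACT.asc" "$ARTIFACT"

# Checksum check, demonstrated here on a stand-in file:
echo "stand-in release artifact" > "$ARTIFACT"
sha512sum "$ARTIFACT" > "$ARTIFACT.sha512"
sha512sum -c "$ARTIFACT.sha512"
```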

Improvements for next release:
- Since the source zip distribution is not well named, and is not useful on 
Maven, we skip the Maven deploy for it
- Related to the previous point, we can stage the source distribution on 
dev dist.apache.org (https://dist.apache.org/repos/dist/dev/beam)


Next steps:
- drop the RC1 & RC2 staging repositories on repository.apache.org
- forward the vote to the IPMC once this one has passed
- when it has passed on general@i.a.o:
-- promote the repository on repository.apache.org
-- upload the source distribution and signatures to the release 
dist.apache.org area, with the apache-beam... renaming
-- announce the release on the mailing lists
-- announce the release on the website
- provide a release guide (Davor & JB)
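The steps above map roughly to the following command sketch. This is a hedged outline only: the Nexus drop/close/promote steps are normally done through the repository.apache.org web UI, and the dist paths and file names here are illustrative assumptions, not the documented Beam process.

```shell
# Tag and stage the Maven artifacts (run from the release branch):
#   mvn release:prepare release:perform

# Dropping the RC1/RC2 staging repos and promoting the final one is done
# in the Nexus UI at https://repository.apache.org (Drop / Release).

# Upload the source distribution and signatures to the release area
# (illustrative paths and names):
#   svn checkout https://dist.apache.org/repos/dist/release/incubator/beam beam-dist
#   cp apache-beam-*-source-release.zip* beam-dist/
#   svn add beam-dist/* && svn commit -m "Apache Beam 0.1.0-incubating" beam-dist
```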

Regards
JB

On 06/09/2016 01:20 AM, Davor Bonaci wrote:

Hi everyone,
Here's the first vote for the first release of Apache Beam -- version
0.1.0-incubating!

As a reminder, we aren't looking for any specific new functionality, but
would like to release the existing code, get something to our users' hands,
and test the processes. Previous discussions and iterations on this release
have been archived on the dev@ mailing list.

The complete staging area is available for your review, which includes:
* the official Apache source release to be deployed to dist.apache.org [1],
and
* all artifacts to be deployed to the Maven Central Repository [2].

This corresponds to the tag "v0.1.0-incubating-RC3" in source control [3].

Please vote as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

For those of us enjoying our first voting experience -- the release
checklist is here [4]. This is a "package release"-type of the Apache
voting process [5]. As customary, the vote will be open for 72 hours. It is
adopted by majority approval with at least 3 PPMC affirmative votes. If
approved, the proposal will be presented to the Apache Incubator for their
review.

Thanks,
Davor

[1]
https://repository.apache.org/content/repositories/orgapachebeam-1002/org/apache/beam/beam-parent/0.1.0-incubating/beam-parent-0.1.0-incubating-source-release.zip
[2] https://repository.apache.org/content/repositories/orgapachebeam-1002/
[3] https://github.com/apache/incubator-beam/tree/v0.1.0-incubating-RC3
[4] http://incubator.apache.org/guides/releasemanagement.html#check-list
[5] http://www.apache.org/foundation/voting.html



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


[VOTE] Release version 0.1.0-incubating

2016-06-08 Thread Davor Bonaci
Hi everyone,
Here's the first vote for the first release of Apache Beam -- version
0.1.0-incubating!

As a reminder, we aren't looking for any specific new functionality, but
would like to release the existing code, get something to our users' hands,
and test the processes. Previous discussions and iterations on this release
have been archived on the dev@ mailing list.

The complete staging area is available for your review, which includes:
* the official Apache source release to be deployed to dist.apache.org [1],
and
* all artifacts to be deployed to the Maven Central Repository [2].

This corresponds to the tag "v0.1.0-incubating-RC3" in source control [3].

Please vote as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

For those of us enjoying our first voting experience -- the release
checklist is here [4]. This is a "package release"-type of the Apache
voting process [5]. As customary, the vote will be open for 72 hours. It is
adopted by majority approval with at least 3 PPMC affirmative votes. If
approved, the proposal will be presented to the Apache Incubator for their
review.

Thanks,
Davor

[1]
https://repository.apache.org/content/repositories/orgapachebeam-1002/org/apache/beam/beam-parent/0.1.0-incubating/beam-parent-0.1.0-incubating-source-release.zip
[2] https://repository.apache.org/content/repositories/orgapachebeam-1002/
[3] https://github.com/apache/incubator-beam/tree/v0.1.0-incubating-RC3
[4] http://incubator.apache.org/guides/releasemanagement.html#check-list
[5] http://www.apache.org/foundation/voting.html


Re: 0.1.0-incubating release

2016-06-08 Thread Davor Bonaci
The third release candidate is now available for everyone's review [1],
which should be incorporating all feedback so far.

Please comment if there's additional feedback, as we are about to start the
voting process.

[1] https://repository.apache.org/content/repositories/orgapachebeam-1002

On Wed, Jun 8, 2016 at 12:10 PM, P. Taylor Goetz  wrote:

> Thanks for the clarification JB. In the projects I’ve been involved with,
> I’ve not seen that practice.
>
> As long as the resulting release ends up on dist.a.o I don’t think it’s a
> problem.
>
> -Taylor
>
>
> > On Jun 8, 2016, at 12:49 AM, Jean-Baptiste Onofré 
> wrote:
> >
> > Hi Taylor,
> >
> > Just to be clear, in most other projects, we stage the distributions on
> repository. We upload the distro and signatures to dist.apache.org only
> when the vote passed.
> >
> > Basically, the release process I talked with Davor (and that I will
> document) is:
> > - Tag and stage using mvn release:prepare release:perform
> > - Close repo
> > - Start vote
> > - If passed, forward vote to incubator
> > - If passed, close repo
> > - Upload distro to dist
> > - Announce the release (mailing lists, website)
> >
> > It's based on what I do in Karaf, ServiceMix, etc.
> >
> > Regards
> > JB
> >
> > On 06/08/2016 02:39 AM, P. Taylor Goetz wrote:
> >> Out of curiosity, is there a reason for distributing the release on
> repository.a.o vs. dist.a.o?
> >>
> >> In my experience repository.a.o has traditionally been used for maven
> artifacts, and dist.a.o has been for release artifacts (source archives and
> convenience binaries).
> >>
> >> I'd be happy to help with documenting the process.
> >>
> >> I ask because this might come up during an IPMC release vote.
> >>
> >> -Taylor
> >>
> >>> On Jun 1, 2016, at 9:46 PM, Davor Bonaci 
> wrote:
> >>>
> >>> Hi everyone!
> >>> We've started the release process for our first release,
> 0.1.0-incubating.
> >>>
> >>> To recap previous discussions, we don't have particular functional
> goals
> >>> for this release. Instead, we'd like to make available what's
> currently in
> >>> the repository, as well as work through the release process.
> >>>
> >>> With this in mind, we've:
> >>> * branched off the release branch [1] at master's commit 8485272,
> >>> * updated master to prepare for the second release, 0.2.0-incubating,
> >>> * built the first release candidate, RC1, and deployed it to a staging
> >>> repository [2].
> >>>
> >>> We are not ready to start a vote just yet -- we've already identified
> a few
> >>> issues worth fixing. That said, I'd like to invite everybody to take a
> peek
> >>> and comment. I'm hoping we can address as many issues as possible
> before we
> >>> start the voting process.
> >>>
> >>> Please let us know if you see any issues.
> >>>
> >>> Thanks,
> >>> Davor
> >>>
> >>> [1]
> https://github.com/apache/incubator-beam/tree/release-0.1.0-incubating
> >>> [2]
> https://repository.apache.org/content/repositories/orgapachebeam-1000/
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>
>


Re: 0.1.0-incubating release

2016-06-08 Thread P. Taylor Goetz
Thanks for the clarification JB. In the projects I’ve been involved with, I’ve 
not seen that practice.

As long as the resulting release ends up on dist.a.o I don’t think it’s a 
problem.

-Taylor


> On Jun 8, 2016, at 12:49 AM, Jean-Baptiste Onofré  wrote:
> 
> Hi Taylor,
> 
> Just to be clear, in most other projects, we stage the distributions on 
> repository. We upload the distro and signatures to dist.apache.org only when 
> the vote passed.
> 
> Basically, the release process I talked with Davor (and that I will document) 
> is:
> - Tag and stage using mvn release:prepare release:perform
> - Close repo
> - Start vote
> - If passed, forward vote to incubator
> - If passed, close repo
> - Upload distro to dist
> - Announce the release (mailing lists, website)
> 
> It's based on what I do in Karaf, ServiceMix, etc.
> 
> Regards
> JB
> 
> On 06/08/2016 02:39 AM, P. Taylor Goetz wrote:
>> Out of curiosity, is there a reason for distributing the release on 
>> repository.a.o vs. dist.a.o?
>> 
>> In my experience repository.a.o has traditionally been used for maven 
>> artifacts, and dist.a.o has been for release artifacts (source archives and 
>> convenience binaries).
>> 
>> I'd be happy to help with documenting the process.
>> 
>> I ask because this might come up during an IPMC release vote.
>> 
>> -Taylor
>> 
>>> On Jun 1, 2016, at 9:46 PM, Davor Bonaci  wrote:
>>> 
>>> Hi everyone!
>>> We've started the release process for our first release, 0.1.0-incubating.
>>> 
>>> To recap previous discussions, we don't have particular functional goals
>>> for this release. Instead, we'd like to make available what's currently in
>>> the repository, as well as work through the release process.
>>> 
>>> With this in mind, we've:
>>> * branched off the release branch [1] at master's commit 8485272,
>>> * updated master to prepare for the second release, 0.2.0-incubating,
>>> * built the first release candidate, RC1, and deployed it to a staging
>>> repository [2].
>>> 
>>> We are not ready to start a vote just yet -- we've already identified a few
>>> issues worth fixing. That said, I'd like to invite everybody to take a peek
>>> and comment. I'm hoping we can address as many issues as possible before we
>>> start the voting process.
>>> 
>>> Please let us know if you see any issues.
>>> 
>>> Thanks,
>>> Davor
>>> 
>>> [1] https://github.com/apache/incubator-beam/tree/release-0.1.0-incubating
>>> [2] https://repository.apache.org/content/repositories/orgapachebeam-1000/
> 
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com





Re: 0.1.0-incubating release

2016-06-08 Thread Amit Sela
To Davor, JB, and anyone else helping with the release: thanks! This looks
great.

On Wed, Jun 8, 2016 at 9:11 PM Amit Sela  wrote:

> Regarding Dan's questions:
> 1. I'm not sure - it is built with spark-*_2.10 but I honestly don't know
> if this matters for the runner itself, it could be nice to have in order to
> be more informative. In addition, this will change with Spark 2.0 to Scala
> 2.11 AFAIK.
> 2. This is to allow running out-of-the-box examples I guess. The Flink
> runner just tells you how to do it on your own here:
> https://github.com/apache/incubator-beam/tree/master/runners/flink
> Would you say this is a better approach ?
>
> In any case, packaging is necessary to run on cluster and the shading
> rules are there for Guava - Beam/Hadoop..
>
> On Wed, Jun 8, 2016 at 12:14 PM Maximilian Michels  wrote:
>
>> I like the compromise on the Maven naming scheme. Thanks for
>> incorporating all the feedback!
>>
>> On Wed, Jun 8, 2016 at 6:49 AM, Jean-Baptiste Onofré 
>> wrote:
>> > Hi Taylor,
>> >
>> > Just to be clear, in most other projects, we stage the distributions on
>> > repository. We upload the distro and signatures to dist.apache.org
>> only when
>> > the vote passed.
>> >
>> > Basically, the release process I talked with Davor (and that I will
>> > document) is:
>> > - Tag and stage using mvn release:prepare release:perform
>> > - Close repo
>> > - Start vote
>> > - If passed, forward vote to incubator
>> > - If passed, close repo
>> > - Upload distro to dist
>> > - Announce the release (mailing lists, website)
>> >
>> > It's based on what I do in Karaf, ServiceMix, etc.
>> >
>> > Regards
>> > JB
>> >
>> >
>> > On 06/08/2016 02:39 AM, P. Taylor Goetz wrote:
>> >>
>> >> Out of curiosity, is there a reason for distributing the release on
>> >> repository.a.o vs. dist.a.o?
>> >>
>> >> In my experience repository.a.o has traditionally been used for maven
>> >> artifacts, and dist.a.o has been for release artifacts (source
>> archives and
>> >> convenience binaries).
>> >>
>> >> I'd be happy to help with documenting the process.
>> >>
>> >> I ask because this might come up during an IPMC release vote.
>> >>
>> >> -Taylor
>> >>
>> >>> On Jun 1, 2016, at 9:46 PM, Davor Bonaci 
>> >>> wrote:
>> >>>
>> >>> Hi everyone!
>> >>> We've started the release process for our first release,
>> >>> 0.1.0-incubating.
>> >>>
>> >>> To recap previous discussions, we don't have particular functional
>> goals
>> >>> for this release. Instead, we'd like to make available what's
>> currently
>> >>> in
>> >>> the repository, as well as work through the release process.
>> >>>
>> >>> With this in mind, we've:
>> >>> * branched off the release branch [1] at master's commit 8485272,
>> >>> * updated master to prepare for the second release, 0.2.0-incubating,
>> >>> * built the first release candidate, RC1, and deployed it to a staging
>> >>> repository [2].
>> >>>
>> >>> We are not ready to start a vote just yet -- we've already identified
>> a
>> >>> few
>> >>> issues worth fixing. That said, I'd like to invite everybody to take a
>> >>> peek
>> >>> and comment. I'm hoping we can address as many issues as possible
>> before
>> >>> we
>> >>> start the voting process.
>> >>>
>> >>> Please let us know if you see any issues.
>> >>>
>> >>> Thanks,
>> >>> Davor
>> >>>
>> >>> [1]
>> >>>
>> https://github.com/apache/incubator-beam/tree/release-0.1.0-incubating
>> >>> [2]
>> >>>
>> https://repository.apache.org/content/repositories/orgapachebeam-1000/
>> >
>> >
>> > --
>> > Jean-Baptiste Onofré
>> > jbono...@apache.org
>> > http://blog.nanthrax.net
>> > Talend - http://www.talend.com
>>
>


Re: 0.1.0-incubating release

2016-06-08 Thread Amit Sela
Regarding Dan's questions:
1. I'm not sure - it is built with spark-*_2.10, but I honestly don't know
if this matters for the runner itself; it could be nice to have in order to
be more informative. In addition, this will change to Scala 2.11 with Spark
2.0, AFAIK.
2. This is to allow running the out-of-the-box examples, I guess. The Flink
runner just tells you how to do it on your own here:
https://github.com/apache/incubator-beam/tree/master/runners/flink
Would you say this is a better approach?

In any case, packaging is necessary to run on a cluster, and the shading
rules are there for Guava - Beam/Hadoop.

On Wed, Jun 8, 2016 at 12:14 PM Maximilian Michels  wrote:

> I like the compromise on the Maven naming scheme. Thanks for
> incorporating all the feedback!
>
> On Wed, Jun 8, 2016 at 6:49 AM, Jean-Baptiste Onofré 
> wrote:
> > Hi Taylor,
> >
> > Just to be clear, in most other projects, we stage the distributions on
> > repository. We upload the distro and signatures to dist.apache.org only
> when
> > the vote passed.
> >
> > Basically, the release process I talked with Davor (and that I will
> > document) is:
> > - Tag and stage using mvn release:prepare release:perform
> > - Close repo
> > - Start vote
> > - If passed, forward vote to incubator
> > - If passed, close repo
> > - Upload distro to dist
> > - Announce the release (mailing lists, website)
> >
> > It's based on what I do in Karaf, ServiceMix, etc.
> >
> > Regards
> > JB
> >
> >
> > On 06/08/2016 02:39 AM, P. Taylor Goetz wrote:
> >>
> >> Out of curiosity, is there a reason for distributing the release on
> >> repository.a.o vs. dist.a.o?
> >>
> >> In my experience repository.a.o has traditionally been used for maven
> >> artifacts, and dist.a.o has been for release artifacts (source archives
> and
> >> convenience binaries).
> >>
> >> I'd be happy to help with documenting the process.
> >>
> >> I ask because this might come up during an IPMC release vote.
> >>
> >> -Taylor
> >>
> >>> On Jun 1, 2016, at 9:46 PM, Davor Bonaci 
> >>> wrote:
> >>>
> >>> Hi everyone!
> >>> We've started the release process for our first release,
> >>> 0.1.0-incubating.
> >>>
> >>> To recap previous discussions, we don't have particular functional
> goals
> >>> for this release. Instead, we'd like to make available what's currently
> >>> in
> >>> the repository, as well as work through the release process.
> >>>
> >>> With this in mind, we've:
> >>> * branched off the release branch [1] at master's commit 8485272,
> >>> * updated master to prepare for the second release, 0.2.0-incubating,
> >>> * built the first release candidate, RC1, and deployed it to a staging
> >>> repository [2].
> >>>
> >>> We are not ready to start a vote just yet -- we've already identified a
> >>> few
> >>> issues worth fixing. That said, I'd like to invite everybody to take a
> >>> peek
> >>> and comment. I'm hoping we can address as many issues as possible
> before
> >>> we
> >>> start the voting process.
> >>>
> >>> Please let us know if you see any issues.
> >>>
> >>> Thanks,
> >>> Davor
> >>>
> >>> [1]
> >>> https://github.com/apache/incubator-beam/tree/release-0.1.0-incubating
> >>> [2]
> >>> https://repository.apache.org/content/repositories/orgapachebeam-1000/
> >
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>


Re: DoFn Reuse

2016-06-08 Thread Ben Chambers
On Wed, Jun 8, 2016 at 10:29 AM Raghu Angadi 
wrote:

> On Wed, Jun 8, 2016 at 10:13 AM, Ben Chambers  >
> wrote:
>
> > - If failure occurs after finishBundle() but before the consumption is
> > committed, then the bundle may be reprocessed, which leads to duplicated
> > calls to processElement() and finishBundle().
> >
>
>
>
> > - If failure occurs after consumption is committed but before
> > finishBundle(), then those elements which may have buffered state in the
> > DoFn but not had their side-effects fully processed (since the
> > finishBundle() was responsible for that) are lost.
> >

> I am trying to understand this better. Does this mean during
> recovery/replay after a failure, the particular instance of DoFn that
> existed before the worker failure would not be discarded, but might still
> receive elements?  If a DoFn is caching some internal state, it should
> always assume the worker it's on might abruptly fail anytime and the state
> would be lost, right?
>

To clarify -- this case is actually not allowed by the Beam model. The
guarantee is that either a bundle is successfully completed (startBundle,
processElement*, finishBundle, commit) or not. If it isn't, then the bundle
is reprocessed. So, if a `DoFn` instance builds up any state while
processing a bundle and a failure happens at any point prior to the commit,
it will be retried. Even though the actual state in the first `DoFn` was
lost, the second attempt will build up the same state.
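That commit-or-retry contract can be sketched in a few lines of self-contained Java. All names here are illustrative stand-ins, not the real Beam runner API: a failed attempt discards its partial output, and the retry, using a fresh fn instance, rebuilds the same state.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a DoFn with per-instance state (not the Beam API).
class UpperCaseFn {
    private final List<String> output = new ArrayList<>();
    void processElement(String e) { output.add(e.toUpperCase()); }
    List<String> finishBundle() { return output; }
}

public class RetryDemo {
    static final List<String> committed = new ArrayList<>();

    // Processes one bundle; optionally simulates a crash partway through.
    static void runBundle(List<String> bundle, boolean crash) {
        UpperCaseFn fn = new UpperCaseFn();      // fresh instance per attempt
        for (int i = 0; i < bundle.size(); i++) {
            if (crash && i == 1) return;          // partial output is discarded
            fn.processElement(bundle.get(i));
        }
        committed.addAll(fn.finishBundle());      // commit only on full success
    }

    public static void main(String[] args) {
        List<String> bundle = List.of("a", "b", "c");
        runBundle(bundle, true);   // first attempt fails; nothing is committed
        runBundle(bundle, false);  // retry rebuilds the same state
        System.out.println(committed); // prints [A, B, C]
    }
}
```

The point of the sketch is that the output contains exactly one processing of the bundle, even though it was attempted twice.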


Re: DoFn Reuse

2016-06-08 Thread Raghu Angadi
On Wed, Jun 8, 2016 at 10:15 AM, Dan Halperin 
wrote:

> > I thought finishBundle()
> > exists simply as best-effort indication from the runner to user some
> chunk
> > of records have been processed.. not part of processing guarantees. Also
> > the term "bundle" itself is fairly loosely defined (may be
> intentionally).
> >
>
> No, finish bundle MUST be called by a runner before it can commit any work.
> This
> is akin to flushing a stream before closing it -- the DoFn may have some
> elements
> cached or pending and if you don't call finish bundle you will not have
> fully
> processed or produced all the elements.


I see. finishBundle() includes context too (the DoFn could output more
elements, e.g.). Yeah, it should be called before the runner can
commit/checkpoint.


Re: DoFn Reuse

2016-06-08 Thread Thomas Groh
In the case of failure, a DoFn instance will not be reused; instead, either
the inputs will be retried or the pipeline will fail, allowing a newly
deserialized instance of the DoFn to reprocess the inputs (which should
produce the same result, meaning there is no data loss).

On Wed, Jun 8, 2016 at 10:29 AM, Raghu Angadi 
wrote:

> On Wed, Jun 8, 2016 at 10:13 AM, Ben Chambers  >
> wrote:
>
> > - If failure occurs after finishBundle() but before the consumption is
> > committed, then the bundle may be reprocessed, which leads to duplicated
> > calls to processElement() and finishBundle().
> >
>
>
>
> > - If failure occurs after consumption is committed but before
> > finishBundle(), then those elements which may have buffered state in the
> > DoFn but not had their side-effects fully processed (since the
> > finishBundle() was responsible for that) are lost.
> >
>
> I am trying to understand this better. Does this mean during
> recovery/replay after a failure, the particular instance of DoFn that
> existed before the worker failure would not be discarded, but might still
> receive elements?  If a DoFn is caching some internal state, it should
> always assume the worker it's on might abruptly fail anytime and the state
> would be lost, right?
>


Re: DoFn Reuse

2016-06-08 Thread Thomas Groh
A Bundle is an arbitrary collection of elements. A PCollection is divided
into bundles at the discretion of the runner. However, the bundles must
partition the input PCollection; each element is in exactly one bundle, and
each bundle is successfully committed exactly once in a successful pipeline.

Ben's distinction is useful - notably, in the second sequence, as the
bundle has been committed, the elements will not (and can not) be
reprocessed, and outputs can be entirely lost.


For ParDo, the existing sequence was startBundle processElement* finishBundle,
and the new sequence is (startBundle processElement* finishBundle)*

The documentation for the earlier sequence is at
https://github.com/apache/incubator-beam/blob/0393a7917318baaa1e580259a74bff2c1dcbe6b8/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java#L88
finishBundle is noted as being called whenever an input bundle is completed.

There is also documentation that permits the system to run multiple copies
of a DoFn, starting at
https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java#L400;
in either case, completion includes the execution of the finishBundle()
method.
"
* Sometimes two or more {@link DoFn} instances will be running on the
* same bundle simultaneously, with the system taking the results of
* the first instance to complete successfully
"

On Wed, Jun 8, 2016 at 10:13 AM, Ben Chambers 
wrote:

> I think there is a difference:
>
> - If failure occurs after finishBundle() but before the consumption is
> committed, then the bundle may be reprocessed, which leads to duplicated
> calls to processElement() and finishBundle().
> - If failure occurs after consumption is committed but before
> finishBundle(), then those elements which may have buffered state in the
> DoFn but not had their side-effects fully processed (since the
> finishBundle() was responsible for that) are lost.
>
>
>
> On Wed, Jun 8, 2016 at 10:09 AM Raghu Angadi 
> wrote:
>
> > On Wed, Jun 8, 2016 at 10:05 AM, Raghu Angadi 
> wrote:
> > >
> > > I thought finishBundle() exists simply as best-effort indication from
> the
> > > runner to user some chunk of records have been processed..
> >
> > also to help with DoFn's own clean up if there is any.
> >
>


Re: DoFn Reuse

2016-06-08 Thread Raghu Angadi
On Wed, Jun 8, 2016 at 10:13 AM, Ben Chambers 
wrote:

> - If failure occurs after finishBundle() but before the consumption is
> committed, then the bundle may be reprocessed, which leads to duplicated
> calls to processElement() and finishBundle().
>



> - If failure occurs after consumption is committed but before
> finishBundle(), then those elements which may have buffered state in the
> DoFn but not had their side-effects fully processed (since the
> finishBundle() was responsible for that) are lost.
>

I am trying to understand this better. Does this mean during
recovery/replay after a failure, the particular instance of DoFn that
existed before the worker failure would not be discarded, but might still
receive elements?  If a DoFn is caching some internal state, it should
always assume the worker it's on might abruptly fail anytime and the state
would be lost, right?


Re: DoFn Reuse

2016-06-08 Thread Robert Bradshaw
The unit of commit is the bundle.

Consider a DoFn that does batching (e.g. to interact with some
external service less frequently). Items may be buffered during
process() but these buffered items must be processed and the results
emitted in finishBundle(). If inputs are committed as being consumed
before finishBundle is called (and its outputs committed) this
buffered state would be lost but the inputs not replayed.

Put another way, the elements are partitioned into bundles, and the
exactly once guarantee states that the output is the union of exactly
one processing of each bundle. (Bundles may be retried and/or
partially processed; such outputs are discarded.)
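The batching pattern described above can be sketched with a simplified stand-in class (illustrative names, not the real Beam DoFn API): elements buffered in processElement() only reach the sink when a batch fills up or when finishBundle() flushes the remainder, which is why the runner must call finishBundle() before committing the bundle.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a DoFn that batches writes to an external service.
class BatchingFn {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> sink;   // stands in for the external service
    private final int batchSize;

    BatchingFn(List<String> sink, int batchSize) {
        this.sink = sink;
        this.batchSize = batchSize;
    }

    void startBundle() { buffer.clear(); }

    void processElement(String element) {
        buffer.add(element);
        if (buffer.size() >= batchSize) flush();
    }

    // Must run before the bundle is committed: any elements still buffered
    // are written out here. Skipping this step loses them.
    void finishBundle() { flush(); }

    private void flush() {
        sink.addAll(buffer);
        buffer.clear();
    }
}

public class BundleDemo {
    public static void main(String[] args) {
        List<String> sink = new ArrayList<>();
        BatchingFn fn = new BatchingFn(sink, 10);
        fn.startBundle();
        for (int i = 0; i < 3; i++) fn.processElement("e" + i);
        // Fewer than batchSize elements, so everything is still buffered:
        System.out.println("before finishBundle: " + sink.size()); // prints 0
        fn.finishBundle();
        System.out.println("after finishBundle: " + sink.size());  // prints 3
    }
}
```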

On Wed, Jun 8, 2016 at 10:05 AM, Raghu Angadi
 wrote:
> Such data loss can still occur if the worker dies after finishBundle()
> returns, but before the consumption is committed. I thought finishBundle()
> exists simply as best-effort indication from the runner to user some chunk
> of records have been processed.. not part of processing guarantees. Also
> the term "bundle" itself is fairly loosely defined (may be intentionally).
>
> On Wed, Jun 8, 2016 at 8:47 AM, Thomas Groh 
> wrote:
>
>> finishBundle() **must** be called before any input consumption is committed
>> (i.e. marking inputs as completed, which incldues committing any elements
>> they produced). Doing otherwise can cause data loss, as the state of the
>> DoFn is lost if a worker dies, but the input elements will never be
>> reprocessed to recreate the DoFn state. If this occurs, any buffered
>> outputs are lost.
>>
>> On Wed, Jun 8, 2016 at 8:21 AM, Bobby Evans 
>> wrote:
>>
>> > The local java runner does arbitrary batching of 10 elements.
>> >
>> > I'm not sure if flink exposes this or not, but couldn't you use the
>> > checkpoint triggers to also start/finish a bundle?
>> >  - Bobby
>> >
>> > On Wednesday, June 8, 2016 10:17 AM, Aljoscha Krettek <
>> > aljos...@apache.org> wrote:
>> >
>> >
>> >  Ahh, what we could do is artificially induce bundles using either count
>> or
>> > processing time or both. Just so that finishBundle() is called once in a
>> > while.
>> >
>> > On Wed, 8 Jun 2016 at 17:12 Aljoscha Krettek 
>> wrote:
>> >
>> > > Pretty sure, yes. The Iterable in a MapPartitionFunction should give
>> you
>> > > all the values in a given partition.
>> > >
>> > > I checked again for streaming execution. We're doing the opposite,
>> right
>> > > now: every element is a bundle in itself, startBundle()/finishBundle()
>> > are
>> > > called for every element which seems a bit wasteful. The only other
>> > option
>> > > is to see all elements as one bundle, because Flink does not
>> bundle/micro
>> > > batch elements in streaming execution.
>> > >
>> > > On Wed, 8 Jun 2016 at 16:38 Bobby Evans 
>> > > wrote:
>> > >
>> > >> Are you sure about that for Flink?  I thought the iterable finished
>> when
>> > >> you processed a maximum number of elements or the input queue was
>> empty
>> > so
>> > >> that it could return control back to Akka for better sharing of the
>> > >> thread pool.
>> > >>
>> > >>
>> > >>
>> >
>> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
>> > >> Also in the javadocs for DoFn.Context it explicitly states that you
>> can
>> > >> emit from the finishBundle method.
>> > >>
>> > >>
>> > >>
>> >
>> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
>> > >> I thought I had seen some example of this being used for batching
>> output
>> > >> to something downstream, like HDFS or Kafka, but I'm not sure on that.
>> > If
>> > >> you can emit from finishBundle and a new instance of the DoFn will be
>> > >> created around each bundle then I can see some people trying to do
>> > >> aggregations inside a DoFn and then emitting them at the end of the
>> > bundle
>> > >> knowing that if a batch fails or is rolled back the system will handle
>> > it.
>> > >> If that is not allowed we should really update the javadocs around it
>> to
>> > >> explain the pitfalls of doing this.
>> > >>  - Bobby
>> > >>
>> > >>On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
>> > >> aljos...@apache.org> wrote:
>> > >>
>> > >>
>> > >>  Hi,
>> > >> a quick related question: In the Flink runner we basically see
>> > everything
>> > >> as one big bundle, i.e. we call startBundle() once at the beginning
>> and
>> > >> then keep processing indefinitely, never calling finishBundle(). Is
>> this
>> > >> also correct behavior?
>> > >>
>> > >> Best,
>> > >> Aljoscha
>> > >>
>> > >> On Tue, 7 Jun 2016 at 20:44 Thomas Groh 
>> > wrote:
>> > >>
>> > >> > Hey everyone;
>> > >> >
>> > >> > I'm starting to 

Re: DoFn Reuse

2016-06-08 Thread Dan Halperin
On Wed, Jun 8, 2016 at 10:05 AM, Raghu Angadi 
wrote:

> Such data loss can still occur if the worker dies after finishBundle()
> returns, but before the consumption is committed.


If the runner is correctly implemented, there will not be data loss in this
case -- the runner
should retry the bundle (or all the elements that were in this bundle as
part of one or more new bundles)
as it has not committed the work.


> I thought finishBundle()
> exists simply as best-effort indication from the runner to user some chunk
> of records have been processed.. not part of processing guarantees. Also
> the term "bundle" itself is fairly loosely defined (may be intentionally).
>

No, finish bundle MUST be called by a runner before it can commit any work.
This
is akin to flushing a stream before closing it -- the DoFn may have some
elements
cached or pending and if you don't call finish bundle you will not have
fully
processed or produced all the elements.

Dan



>
> On Wed, Jun 8, 2016 at 8:47 AM, Thomas Groh 
> wrote:
>
> > finishBundle() **must** be called before any input consumption is
> committed
> > (i.e. marking inputs as completed, which includes committing any elements
> > they produced). Doing otherwise can cause data loss, as the state of the
> > DoFn is lost if a worker dies, but the input elements will never be
> > reprocessed to recreate the DoFn state. If this occurs, any buffered
> > outputs are lost.
> >
> > On Wed, Jun 8, 2016 at 8:21 AM, Bobby Evans  >
> > wrote:
> >
> > > The local java runner does arbitrary batching of 10 elements.
> > >
> > > I'm not sure if flink exposes this or not, but couldn't you use the
> > > checkpoint triggers to also start/finish a bundle?
> > >  - Bobby
> > >
> > > On Wednesday, June 8, 2016 10:17 AM, Aljoscha Krettek <
> > > aljos...@apache.org> wrote:
> > >
> > >
> > >  Ahh, what we could do is artificially induce bundles using either
> count
> > or
> > > processing time or both. Just so that finishBundle() is called once in
> a
> > > while.
> > >
> > > On Wed, 8 Jun 2016 at 17:12 Aljoscha Krettek 
> > wrote:
> > >
> > > > Pretty sure, yes. The Iterable in a MapPartitionFunction should give
> > you
> > > > all the values in a given partition.
> > > >
> > > > I checked again for streaming execution. We're doing the opposite,
> > right
> > > > now: every element is a bundle in itself,
> startBundle()/finishBundle()
> > > are
> > > > called for every element which seems a bit wasteful. The only other
> > > option
> > > > is to see all elements as one bundle, because Flink does not
> > bundle/micro
> > > > batch elements in streaming execution.
> > > >
> > > > On Wed, 8 Jun 2016 at 16:38 Bobby Evans  >
> > > > wrote:
> > > >
> > > >> Are you sure about that for Flink?  I thought the iterable finished
> > when
> > > >> you processed a maximum number of elements or the input queue was
> > empty
> > > so
> > > >> that it could return control back to akka for better sharing of
> the
> > > >> thread pool.
> > > >>
> > > >>
> > > >>
> > >
> >
> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
> > > >> Also in the javadocs for DoFn.Context it explicitly states that you
> > can
> > > >> emit from the finishBundle method.
> > > >>
> > > >>
> > > >>
> > >
> >
> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
> > > >> I thought I had seen some example of this being used for batching
> > output
> > > >> to something downstream, like HDFS or Kafka, but I'm not sure on
> that.
> > > If
> > > >> you can emit from finishBundle and a new instance of the DoFn will
> be
> > > >> created around each bundle then I can see some people trying to do
> > > >> aggregations inside a DoFn and then emitting them at the end of the
> > > bundle
> > > >> knowing that if a batch fails or is rolled back the system will
> handle
> > > it.
> > > >> If that is not allowed we should really update the javadocs around
> it
> > to
> > > >> explain the pitfalls of doing this.
> > > >>  - Bobby
> > > >>
> > > >>On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
> > > >> aljos...@apache.org> wrote:
> > > >>
> > > >>
> > > >>  Hi,
> > > >> a quick related question: In the Flink runner we basically see
> > > everything
> > > >> as one big bundle, i.e. we call startBundle() once at the beginning
> > and
> > > >> then keep processing indefinitely, never calling finishBundle(). Is
> > this
> > > >> also correct behavior?
> > > >>
> > > >> Best,
> > > >> Aljoscha
> > > >>
> > > >> On Tue, 7 Jun 2016 at 20:44 Thomas Groh 
> > > wrote:
> > > >>
> > > >> > Hey everyone;
> > > >> >
> > > >> > I'm starting to work on 

Re: DoFn Reuse

2016-06-08 Thread Raghu Angadi
Such data loss can still occur if the worker dies after finishBundle()
returns but before the consumption is committed. I thought finishBundle()
exists simply as a best-effort indication from the runner to the user that
some chunk of records has been processed -- not as part of the processing
guarantees. Also, the term "bundle" itself is fairly loosely defined (maybe
intentionally).

On Wed, Jun 8, 2016 at 8:47 AM, Thomas Groh 
wrote:

> finishBundle() **must** be called before any input consumption is committed
> (i.e. marking inputs as completed, which includes committing any elements
> they produced). Doing otherwise can cause data loss, as the state of the
> DoFn is lost if a worker dies, but the input elements will never be
> reprocessed to recreate the DoFn state. If this occurs, any buffered
> outputs are lost.
>
> On Wed, Jun 8, 2016 at 8:21 AM, Bobby Evans 
> wrote:
>
> > The local java runner does arbitrary batching of 10 elements.
> >
> > I'm not sure if flink exposes this or not, but couldn't you use the
> > checkpoint triggers to also start/finish a bundle?
> >  - Bobby
> >
> > On Wednesday, June 8, 2016 10:17 AM, Aljoscha Krettek <
> > aljos...@apache.org> wrote:
> >
> >
> >  Ahh, what we could do is artificially induce bundles using either count
> or
> > processing time or both. Just so that finishBundle() is called once in a
> > while.
> >
> > On Wed, 8 Jun 2016 at 17:12 Aljoscha Krettek 
> wrote:
> >
> > > Pretty sure, yes. The Iterable in a MapPartitionFunction should give
> you
> > > all the values in a given partition.
> > >
> > > I checked again for streaming execution. We're doing the opposite,
> right
> > > now: every element is a bundle in itself, startBundle()/finishBundle()
> > are
> > > called for every element which seems a bit wasteful. The only other
> > option
> > > is to see all elements as one bundle, because Flink does not
> bundle/micro
> > > batch elements in streaming execution.
> > >
> > > On Wed, 8 Jun 2016 at 16:38 Bobby Evans 
> > > wrote:
> > >
> > >> Are you sure about that for Flink?  I thought the iterable finished
> when
> > >> you processed a maximum number of elements or the input queue was
> empty
> > so
> > >> that it could return control back to akka for better sharing of the
> > >> thread pool.
> > >>
> > >>
> > >>
> >
> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
> > >> Also in the javadocs for DoFn.Context it explicitly states that you
> can
> > >> emit from the finishBundle method.
> > >>
> > >>
> > >>
> >
> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
> > >> I thought I had seen some example of this being used for batching
> output
> > >> to something downstream, like HDFS or Kafka, but I'm not sure on that.
> > If
> > >> you can emit from finishBundle and a new instance of the DoFn will be
> > >> created around each bundle then I can see some people trying to do
> > >> aggregations inside a DoFn and then emitting them at the end of the
> > bundle
> > >> knowing that if a batch fails or is rolled back the system will handle
> > it.
> > >> If that is not allowed we should really update the javadocs around it
> to
> > >> explain the pitfalls of doing this.
> > >>  - Bobby
> > >>
> > >>On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
> > >> aljos...@apache.org> wrote:
> > >>
> > >>
> > >>  Hi,
> > >> a quick related question: In the Flink runner we basically see
> > everything
> > >> as one big bundle, i.e. we call startBundle() once at the beginning
> and
> > >> then keep processing indefinitely, never calling finishBundle(). Is
> this
> > >> also correct behavior?
> > >>
> > >> Best,
> > >> Aljoscha
> > >>
> > >> On Tue, 7 Jun 2016 at 20:44 Thomas Groh 
> > wrote:
> > >>
> > >> > Hey everyone;
> > >> >
> > >> > I'm starting to work on BEAM-38 (
> > >> > https://issues.apache.org/jira/browse/BEAM-38), which enables an
> > >> > optimization for runners with many small bundles. BEAM-38 allows
> > >> runners to
> > >> > reuse DoFn instances so long as that DoFn has not terminated
> > abnormally.
> > >> > This replaces the previous requirement that a DoFn be used for only
> a
> > >> > single bundle if either of startBundle or finishBundle have been
> > >> > overwritten.
> > >> >
> > >> > DoFn deserialization-per-bundle can be a significant performance
> > >> > bottleneck when there are many small bundles, as is common in
> > streaming
> > >> > executions. It has also surfaced as the cause of much of the current
> > >> > slowness in the new InProcessRunner.
> > >> >
> > >> > Existing Runners do not require any changes; they may choose to take
> > >> > advantage of the new optimization 

Re: DoFn Reuse

2016-06-08 Thread Thomas Groh
finishBundle() **must** be called before any input consumption is committed
(i.e. marking inputs as completed, which includes committing any elements
they produced). Doing otherwise can cause data loss, as the state of the
DoFn is lost if a worker dies, but the input elements will never be
reprocessed to recreate the DoFn state. If this occurs, any buffered
outputs are lost.
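
The ordering guarantee can be illustrated with a toy model of a runner
(plain Java, not actual Beam internals; the class and method names are
hypothetical): inputs are committed only after finishBundle() has emitted
any buffered outputs, because a crash right after the commit means the
bundle is never replayed.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: a runner must call finishBundle() before marking the
// bundle's inputs as committed. If it commits first and the worker
// then dies, buffered outputs are gone and the inputs are not replayed.
public class CommitOrdering {
    static class BufferingFn {
        final List<String> buffer = new ArrayList<>();
        void processElement(String e) { buffer.add(e.toUpperCase()); }
        List<String> finishBundle() {            // emits buffered outputs
            List<String> out = new ArrayList<>(buffer);
            buffer.clear();
            return out;
        }
    }

    // Returns the outputs that survive a crash occurring right after the
    // input commit, depending on whether finishBundle() ran first.
    static List<String> runBundle(List<String> inputs, boolean finishBeforeCommit) {
        BufferingFn fn = new BufferingFn();
        List<String> committedOutputs = new ArrayList<>();
        for (String e : inputs) fn.processElement(e);
        if (finishBeforeCommit) committedOutputs.addAll(fn.finishBundle());
        // <-- inputs committed here; a crash after this point is not replayed
        return committedOutputs;
    }

    public static void main(String[] args) {
        List<String> inputs = List.of("a", "b");
        assert runBundle(inputs, true).equals(List.of("A", "B")); // safe order
        assert runBundle(inputs, false).isEmpty();                // data loss
    }
}
```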

On Wed, Jun 8, 2016 at 8:21 AM, Bobby Evans 
wrote:

> The local java runner does arbitrary batching of 10 elements.
>
> I'm not sure if flink exposes this or not, but couldn't you use the
> checkpoint triggers to also start/finish a bundle?
>  - Bobby
>
> On Wednesday, June 8, 2016 10:17 AM, Aljoscha Krettek <
> aljos...@apache.org> wrote:
>
>
>  Ahh, what we could do is artificially induce bundles using either count or
> processing time or both. Just so that finishBundle() is called once in a
> while.
>
> On Wed, 8 Jun 2016 at 17:12 Aljoscha Krettek  wrote:
>
> > Pretty sure, yes. The Iterable in a MapPartitionFunction should give you
> > all the values in a given partition.
> >
> > I checked again for streaming execution. We're doing the opposite, right
> > now: every element is a bundle in itself, startBundle()/finishBundle()
> are
> > called for every element which seems a bit wasteful. The only other
> option
> > is to see all elements as one bundle, because Flink does not bundle/micro
> > batch elements in streaming execution.
> >
> > On Wed, 8 Jun 2016 at 16:38 Bobby Evans 
> > wrote:
> >
> >> Are you sure about that for Flink?  I thought the iterable finished when
> >> you processed a maximum number of elements or the input queue was empty
> so
> >> that it could return control back to akka for better sharing of the
> >> thread pool.
> >>
> >>
> >>
> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
> >> Also in the javadocs for DoFn.Context it explicitly states that you can
> >> emit from the finishBundle method.
> >>
> >>
> >>
> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
> >> I thought I had seen some example of this being used for batching output
> >> to something downstream, like HDFS or Kafka, but I'm not sure on that.
> If
> >> you can emit from finishBundle and a new instance of the DoFn will be
> >> created around each bundle then I can see some people trying to do
> >> aggregations inside a DoFn and then emitting them at the end of the
> bundle
> >> knowing that if a batch fails or is rolled back the system will handle
> it.
> >> If that is not allowed we should really update the javadocs around it to
> >> explain the pitfalls of doing this.
> >>  - Bobby
> >>
> >>On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
> >> aljos...@apache.org> wrote:
> >>
> >>
> >>  Hi,
> >> a quick related question: In the Flink runner we basically see
> everything
> >> as one big bundle, i.e. we call startBundle() once at the beginning and
> >> then keep processing indefinitely, never calling finishBundle(). Is this
> >> also correct behavior?
> >>
> >> Best,
> >> Aljoscha
> >>
> >> On Tue, 7 Jun 2016 at 20:44 Thomas Groh 
> wrote:
> >>
> >> > Hey everyone;
> >> >
> >> > I'm starting to work on BEAM-38 (
> >> > https://issues.apache.org/jira/browse/BEAM-38), which enables an
> >> > optimization for runners with many small bundles. BEAM-38 allows
> >> runners to
> >> > reuse DoFn instances so long as that DoFn has not terminated
> abnormally.
> >> > This replaces the previous requirement that a DoFn be used for only a
> >> > single bundle if either of startBundle or finishBundle have been
> >> > overwritten.
> >> >
> >> > DoFn deserialization-per-bundle can be a significant performance
> >> > bottleneck when there are many small bundles, as is common in
> streaming
> >> > executions. It has also surfaced as the cause of much of the current
> >> > slowness in the new InProcessRunner.
> >> >
> >> > Existing Runners do not require any changes; they may choose to take
> >> > advantage of the new optimization opportunity. However, user DoFns
> >> may
> >> > need to be revised to properly set up and tear down state in
> startBundle
> >> > and finishBundle, respectively, if they depended on only being used
> for a
> >> > single bundle.
> >> >
> >> > The first two updates are already in pull requests:
> >> >
> >> > PR #419 (https://github.com/apache/incubator-beam/pull/419) updates
> the
> >> > Javadoc to the new spec
> >> > PR #418 (https://github.com/apache/incubator-beam/pull/418) updates
> the
> >> > DirectRunner to reuse DoFns according to the new policy.
> >> >
> >> > Yours,
> >> >
> >> > Thomas
> >> >
> >>
> >>
> >>
> >
> >
>
>
>
>


Re: DoFn Reuse

2016-06-08 Thread Aljoscha Krettek
Ahh, what we could do is artificially induce bundles using either count or
processing time or both. Just so that finishBundle() is called once in a
while.
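
The count-or-time idea could look roughly like this inside a runner (an
illustrative sketch only -- the BundleTrigger class, its thresholds, and
the caller-supplied clock are not Beam API):

```java
// Closes the current bundle whenever either an element-count limit or a
// processing-time limit is hit, so finishBundle() fires periodically
// even on an unbounded stream that is never otherwise batched.
public class BundleTrigger {
    private final int maxElements;
    private final long maxMillis;
    private int count = 0;
    private long bundleStart;

    public BundleTrigger(int maxElements, long maxMillis) {
        this.maxElements = maxElements;
        this.maxMillis = maxMillis;
        this.bundleStart = System.currentTimeMillis();
    }

    // Called once per element; true means the runner should call
    // finishBundle() now and start a fresh bundle.
    public boolean shouldFinishBundle(long nowMillis) {
        count++;
        if (count >= maxElements || nowMillis - bundleStart >= maxMillis) {
            count = 0;
            bundleStart = nowMillis;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        BundleTrigger t = new BundleTrigger(3, 1_000);
        long now = System.currentTimeMillis();
        assert !t.shouldFinishBundle(now);
        assert !t.shouldFinishBundle(now);
        assert t.shouldFinishBundle(now);         // count limit reached
        assert t.shouldFinishBundle(now + 2_000); // time limit reached
    }
}
```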

On Wed, 8 Jun 2016 at 17:12 Aljoscha Krettek  wrote:

> Pretty sure, yes. The Iterable in a MapPartitionFunction should give you
> all the values in a given partition.
>
> I checked again for streaming execution. We're doing the opposite, right
> now: every element is a bundle in itself, startBundle()/finishBundle() are
> called for every element which seems a bit wasteful. The only other option
> is to see all elements as one bundle, because Flink does not bundle/micro
> batch elements in streaming execution.
>
> On Wed, 8 Jun 2016 at 16:38 Bobby Evans 
> wrote:
>
>> Are you sure about that for Flink?  I thought the iterable finished when
>> you processed a maximum number of elements or the input queue was empty so
>> that it could return control back to akka for better sharing of the
>> thread pool.
>>
>>
>> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
>> Also in the javadocs for DoFn.Context it explicitly states that you can
>> emit from the finishBundle method.
>>
>>
>> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
>> I thought I had seen some example of this being used for batching output
>> to something downstream, like HDFS or Kafka, but I'm not sure on that.  If
>> you can emit from finishBundle and a new instance of the DoFn will be
>> created around each bundle then I can see some people trying to do
>> aggregations inside a DoFn and then emitting them at the end of the bundle
>> knowing that if a batch fails or is rolled back the system will handle it.
>> If that is not allowed we should really update the javadocs around it to
>> explain the pitfalls of doing this.
>>  - Bobby
>>
>> On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
>> aljos...@apache.org> wrote:
>>
>>
>>  Hi,
>> a quick related question: In the Flink runner we basically see everything
>> as one big bundle, i.e. we call startBundle() once at the beginning and
>> then keep processing indefinitely, never calling finishBundle(). Is this
>> also correct behavior?
>>
>> Best,
>> Aljoscha
>>
>> On Tue, 7 Jun 2016 at 20:44 Thomas Groh  wrote:
>>
>> > Hey everyone;
>> >
>> > I'm starting to work on BEAM-38 (
>> > https://issues.apache.org/jira/browse/BEAM-38), which enables an
>> > optimization for runners with many small bundles. BEAM-38 allows
>> runners to
>> > reuse DoFn instances so long as that DoFn has not terminated abnormally.
>> > This replaces the previous requirement that a DoFn be used for only a
>> > single bundle if either of startBundle or finishBundle have been
>> > overwritten.
>> >
>> > DoFn deserialization-per-bundle can be a significant performance
>> > bottleneck when there are many small bundles, as is common in streaming
>> > executions. It has also surfaced as the cause of much of the current
>> > slowness in the new InProcessRunner.
>> >
>> > Existing Runners do not require any changes; they may choose to take
>> > advantage of the new optimization opportunity. However, user DoFns
>> may
>> > need to be revised to properly set up and tear down state in startBundle
>> > and finishBundle, respectively, if they depended on only being used for a
>> > single bundle.
>> >
>> > The first two updates are already in pull requests:
>> >
>> > PR #419 (https://github.com/apache/incubator-beam/pull/419) updates the
>> > Javadoc to the new spec
>> > PR #418 (https://github.com/apache/incubator-beam/pull/418) updates the
>> > DirectRunner to reuse DoFns according to the new policy.
>> >
>> > Yours,
>> >
>> > Thomas
>> >
>>
>>
>>
>
>


Re: DoFn Reuse

2016-06-08 Thread Aljoscha Krettek
Pretty sure, yes. The Iterable in a MapPartitionFunction should give you
all the values in a given partition.

I checked again for streaming execution. We're doing the opposite, right
now: every element is a bundle in itself, startBundle()/finishBundle() are
called for every element which seems a bit wasteful. The only other option
is to see all elements as one bundle, because Flink does not bundle/micro
batch elements in streaming execution.
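
The trade-off between the two streaming strategies can be made concrete by
counting lifecycle calls in a simplified model (not the Flink runner
itself; the run() helper and its bundle-size parameter are hypothetical):

```java
// Counts startBundle/finishBundle/processElement calls for a stream of
// numElements elements when bundles close after elementsPerBundle
// elements. elementsPerBundle == 1 models "every element is a bundle";
// a bundle size larger than the stream models the single open-ended
// bundle whose finishBundle() never fires.
public class BundleStrategies {
    static int[] run(int numElements, int elementsPerBundle) {
        int starts = 0, finishes = 0, processed = 0;
        int inBundle = 0;
        boolean open = false;
        for (int i = 0; i < numElements; i++) {
            if (!open) { starts++; open = true; inBundle = 0; }
            processed++;
            inBundle++;
            if (inBundle >= elementsPerBundle) { finishes++; open = false; }
        }
        return new int[] {starts, finishes, processed};
    }

    public static void main(String[] args) {
        // one bundle per element: maximal per-element lifecycle overhead
        assert java.util.Arrays.equals(run(5, 1), new int[] {5, 5, 5});
        // everything in one open-ended bundle: finishBundle never fires
        assert java.util.Arrays.equals(run(5, Integer.MAX_VALUE), new int[] {1, 0, 5});
    }
}
```

An induced bundle size between these extremes bounds both the per-element
overhead and how long outputs can sit unflushed.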

On Wed, 8 Jun 2016 at 16:38 Bobby Evans  wrote:

> Are you sure about that for Flink?  I thought the iterable finished when
> you processed a maximum number of elements or the input queue was empty so
> that it could return control back to akka for better sharing of the
> thread pool.
>
>
> https://github.com/apache/incubator-beam/blob/af8f5935ca1866012ceb102b9472c8b1ef102d73/runners/flink/runner/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkDoFnFunction.java#L99
> Also in the javadocs for DoFn.Context it explicitly states that you can
> emit from the finishBundle method.
>
>
> https://github.com/apache/incubator-beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L104-L110
> I thought I had seen some example of this being used for batching output
> to something downstream, like HDFS or Kafka, but I'm not sure on that.  If
> you can emit from finishBundle and a new instance of the DoFn will be
> created around each bundle then I can see some people trying to do
> aggregations inside a DoFn and then emitting them at the end of the bundle
> knowing that if a batch fails or is rolled back the system will handle it.
> If that is not allowed we should really update the javadocs around it to
> explain the pitfalls of doing this.
>  - Bobby
>
> On Wednesday, June 8, 2016 4:24 AM, Aljoscha Krettek <
> aljos...@apache.org> wrote:
>
>
>  Hi,
> a quick related question: In the Flink runner we basically see everything
> as one big bundle, i.e. we call startBundle() once at the beginning and
> then keep processing indefinitely, never calling finishBundle(). Is this
> also correct behavior?
>
> Best,
> Aljoscha
>
> On Tue, 7 Jun 2016 at 20:44 Thomas Groh  wrote:
>
> > Hey everyone;
> >
> > I'm starting to work on BEAM-38 (
> > https://issues.apache.org/jira/browse/BEAM-38), which enables an
> > optimization for runners with many small bundles. BEAM-38 allows runners
> to
> > reuse DoFn instances so long as that DoFn has not terminated abnormally.
> > This replaces the previous requirement that a DoFn be used for only a
> > single bundle if either of startBundle or finishBundle have been
> > overwritten.
> >
> > DoFn deserialization-per-bundle can be a significant performance
> > bottleneck when there are many small bundles, as is common in streaming
> > executions. It has also surfaced as the cause of much of the current
> > slowness in the new InProcessRunner.
> >
> > Existing Runners do not require any changes; they may choose to take
> > advantage of the new optimization opportunity. However, user DoFns may
> > need to be revised to properly set up and tear down state in startBundle
> > and finishBundle, respectively, if they depended on only being used for a
> > single bundle.
> >
> > The first two updates are already in pull requests:
> >
> > PR #419 (https://github.com/apache/incubator-beam/pull/419) updates the
> > Javadoc to the new spec
> > PR #418 (https://github.com/apache/incubator-beam/pull/418) updates the
> > DirectRunner to reuse DoFns according to the new policy.
> >
> > Yours,
> >
> > Thomas
> >
>
>
>