Excellent! On the redefinition of #2, people are used to it. With Hadoop
MapReduce, you had to define types in 1-3 different places.

While you're there, we also need a lambda that has access to the
Context object.
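To make that request concrete, one hypothetical shape (names here are illustrative, not Beam's API) would be a lambda-friendly functional interface whose apply() also receives the context:

```java
import java.io.Serializable;

public class ContextFnSketch {
  // Hypothetical stand-in for a per-element processing context; Beam's real
  // context type and its methods may differ.
  static class Context {
    String sideValue() {
      return "side";
    }
  }

  // A lambda-friendly interface whose apply() also receives the Context.
  @FunctionalInterface
  interface FnWithContext<A, B> extends Serializable {
    B apply(A input, Context c);
  }

  public static void main(String[] args) {
    FnWithContext<String, String> fn =
        (input, c) -> input + ":" + c.sideValue();
    System.out.println(fn.apply("x", new Context())); // prints x:side
  }
}
```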

On Fri, Feb 3, 2017 at 11:03 AM Kenneth Knowles <[email protected]>
wrote:

> Hi all,
>
> Right now when you want to use MapElements<A, B> (and friends) you have two
> options:
>
> 1. Use a SimpleFunction Java 7 style
>
>     MapElements.via(new SimpleFunction<A, B>() {
>       @Override
>       public B apply(A input) {
>         return ...expr...;
>       }
>     })
>
> and the type descriptors are automatically inferred
>
> 2. Use a lambda and withOutputType
>
>     MapElements.via((A input) -> ...expr...)
>         .withOutputType(new TypeDescriptor<B>(){})
>
>     MapElements.via((A input) -> ...expr...)
>         .withOutputType(TypeDescriptors.bs())
>
> and you might have a handy helper in TypeDescriptors (note the plural) or
> you might have to create your own, which is a weird pattern if you haven't
> seen it before. Both shown above.
>
> [PROPOSAL] Here is a neat trick for getting type information like in #1 but
> with a lambda like #2 and a bit less verbosity:
>
>     MapElements.via(new SimpleFunction<A, B>((A input) -> ...expr...) {})
>
> I think we can add this. I lean towards this just being a third option, but
> could be easily swayed to drop #2.
>
> This is https://github.com/apache/beam/pull/1855 where you can see some
> unit tests demonstrating it more, and take a look at what it means for
> error checking, etc. It is backwards-compatible but still a change to a
> core API so deserves a thread on list.
>
> Thoughts?
>
> Kenn
>
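For reference, the trailing `{}` trick in the proposal can be demonstrated outside Beam with plain reflection. A minimal sketch (the class here is a stand-in, not Beam's SimpleFunction): the anonymous subclass created by `{}` records its generic supertype, so the type arguments survive erasure even though the body is just a lambda.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Function;

public class TypeTrickDemo {
  // Stand-in for a SimpleFunction-like base class (not Beam's actual class).
  static abstract class Fn<A, B> {
    private final Function<A, B> delegate;

    Fn(Function<A, B> delegate) {
      this.delegate = delegate;
    }

    B apply(A input) {
      return delegate.apply(input);
    }

    // Recover B reflectively from the anonymous subclass's generic supertype.
    Type outputType() {
      ParameterizedType superType =
          (ParameterizedType) getClass().getGenericSuperclass();
      return superType.getActualTypeArguments()[1];
    }
  }

  public static void main(String[] args) {
    // The proposed pattern: new Fn<A, B>(lambda) {}
    // The trailing {} creates an anonymous subclass, so <String, Integer>
    // is visible via reflection despite erasure.
    Fn<String, Integer> fn = new Fn<String, Integer>(String::length) {};
    System.out.println(fn.apply("hello"));   // prints 5
    System.out.println(fn.outputType());     // prints class java.lang.Integer
  }
}
```

Without the trailing `{}` the object's class is `Fn` itself (or a lambda class), whose generic superclass carries no usable type arguments, which is why the empty-body subclass is essential.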
