Yes indeed, your understanding is correct. This is what I had in mind.
PS: I have no idea about the performance impact right now.
Etienne
On Tuesday, October 16, 2018 at 15:03 -0700, Rui Wang wrote:
> Sounds like a good idea.  
> If I understand correctly, while coding the user gets a list showing whether 
> a feature is supported on different runners, and can check that list for the 
> answer. Is my understanding correct? Will this approach become slow as the 
> number of runners grows? (It's just a question, as I am not familiar with 
> the performance of combining a long list, annotations, and the IDE.)    
> 
> 
> -Rui
> On Sat, Oct 13, 2018 at 11:56 PM Reuven Lax <re...@google.com> wrote:
> > Sounds like a good idea. I don't think it will work for all capabilities 
> > (e.g. some of them, such as "exactly once", apply to the whole API 
> > surface), but it would be useful for the ones that we can capture.
> > 
> > On Thu, Oct 4, 2018 at 2:43 AM Etienne Chauchot <echauc...@apache.org> 
> > wrote:
> > > Hi guys,
> > > As part of our user experience improvement to attract new Beam users, I 
> > > would like to suggest something:
> > > 
> > > Today we only have the capability matrix to inform users about feature 
> > > support among runners. But they might discover only when the pipeline 
> > > runs, via an exception, that a given feature is not supported by the 
> > > targeted runner.
> > > I would like to suggest translating the capability matrix into the API, 
> > > with annotations for example, so that, while coding, the user could know 
> > > that a given feature is not currently supported on the runner they 
> > > target. 
> > > 
> > > I know that the runner is only specified at pipeline runtime, and that 
> > > adding runner-specific code would leak runner implementation details and 
> > > go against portability. So these could just be informative annotations, 
> > > like @Experimental, with no annotation processor.
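> > > 
> > > To make the idea concrete, here is a minimal sketch of what such an 
> > > informative annotation could look like. The names (UnsupportedOn, 
> > > BeamRunner) are hypothetical, not existing Beam API, and the runner 
> > > list is purely illustrative:
> > > 
> > > import java.lang.annotation.*;
> > > 
> > > /** Hypothetical runner identifiers, for illustration only. */
> > > enum BeamRunner { DIRECT, DATAFLOW, FLINK, SPARK }
> > > 
> > > /**
> > >  * Purely informative: lists runners known not to support the annotated
> > >  * feature. SOURCE retention means no annotation processor is involved;
> > >  * the IDE simply surfaces it while coding.
> > >  */
> > > @Documented
> > > @Retention(RetentionPolicy.SOURCE)
> > > @Target({ElementType.METHOD, ElementType.TYPE})
> > > public @interface UnsupportedOn {
> > >   BeamRunner[] value();
> > > }
> > > 
> > > A (hypothetical) transform could then be tagged like this, and the IDE 
> > > would show the information in quick documentation without any runtime 
> > > check:
> > > 
> > > @UnsupportedOn({BeamRunner.SPARK})
> > > public class MyStatefulTransform { /* ... */ }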
> > > 
> > > WDYT?
> > > 
> > > Etienne
