This is a good idea. It still needs to be fleshed out how the capabilities
of a Runner would be made visible to the user (apart from the compatibility
matrix). A dry-run feature would be useful, i.e. the user could run an
inspection on the pipeline to see whether it contains any features that are
not supported by the Runner.
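To make the dry-run idea concrete, here is a minimal sketch. All names (the
Feature enum, the DryRunCheck class, the example capability set) are
hypothetical illustrations, not the actual Beam capability matrix or API:

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical sketch of a dry-run check: compare the features a pipeline
// uses against a runner's declared capabilities, before actually running it.
public class DryRunCheck {
    enum Feature { STATEFUL_PARDO, TIMERS, SPLITTABLE_DOFN }

    // Illustrative capability set for one runner (not real Beam data).
    static final Set<Feature> RUNNER_SUPPORTS =
        EnumSet.of(Feature.STATEFUL_PARDO, Feature.TIMERS);

    // Report which of the pipeline's features the runner lacks.
    static Set<Feature> unsupported(Set<Feature> pipelineFeatures) {
        EnumSet<Feature> missing = EnumSet.copyOf(pipelineFeatures);
        missing.removeAll(RUNNER_SUPPORTS);
        return missing;
    }

    public static void main(String[] args) {
        Set<Feature> used = EnumSet.of(Feature.TIMERS, Feature.SPLITTABLE_DOFN);
        // Prints the features the targeted runner does not support.
        System.out.println("Unsupported: " + unsupported(used));
    }
}
```

The user would get this report at inspection time instead of an exception
at pipeline runtime.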
On 17.10.18 00:03, Rui Wang wrote:
Sounds like a good idea.
Sounds like, while coding, the user gets a list showing whether a feature is
supported on the different runners, and can check that list for the answer.
Is my understanding correct? Will this approach become slow as the number of
runners grows? (It's just a question, as I am not familiar with the
performance of combining a long list, annotations, and an IDE.)
-Rui
On Sat, Oct 13, 2018 at 11:56 PM Reuven Lax <re...@google.com> wrote:
Sounds like a good idea. I don't think it will work for all
capabilities (e.g. some of them, such as "exactly once", apply to all
of the API surface), but it would be useful for the ones we can capture.
On Thu, Oct 4, 2018 at 2:43 AM Etienne Chauchot <echauc...@apache.org> wrote:
Hi guys,
As part of our user experience improvement to attract new Beam
users, I would like to suggest something:
Today we only have the capability matrix to inform users about
feature support among runners. But they might discover only
when the pipeline runs, when they receive an exception, that a
given feature is not supported by the targeted runner.
I would like to suggest translating the capability matrix into
the API, with annotations for example, so that, while coding, the
user could know that, for now, a given feature is not supported
on the runner they target.
I know that the runner is only specified at pipeline runtime,
and that adding such code would leak runner implementation
details into the API and go against portability. So it could be
just informative annotations, like @Experimental for example,
with no annotation processor.
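To sketch what such a purely informative annotation could look like (the
annotation name, the method, and the runner name below are all hypothetical,
not actual Beam API):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class AnnotationSketch {
    // Hypothetical informative annotation: records which runners do NOT
    // support the annotated API. No annotation processor; @Documented makes
    // it show up in Javadoc and IDE tooltips while coding.
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface NotSupportedBy {
        String[] runners();
    }

    // Example usage on a fictional transform method.
    @NotSupportedBy(runners = {"SomeRunner"})
    static void someStatefulTransform() {}

    // Read the annotation back via reflection; returns an empty array
    // when the method is absent or unannotated.
    static String[] unsupportedRunners(String methodName) {
        try {
            NotSupportedBy info = AnnotationSketch.class
                .getDeclaredMethod(methodName)
                .getAnnotation(NotSupportedBy.class);
            return info == null ? new String[0] : info.runners();
        } catch (NoSuchMethodException e) {
            return new String[0];
        }
    }

    public static void main(String[] args) {
        System.out.println("Not supported by: "
            + Arrays.toString(unsupportedRunners("someStatefulTransform")));
    }
}
```

Since the annotation carries no enforcement, it stays purely informative,
which avoids leaking runner details into pipeline execution.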
WDYT?
Etienne