Hello Jozef, this change was not introduced in the PR you referenced;
that PR was just a refactor.
The conflicting change was added in [1] via [2], starting in Beam 2.29.0.
It is not clear to me why this was done, but maybe Kyle Weaver or
someone else has better context.
Let's continue the discussion
Hi,
Thank you very much for your comments and suggestions, totally agree with
you both. We will discuss it with the rest of the team and let you know the
resolution.
Thanks.
Regards,
On Wed, Feb 23, 2022 at 8:05 PM Ahmet Altay wrote:
> Hi Daniela,
>
> My suggestion would be to rely on github
Yes, I tried with Dataproc 2.0 and Flink 1.12. Cluster creation was fine,
but the Groovy tests fail at startup. This is the error:
> Task :sdks:python:apache_beam:testing:load_tests:run
16:44:49 INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Missing InfluxDB options. Metri
Good Morning/Afternoon/Evening folks,
The current support for beam-plugins is experimental, and we would like to
make it a first-class member of the Beam library for Python Runner v2.
This lets us load plugins into the runtime before starting the SdkHarness.
https://github.com/apache/beam/pull/
This is your daily summary of Beam's current flaky tests
(https://issues.apache.org/jira/issues/?jql=project%20%3D%20BEAM%20AND%20statusCategory%20!%3D%20Done%20AND%20labels%20%3D%20flake)
These are P1 issues because they have a major negative impact on the community
and make it hard to determine
This is your daily summary of Beam's current P1 issues, not including flaky
tests
(https://issues.apache.org/jira/issues/?jql=project%20%3D%20BEAM%20AND%20statusCategory%20!%3D%20Done%20AND%20priority%20%3D%20P1%20AND%20(labels%20is%20EMPTY%20OR%20labels%20!%3D%20flake).
See https://beam.apache.
Thanks Danny!
I did a first pass of comments, and I do like the approach. It needs some
justification for why this path should be chosen over alternative
implementations.
On Thu, Feb 24, 2022, 7:42 AM Danny McCormick
wrote:
> Hey everyone, I put together a design doc for adding Bundle Finalization
All JMS-related properties can be null (@Nullable).
Good point about the breaking change.
The part of your proposal I'm not a big fan of is the
getDynamic() method. Maybe we can mimic what I did in JdbcIO, with a fn
we can inject to define the destination.
But OK, I would be happy to review a PR
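The JdbcIO-style suggestion above could be sketched roughly as follows. This is a hedged illustration only: the names `DynamicWrite` and `destinationFn` are hypothetical, not actual JmsIO API; the point is just that the caller injects a function mapping each element to its destination name, instead of the transform exposing a getDynamic() method.

```java
import java.util.function.Function;

// Hypothetical sketch of an injected destination function, modeled on
// JdbcIO's injection style. Names here are illustrative, not JmsIO API.
public class DynamicWrite<T> {
  private final Function<T, String> destinationFn;

  public DynamicWrite(Function<T, String> destinationFn) {
    this.destinationFn = destinationFn;
  }

  // Resolve the JMS destination (queue/topic name) for one element.
  public String destinationFor(T element) {
    return destinationFn.apply(element);
  }

  public static void main(String[] args) {
    DynamicWrite<String> write = new DynamicWrite<>(
        msg -> msg.startsWith("order") ? "orders.queue" : "default.queue");
    System.out.println(write.destinationFor("order-123")); // orders.queue
    System.out.println(write.destinationFor("ping"));      // default.queue
  }
}
```

The appeal of this shape is that routing logic lives in user code and is trivially serializable with the transform, rather than being a method the sink must discover.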
Hi Jean Baptiste 😉
Thank you for your feedback, it is interesting.
In your proposal, the write transform would take a PCollection<JmsRecord>.
JmsRecord has a lot of properties that are related to the read operation
(jmsRedelivered, correlationId, etc.).
As a result, I think it won't be easy to crea
Hi Vincent,
It's Jean-Baptiste, not Jean-François, but it works as well ;)
I got your point; however, I think we can achieve the same with
JmsRecord. You can always use a DoFn at any part of your pipeline
(after reading from Kafka, Redis, whatever) where you can create a
JmsRecord. If JmsRecord co
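The DoFn suggestion above could look roughly like this. A hedged sketch only: to stay self-contained it models the DoFn body as a plain static method and uses an illustrative `Record` stand-in rather than the real JmsRecord class; the field names (`payload`, `destination`, `headers`) are assumptions, not actual JmsRecord fields.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the idea: a mapping step earlier in the pipeline builds a
// record that already carries its destination, so the write transform
// can stay PCollection<JmsRecord>-based. Field names are illustrative.
public class ToJmsRecord {
  // Minimal stand-in for the fields a JMS write would need.
  static class Record {
    final String payload;
    final String destination;           // queue or topic name
    final Map<String, String> headers;  // optional JMS properties
    Record(String payload, String destination, Map<String, String> headers) {
      this.payload = payload;
      this.destination = destination;
      this.headers = headers;
    }
  }

  // The "DoFn body": route each element to a destination.
  static Record process(String element) {
    Map<String, String> headers = new LinkedHashMap<>();
    headers.put("source", "pipeline");
    String dest = element.contains("alert") ? "alerts.topic" : "events.queue";
    return new Record(element, dest, headers);
  }

  public static void main(String[] args) {
    Record r = process("alert: disk full");
    System.out.println(r.destination); // alerts.topic
  }
}
```

In a real pipeline this logic would sit in a `DoFn.processElement` (or a `MapElements` lambda) placed between the source read and the JMS write.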
Hi Jean François
Pleased to hear from the author of JmsIO 😊.
Many thanks for your suggestion.
JmsRecord is used at read time; in most use cases we will use a mapper to
provide an object that will be used in several transforms in the pipeline.
The destination won't necessarily be included in the read Jm