Hi all,
I am trying to implement a custom TimestampPolicy so that the event
time is used instead of the processing time.
The pipeline reads from Kafka using:
.withTimestampPolicyFactory((tp, previousWatermark) -> new
EventTimestampPolicy(previousWatermark));
and the custom class with the implementation I saw
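For context, a minimal sketch of what such an EventTimestampPolicy might look like with the Java KafkaIO API. This assumes the event time is carried in the Kafka record timestamp and that a simple "last observed event time" watermark is good enough; String key/value types are placeholders:

```java
import java.util.Optional;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.io.kafka.TimestampPolicy;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.joda.time.Instant;

// Hypothetical EventTimestampPolicy: emits each record at its Kafka
// record timestamp (assumed to be the event time) and advances the
// watermark to the latest event time seen so far.
public class EventTimestampPolicy extends TimestampPolicy<String, String> {

  private Instant currentWatermark;

  public EventTimestampPolicy(Optional<Instant> previousWatermark) {
    // Resume from the previous watermark after a restart, if one exists.
    this.currentWatermark =
        previousWatermark.orElse(BoundedWindow.TIMESTAMP_MIN_VALUE);
  }

  @Override
  public Instant getTimestampForRecord(
      PartitionContext ctx, KafkaRecord<String, String> record) {
    Instant eventTime = new Instant(record.getTimestamp());
    // Monotonic watermark: never move it backwards for late records.
    if (eventTime.isAfter(currentWatermark)) {
      currentWatermark = eventTime;
    }
    return eventTime;
  }

  @Override
  public Instant getWatermark(PartitionContext ctx) {
    return currentWatermark;
  }
}
```

Note that this sketch makes the watermark jump straight to the newest event time, so out-of-order records arriving afterwards are late; Beam's built-in CustomTimestampPolicyWithLimitedDelay is an alternative if you want to allow a bounded delay.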
Hello!
I created this example:
https://github.com/j1cs/coder-error-beam
and here is the issue you mentioned:
https://github.com/apache/beam/issues/26003
Juan Cuzmar.
--- Original Message ---
On Monday, March 27th, 2023 at 10:58 AM, Alexey Romanenko
wrote:
> Hmm, it worked fine for me on a
That's right! For Loopback, it should be making localhost assumptions.
But that should only happen in Loopback mode; it could be that it
assumes any "external" environment is local, and that it requires Docker
mode to execute everything else. If the Flink side interacting with an arbitrary
I realized that my last comment doesn't make sense because, as you pointed
out earlier, LOOPBACK loops back to the submitting job process to execute
the job.
On Mon, Mar 27, 2023 at 9:34 AM Sherif Tolba
wrote:
> Oh, I might have missed the point about how the Flink portable runner
> works. Is
Hmm, it worked fine for me on a primitive pipeline with the default
AvroCoder for a simple InboundData implementation.
Could you create a GitHub issue for that
(https://github.com/apache/beam/issues)? It would be very helpful if you
provided the steps to reproduce this issue there.
Thanks,
Alexey
Oh, I might have missed the point about how the Flink portable runner
works. Is it the case that when I specify LOOPBACK, the binary sent
already includes the SDK and is able to spawn the Go routines on the
TaskManager without any additional config?
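As a point of comparison, on the Java SDK side the portable environment is chosen through PortablePipelineOptions. This is a hedged sketch, not the Go SDK flags discussed above, and the job server address is an assumption for illustration:

```java
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.PortablePipelineOptions;

public class LoopbackOptionsExample {
  public static void main(String[] args) {
    PortablePipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(PortablePipelineOptions.class);

    // LOOPBACK: the runner "loops back" to the job-submitting process,
    // which hosts the SDK harness itself; no SDK containers are started.
    // Intended for local development and testing.
    options.setDefaultEnvironmentType("LOOPBACK");

    // Flink job server endpoint (assumed address for illustration).
    options.setJobEndpoint("localhost:8099");

    // The default, DOCKER, would instead launch an SDK harness container
    // next to each TaskManager; EXTERNAL points at a separately managed
    // worker pool service.
  }
}
```

This matches the observation earlier in the thread: LOOPBACK only makes sense when the submitting process and the runner can reach each other, which is why localhost assumptions show up there.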
On Fri, Mar 24, 2023 at 6:02 PM Robert
Hi Robert,
I started implementing the gRPC server for the worker service, and while
doing so, a question came to mind: the flags (endpoints) passed by the
runner to the worker service use "localhost" (this is what I noticed when
experimenting with the Python pool). Now, if I take these and pass them