We used to run legacy sources through the old-style Read translation.
Switching that to SDF might have broken ReadFromPubSub. Could you check in
the Flink jobs whether they use the SDF code path or the Read translation?
With the Read translation you should be seeing UnboundedSourceWrapper.
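If it helps, one way to tell which path the runner took is to grep the
taskmanager logs for UnboundedSourceWrapper. Recent Beam releases also
expose a use_deprecated_read experiment that forces the legacy
translation; a minimal sketch of the pipeline options is below (the
flink_master address is a placeholder, and the flag name should be
verified against your Beam version):

```python
# Hedged sketch: pipeline options that force the legacy (non-SDF) Read
# translation via the use_deprecated_read experiment. The flink_master
# address below is a placeholder for illustration.
pipeline_args = [
    "--runner=FlinkRunner",
    "--flink_master=localhost:8081",  # placeholder address
    "--streaming",
    "--experiments=use_deprecated_read",
]

# If the legacy translation is active, the taskmanager logs should
# mention UnboundedSourceWrapper; with the SDF wrapping they won't.
LEGACY_MARKER = "UnboundedSourceWrapper"
```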
Yeah, I’m able to run that. The apache_beam.io.ReadFromPubSub transform
works just fine, but only on the DirectRunner in Python. On Flink we’re
using the Java implementation via the external transform
apache_beam.io.external.gcp.pubsub.ReadFromPubSub.
Is there a different way to do this?
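For reference, the wiring looks roughly like this; a minimal sketch,
where the subscription and expansion-service address are placeholders
rather than our real values:

```python
# Minimal sketch of reading from Pub/Sub via the cross-language (Java)
# transform. The subscription and expansion-service address are
# placeholders, not values from this thread.
SUBSCRIPTION = "projects/my-project/subscriptions/my-sub"  # placeholder
EXPANSION_SERVICE = "localhost:8097"  # placeholder expansion service

def make_pipeline():
    # Imported lazily so the sketch stays importable without Beam.
    import apache_beam as beam
    from apache_beam.io.external.gcp.pubsub import ReadFromPubSub

    p = beam.Pipeline()
    (p
     | "ReadPubSub" >> ReadFromPubSub(
           subscription=SUBSCRIPTION,
           expansion_service=EXPANSION_SERVICE)
     | "Print" >> beam.Map(print))
    return p
```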
Are you able to run streaming word count on the same setup?
On Tue, Oct 27, 2020 at 5:39 PM Sam Bourne wrote:
We updated from Beam 2.18.0 to 2.24.0 and have been having issues using
the Python ReadFromPubSub external transform on Flink 1.10. It seems like
it starts up just fine, but it doesn’t consume any messages.
I tried to reduce it to a simple example and tested back to Beam 2.22.0
but have gotten