Dear Beam Community,

We would like to run a streaming job from Pub/Sub to BigQuery that handles
schema updates "smoothly" (i.e. without having to stop the pipeline and
start a new job). Any suggestions on a suitable method or architecture to
achieve this?

We found the unresolved Stack Overflow question below [1], which refers
readers to the Beam user list.

Thanks a lot for your support!

[1]
https://stackoverflow.com/questions/60496227/how-can-i-write-streaming-dataflow-pipelines-that-support-schema-evolution

-- 
Pierre
