Hello! I'm an avid Apache Beam user (on Dataflow), and we use Beam to stream blockchain data to various sinks. I recently noticed memory issues across all of our pipelines but have not yet been able to find the root cause, and I was hoping someone on your team might be able to help. If this isn't the right avenue for it, please let me know how I should reach out.
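
For context, here is a minimal sketch of the general shape of our streaming pipelines (the subscription name and the print "sink" are placeholders rather than our actual code); the memory growth shows up in pipelines built roughly like this, starting from ReadFromPubSub:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Streaming mode; Dataflow runner options omitted for brevity.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            # Placeholder subscription path, not our real one.
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/my-subscription"
            )
            # ReadFromPubSub yields bytes by default.
            | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
            # Stand-in for the downstream transforms and sinks we actually use.
            | "Log" >> beam.Map(print)
        )


if __name__ == "__main__":
    run()
```
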
The details are on Stack Overflow: https://stackoverflow.com/questions/76950068/memory-leak-in-apache-beam-python-readfrompubsub-io

Thanks,
Chenghan
CTO | Allium