Hi all, I'm trying to run a Beam pipeline with Spark on YARN. My pipeline is written in Python, so I need to use a portable runner. Does anybody know how I should configure the job server parameters, especially --spark-master-url? Is there anything else I need to be aware of with such a setup?
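For context, here is a minimal sketch of how I'm currently launching the pipeline against the job server. The job_endpoint address and the environment_type value are assumptions on my part (I haven't verified them against YARN), which is partly what I'm asking about:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Rough attempt; the job server address and environment settings below
# are guesses, not values I've confirmed work on YARN.
options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",  # assuming the Spark job server runs locally on the master node
    "--environment_type=LOOPBACK",    # placeholder; unsure what's appropriate for YARN workers
])

with beam.Pipeline(options=options) as p:
    (p
     | "Create" >> beam.Create(["hello", "world"])
     | "Print" >> beam.Map(print))
```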
If it makes a difference, I'm using Google Dataproc.

Best,
Kamil