I've got a (dirty) use case: I have an existing Spark batch job whose output
I would like to feed into my Beam pipeline (assuming it runs on the
SparkRunner). I tried running everything as one job (the output is heavily
reduced, so it isn't big data and it's fine to do something like
Create.of(rdd.collect())), but that fails because of the two separate Spark
contexts.
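
For reference, here is a minimal sketch of what I'm doing today
(existingSc, myBatchJob and options are placeholders, not my real code):

    import java.util.List;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // context created by the existing batch job
    JavaSparkContext existingSc =
        new JavaSparkContext(new SparkConf().setAppName("batch"));

    // the job's output is heavily reduced, so collecting it is cheap
    List<String> reduced = myBatchJob(existingSc).collect();

    Pipeline p = Pipeline.create(options);  // SparkRunner spins up its own context
    p.apply(Create.of(reduced));
    p.run().waitUntilFinish();  // blows up: second SparkContext in the same JVM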
Is it possible to build the Beam pipeline on an existing Spark context?
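
I did spot SparkContextOptions in the Spark runner; assuming I'm reading it
right, something like this is what I'm hoping for (untested sketch, reusing
the existingSc and reduced placeholders from above):

    import org.apache.beam.runners.spark.SparkContextOptions;
    import org.apache.beam.runners.spark.SparkRunner;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    SparkContextOptions opts =
        PipelineOptionsFactory.as(SparkContextOptions.class);
    opts.setRunner(SparkRunner.class);
    opts.setUsesProvidedSparkContext(true);
    opts.setProvidedSparkContext(existingSc);  // reuse the batch job's context

    Pipeline p = Pipeline.create(opts);
    p.apply(Create.of(reduced));
    p.run().waitUntilFinish();

Is that the intended route, or is there a better way?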
thx, Antony.
