Hi!

I have a Spark SQL program with one input and six outputs (writes). When
executing this program, every call to write() executes the whole plan. My
problem is that I want all these writes to happen in parallel (inside
one execution plan), because all writes share a common, compute-intensive
subpart that could be reused across plans. Is there a way to do this?
(Caching is not a solution because the input dataset is way too large...)
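
To illustrate, here is a minimal sketch of what my program looks like
(the paths, column names, and the aggregation are placeholder stand-ins,
not my real job):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MultiWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("multi-write-sketch").getOrCreate()

    // One very large input (placeholder path).
    val input = spark.read.parquet("/data/input")

    // Stand-in for the common, compute-intensive subpart
    // that every output depends on.
    val shared = input.groupBy("key").agg(sum("value").as("total"))

    // Six independent outputs: each write() is its own action,
    // so Spark builds a separate plan and recomputes `shared` six times.
    (1 to 6).foreach { i =>
      shared.filter(col("key") % 6 === i - 1)
        .write.mode("overwrite").parquet(s"/data/out$i")
    }
  }
}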

Hoping for advice ...

Best, Rico B.

