Jean wrote:
> Have you considered using pools?
> http://spark.apache.org/docs/latest/job-scheduling.html#fair-scheduler-pools
> 
> I haven't tried it myself, but it looks like the pool setting is applied
> per thread, which means it's possible to configure the fair scheduler so
> that more than one job runs at a time. Although each of them would
> probably get fewer workers...
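
If I read the linked docs right, the setup Jean describes would look
roughly like this. This is a minimal, untested sketch: the FAIR-mode
config, the pool names ("pool1"/"pool2") and the toy jobs are my own
assumptions; only spark.scheduler.mode and
sc.setLocalProperty("spark.scheduler.pool", ...) come from the docs:

import org.apache.spark.{SparkConf, SparkContext}

object FairPoolsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("fair-pools-sketch")
      .set("spark.scheduler.mode", "FAIR") // turn on the fair scheduler
    val sc = new SparkContext(conf)

    // The pool is a thread-local property, so each thread that submits
    // jobs can choose its own pool.
    def inPool(pool: String)(body: => Unit): Thread = {
      val t = new Thread(new Runnable {
        def run(): Unit = {
          sc.setLocalProperty("spark.scheduler.pool", pool)
          body
        }
      })
      t.start()
      t
    }

    // Two jobs submitted from different threads, in different pools,
    // so the fair scheduler can run them side by side.
    val a = inPool("pool1") { sc.parallelize(1 to 1000000).map(_ * 2).count() }
    val b = inPool("pool2") { sc.parallelize(1 to 1000000).map(_ + 1).count() }
    a.join()
    b.join()
    sc.stop()
  }
}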

Thanks for the tip, but I don't think that would work in this case: while
the write to Redshift is in progress, the cluster sits idle and the new
tasks haven't even appeared in the pending queue yet, so changing how the
queued jobs are scheduled won't help.
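
What I suspect is needed instead is to submit the blocking Redshift write
from its own thread, so the driver is free to keep launching other jobs in
the meantime. A rough, untested sketch follows: the
com.databricks.spark.redshift format and its url/dbtable/tempdir options
are the connector's documented ones, but the connection values are
placeholders and the Future-based submission is just one way to do it:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ConcurrentWriteSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("concurrent-write"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(1 to 1000).toDF("n")

    // Kick off the Redshift write on a background thread; the driver
    // thread is then free to submit more jobs while it runs.
    // URL, table name and tempdir below are placeholders.
    val write = Future {
      df.write
        .format("com.databricks.spark.redshift")
        .option("url", "jdbc:redshift://example-host:5439/dev?user=u&password=p")
        .option("dbtable", "my_table")
        .option("tempdir", "s3n://my-bucket/tmp/")
        .mode("append")
        .save()
    }

    // Meanwhile, other jobs can still be scheduled on the idle executors.
    sc.parallelize(1 to 1000000).map(_ * 2).count()

    Await.result(write, Duration.Inf)
    sc.stop()
  }
}

Whether that actually helps would depend on what exactly the write blocks
on, of course.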



