Probably because only coarse-grained mode respects `spark.cores.max` right
now. See (and maybe review ;-)) #9027
<https://github.com/apache/spark/pull/9027> (sorry for the shameless plug).
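For context, a minimal sketch of the coarse-grained configuration being discussed; the master URL, app name, and core count below are placeholders, not values from this thread:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical values; adjust the Mesos master URL and core cap for your cluster.
    val conf = new SparkConf()
      .setAppName("cores-max-example")
      .setMaster("mesos://zk://zk1:2181/mesos")
      .set("spark.mesos.coarse", "true")  // coarse-grained mode
      .set("spark.cores.max", "8")        // total cores the app may acquire cluster-wide

    val sc = new SparkContext(conf)

With spark.mesos.coarse set to false (fine-grained mode), the spark.cores.max cap is not enforced today, per the note above.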

iulian

On Wed, Nov 4, 2015 at 5:05 PM, Timothy Chen <tnac...@gmail.com> wrote:

> Hi Chris,
>
> How does coarse-grained mode give you less starvation in your overloaded
> cluster? Is it just because it allocates all resources at once (which I
> think, in an overloaded cluster, allows fewer things to run at once)?
>
> Tim
>
>
> On Nov 4, 2015, at 4:21 AM, Heller, Chris <chel...@akamai.com> wrote:
>
> We’ve been making use of both. Fine-grained mode makes sense for more ad-hoc
> workloads, and coarse-grained for more job-like loads on a common data
> set. My preference is fine-grained mode in all cases, but the overhead
> associated with its startup and the possibility that an overloaded cluster
> would be starved for resources make coarse-grained mode a reality at the
> moment.
>
> On Wednesday, 4 November 2015 5:24 AM, Reynold Xin <r...@databricks.com>
> wrote:
>
>
> If you are using Spark with Mesos fine-grained mode, can you please
> respond to this email explaining why you use it over coarse-grained
> mode?
>
> Thanks.
>


--
Iulian Dragos

------
Reactive Apps on the JVM
www.typesafe.com
