Hi Sam,

The Spark cluster mode merge should also include a documentation update
with the details, but in a nutshell it supports launching drivers that
are managed in your cluster, instead of launching them yourself via
client mode. YARN and Standalone both support cluster mode, so I've
added support for it in Mesos. It supports HA with ZooKeeper and
supervise mode, and comes with its own Mesos cluster mode web UI that
shows basic information for now. It doesn't support PySpark yet, and in
the future it could ideally link directly to the running driver's web
UI and the Mesos sandbox to see the results.
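
For example, a cluster mode submission goes through the new dispatcher;
it looks something like the following (host names, ports, and paths are
placeholders, so check the docs for the exact flags):

  # Start the dispatcher, the Mesos framework that manages drivers;
  # pointing it at a zk:// master URL is what enables HA.
  ./sbin/start-mesos-dispatcher.sh --master mesos://zk://zk-host:2181/mesos

  # Submit to the dispatcher instead of the Mesos master;
  # --supervise restarts the driver if it fails.
  ./bin/spark-submit \
    --master mesos://dispatcher-host:7077 \
    --deploy-mode cluster \
    --supervise \
    --class org.apache.spark.examples.SparkPi \
    http://app-host/path/to/examples.jar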

Cluster mode actually has nothing to do with dynamic allocation; it
simply runs and manages the Spark driver for each app via a Mesos
framework.

Tim

On Wed, May 6, 2015 at 6:41 AM, Sam Bessalah <samkiller....@gmail.com> wrote:
> Hi Tim.
> Just a follow-up, more related to your work on the recently merged Spark
> Cluster Mode for Mesos.
> Can you elaborate on how it works compared to Standalone mode?
> And do you maintain the dynamic allocation of Mesos resources in
> cluster mode, unlike the coarse-grained mode?
>
> On Tue, May 5, 2015 at 9:54 PM, Timothy Chen <tnac...@gmail.com> wrote:
>>
>> Hi Gidon,
>>
>> 1. Yes, each Spark application is wrapped in a new Mesos framework.
>>
>> 2. In fine-grained mode, the Spark scheduler registers a custom Mesos
>> executor per slave, which hosts the Spark executor, and each Spark
>> task is launched as a separate Mesos task run by that executor. It's
>> hard to tell exactly what you're asking, since "task" and "executor"
>> are terms used in both Spark and Mesos; prefixing them with
>> (Mesos|Spark) would make the question clearer.
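>>
>> Roughly, the terms map like this in fine-grained mode:
>>
>>   Mesos executor -> one per slave per framework; hosts the Spark executor
>>   Mesos task     -> carries exactly one Spark task
>>   Spark executor -> the JVM that actually runs Spark tasks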
>>
>> I'm not sure what you mean by a "slice of app Executor", but in
>> fine-grained mode there is a fixed resource cost to launch the
>> per-slave executor, plus a cpu/mem cost for each Mesos task that
>> carries a Spark task. Each framework is given offers by the Mesos
>> master, and each has the opportunity to use an offer or not.
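>>
>> To make the offer cycle concrete, here is a minimal sketch (my own
>> illustration, not Spark's actual code) of the resourceOffers callback
>> a framework implements against the Mesos Java/Scala API; buildTask is
>> a hypothetical helper that packs a TaskInfo for the offer:
>>
>>   import java.util.{Collections, List => JList}
>>   import org.apache.mesos.{Scheduler, SchedulerDriver}
>>   import org.apache.mesos.Protos._
>>   import scala.collection.JavaConverters._
>>
>>   // Called whenever the Mesos master sends this framework offers.
>>   def resourceOffers(driver: SchedulerDriver, offers: JList[Offer]): Unit = {
>>     for (offer <- offers.asScala) {
>>       val cpus = offer.getResourcesList.asScala
>>         .find(_.getName == "cpus").map(_.getScalar.getValue).getOrElse(0.0)
>>       if (cpus >= 1.0) {
>>         // Enough resources: launch a task against this offer.
>>         val task: TaskInfo = buildTask(offer)  // hypothetical helper
>>         driver.launchTasks(Collections.singletonList(offer.getId),
>>                            Collections.singletonList(task))
>>       } else {
>>         // Not useful to us: decline so other frameworks can use it.
>>         driver.declineOffer(offer.getId)
>>       }
>>     }
>>   }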
>>
>> 3. In coarse-grained mode the scheduler launches a
>> CoarseGrainedExecutorBackend on each slave, which registers back with
>> the CoarseGrainedSchedulerBackend via the Akka driverUrl. The
>> CoarseGrainedSchedulerBackend can then schedule individual Spark tasks
>> onto those long-running executor backends. I believe these "mini-tasks"
>> are the same as regular Spark tasks; instead of running one Mesos task
>> per Spark task, the tasks are distributed to the long-running Spark
>> executors.
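>>
>> For example, coarse-grained mode is enabled purely through
>> configuration; the values here are illustrative:
>>
>>   # conf/spark-defaults.conf
>>   spark.mesos.coarse      true
>>   spark.cores.max         8
>>   spark.executor.memory   4g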
>>
>> Mesos resources become more static in coarse-grained mode, since it
>> just launches a number of these CoarseGrainedExecutorBackends and
>> keeps them running until the driver stops. Note this is subject to
>> change with dynamic allocation and other Spark/Mesos patches going
>> into Spark.
>>
>> Tim
>>
>> On Tue, May 5, 2015 at 6:19 AM, Gidon Gershinsky <gi...@il.ibm.com> wrote:
>> > Hi all,
>> >
>> > I have a few questions on how Spark is integrated with Mesos - any
>> > details, or pointers to a design document / relevant source, will be
>> > much
>> > appreciated.
>> >
>> > I'm aware of this description,
>> > https://github.com/apache/spark/blob/master/docs/running-on-mesos.md
>> >
>> > But it's pretty high-level as far as the design is concerned, while
>> > I'm looking for lower-level details on how Spark actually calls the
>> > Mesos APIs, how it launches tasks, etc.
>> >
>> > Namely,
>> > 1. Does Spark create a Mesos Framework instance for each Spark
>> > application (SparkContext)?
>> >
>> > 2. Citing from the link above,
>> >
>> > "In "fine-grained" mode (default), each Spark task runs as a separate
>> > Mesos task ... comes with an additional overhead in launching each task
>> > "
>> >
>> >
>> > Does it mean that the Mesos slave launches a Spark Executor for each
>> > task? (unlikely..) Or does the slave host have a number of Spark
>> > Executors pre-launched (one per application) and send the task to its
>> > application's executor?
>> > What is the resource offer then? Is it a host's cpu slice offered to
>> > any Framework (Spark app/context), which sends the task to run on it?
>> > Or is it a 'slice of app Executor' that got idle and is offered to its
>> > Framework?
>> >
>> > 3. "The "coarse-grained" mode will instead launch only one long-running
>> > Spark task on each Mesos machine, and dynamically schedule its own
>> > "mini-tasks" within it. "
>> >
>> > What is this special task? Is it the Spark app Executor? How are
>> > these mini-tasks different from 'regular' Spark tasks? How are
>> > resources allocated/offered in this mode?
>> >
>> >
>> >
>> > Regards,
>> > Gidon
>> >
>>
>
