Hi Billy,

Thanks for the reply. This is helpful for understanding Mesos more deeply.

Regards,
Pradeep

On 5 February 2015 at 10:01, Billy Bones <gael.ther...@gmail.com> wrote:

> WARNING: The following is based on my limited understanding of the Mesos
> architecture and internals; you should not take it as a definitive answer,
> and you may want to wait for more experienced suggestions.
>
> From my limited understanding of the Mesos resource allocation process, I
> think that in your kind of environment (ARM / x86 / GPU / FPGA) it will not
> allocate resources very wisely: the default algorithm is not that smart and
> treats resources as interchangeable commodities, without accounting for
> their real speed, architecture, etc.
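>
> As far as I know, the allocator just hands out scalar quantities (cpus,
> mem, disk), so any hardware awareness has to live in the framework: your
> scheduler can inspect slave attributes on each offer and decline the ones
> that don't fit. A rough, untested sketch with the Python bindings,
> assuming you tag each slave yourself with an "arch" attribute (that name
> is my own invention, not a Mesos convention):
>
>     # Sketch only: assumes each slave was started with e.g.
>     #   mesos-slave --attributes="arch:armv7"
>     from mesos.interface import Scheduler
>
>     class ArchAwareScheduler(Scheduler):
>         def __init__(self, wanted_arch):
>             self.wanted_arch = wanted_arch  # e.g. "armv7" or "x86_64"
>
>         def resourceOffers(self, driver, offers):
>             for offer in offers:
>                 # Text attributes carry their value in attr.text.value.
>                 attrs = {a.name: a.text.value
>                          for a in offer.attributes if a.HasField("text")}
>                 if attrs.get("arch") != self.wanted_arch:
>                     driver.declineOffer(offer.id)  # wrong architecture
>                     continue
>                 # ... build TaskInfos and call
>                 # driver.launchTasks(offer.id, tasks) here.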
>
> I read some threads earlier about the need to improve this specific part
> of the "kernel", but they didn't mention your architectures and focused
> mainly on the x86 family.
> I think that integrating GPUs and FPGAs would be awesome!
>
> One nice feature would be for the master to look more closely at the
> registered slaves' architectures and resource performance before offering
> any resources to a task.
>
> This kind of feature could be implemented when the slave tries to register
> with the master, as a pre-flight test (a benchmark?); see the sketch below.
> That would then enable smarter offers and resource scheduling.
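>
> As a stopgap, I believe you can already feed that kind of information in
> yourself: run your benchmark in the slave's start-up script and pass the
> result through the slave's --attributes flag (the flag exists; the
> attribute names here are made up), e.g.:
>
>     mesos-slave --master=zk://... \
>         --attributes="arch:x86_64;cpu_score:1234"
>
> Frameworks then see those attributes on every offer coming from that
> slave and can schedule accordingly, as in the sketch above.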
>
> Anyway, long story short, I don't think so, but you should wait for more
> experienced answers.
>
> 2015-02-05 0:00 GMT+01:00 Pradeep Kiruvale <pradeepkiruv...@gmail.com>:
>
>> Hi All,
>>
>> I am new to Mesos, and I have heard and read a lot about it.
>>
>> I have a few doubts regarding resource allocation in Mesos; please help
>> me clarify them.
>>
>> In a data center with thousands of heterogeneous nodes (x86, ARM, GPU,
>> FPGA), can Mesos really allocate co-located resources to an incoming
>> application so that it finishes its tasks faster?
>>
>> How are these resource constraints solved? What kind of constraint
>> solver does it use?
>>
>> Is the allocation policy configurable?
>>
>> Thanks & Regards,
>> Pradeep
>>
>>
>>
>
