As I've tried cgroups, it seems the isolation is done by percentage rather than by number of cores. E.g. I've set the min share to 256 - I still see all 8 cores, but I can only load about 20% of each core.
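
For reference, a minimal sketch of the kind of check that shows this (the rdd name, the 10-second window and the counters are illustrative assumptions, not taken from the actual job):

import java.util.concurrent.atomic.AtomicLong

// Spin one busy thread per visible core for ~10 seconds and count iterations.
// Under cgroup CPU shares the JVM still reports all 8 cores, but the total
// work done is throttled to the allotted share rather than to a core count.
rdd.foreachPartition { _ =>
  val cores = Runtime.getRuntime.availableProcessors
  val count = new AtomicLong(0)
  val deadline = System.currentTimeMillis + 10000
  val threads = (1 to cores).map { _ =>
    new Thread(new Runnable {
      def run(): Unit = while (System.currentTimeMillis < deadline) count.incrementAndGet()
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())
  println("visible cores: " + cores + ", iterations in 10s: " + count.get)
}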

Thanks,
Peter Rudenko
On 2015-11-10 15:52, Saisai Shao wrote:
From my understanding, it depends on whether you have enabled CGroups isolation in YARN. By default it is not enabled, which means you could allocate one core but spawn a lot of threads in your task to occupy the CPU resources; the core count is only a logical limitation. For YARN CPU isolation you may refer to this post (http://hortonworks.com/blog/apache-hadoop-yarn-in-hdp-2-2-isolation-of-cpu-resources-in-your-hadoop-yarn-clusters/).
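
A quick sketch of what I mean by a logical limitation (the rdd and the thread loop are illustrative, assuming the executor was launched with --executor-cores 1 and CGroups are off):

// Submitted with e.g. spark-submit --master yarn --executor-cores 1 ...
// Without CGroups enabled, the JVM inside the task still reports every
// physical core, and nothing prevents starting a busy thread on each one.
rdd.foreachPartition { _ =>
  val visible = Runtime.getRuntime.availableProcessors  // 8 on that machine, not 1
  (1 to visible).foreach(_ =>
    new Thread(new Runnable { def run(): Unit = while (true) {} }).start())
}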

Thanks
Jerry

On Tue, Nov 10, 2015 at 9:33 PM, Peter Rudenko <petro.rude...@gmail.com> wrote:

    Hi, I have a question: how does core isolation work with Spark
    on YARN? E.g. I have a machine with 8 cores, but launched an
    executor with --executor-cores 1, and then did something like:

    rdd.foreachPartition { _ =>  // for all visible cores: burn a core in a new thread
      (1 to Runtime.getRuntime.availableProcessors).foreach(_ =>
        new Thread(new Runnable { def run(): Unit = while (true) {} }).start())
    }

    Will it see 1 core or all 8 cores?

    Thanks,
    Peter Rudenko


