Re: [Yarn] Executor cores isolation

2015-11-10 Thread Jörn Franke
I would have to check the Spark source code, but theoretically you can limit 
the number of threads at the JVM level; maybe Spark does this. Alternatively, 
you can use cgroups, but that introduces other complexity.
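
For example, a minimal sketch of such a JVM-level cap, assuming the Scala API, 
an existing SparkContext `sc` and RDD `rdd`, and a hypothetical 
doCpuHeavyWork() standing in for the per-thread work:

import java.util.concurrent.{Executors, TimeUnit}

// Read the advertised allocation on the driver and capture it in the closure
// (defaults to 1, matching --executor-cores 1 in the question).
val executorCores = sc.getConf.getInt("spark.executor.cores", 1)

rdd.foreachPartition { _ =>
  // Cap in-task parallelism explicitly instead of one thread per visible core.
  val pool = Executors.newFixedThreadPool(executorCores)
  (1 to executorCores).foreach { _ =>
    pool.submit(new Runnable {
      def run(): Unit = doCpuHeavyWork() // hypothetical placeholder
    })
  }
  pool.shutdown()
  pool.awaitTermination(10, TimeUnit.MINUTES)
}

The cap only helps if your own code honours it; nothing in the JVM itself 
stops a task from creating more threads.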

> On 10 Nov 2015, at 14:33, Peter Rudenko wrote:
> 
> Hi, I have a question: how does core isolation work in Spark on YARN? 
> E.g. I have a machine with 8 cores, but launched a worker with 
> --executor-cores 1, and after doing something like:
> 
> rdd.foreachPartition(_ => { for each visible core: burn the core in a new thread })
> 
> Will it see 1 core or all 8 cores?
> 
> Thanks,
> Peter Rudenko
> 

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: [Yarn] Executor cores isolation

2015-11-10 Thread Peter Rudenko
I've tried cgroups - it seems the isolation is done by percentage rather 
than by core count. E.g. I set the min share to 256 - I still see all 8 cores, 
but I can only load about 20% of each core.


Thanks,
Peter Rudenko
On 2015-11-10 15:52, Saisai Shao wrote:
From my understanding, it depends on whether you enabled CGroups 
isolation in YARN or not. By default it is not enabled, which means you could 
allocate one core but spawn a lot of threads in your task to occupy the 
CPU; the core count is just a logical limit. For YARN CPU isolation 
you may refer to this post 
(http://hortonworks.com/blog/apache-hadoop-yarn-in-hdp-2-2-isolation-of-cpu-resources-in-your-hadoop-yarn-clusters/). 



Thanks
Jerry

On Tue, Nov 10, 2015 at 9:33 PM, Peter Rudenko wrote:


Hi, I have a question: how does core isolation work in Spark
on YARN? E.g. I have a machine with 8 cores, but launched a worker
with --executor-cores 1, and after doing something like:

rdd.foreachPartition(_ => { for each visible core: burn the core in a new
thread })

Will it see 1 core or all 8 cores?

Thanks,
Peter Rudenko






[Yarn] Executor cores isolation

2015-11-10 Thread Peter Rudenko
Hi, I have a question: how does core isolation work in Spark on 
YARN? E.g. I have a machine with 8 cores, but launched a worker with 
--executor-cores 1, and after doing something like:

rdd.foreachPartition(_ => { for each visible core: burn the core in a new thread })

Will it see 1 core or all 8 cores?
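
Concretely, a minimal probe of what the executor JVM reports (a sketch, 
assuming the Scala API; the output lands in each executor's stderr log):

sc.parallelize(1 to 4, 4).foreachPartition { _ =>
  // On a plain (non-cgroup-isolated) JVM this reports the machine's cores,
  // not the YARN allocation, regardless of --executor-cores.
  val visible = Runtime.getRuntime.availableProcessors()
  println(s"JVM sees $visible cores")
}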

Thanks,
Peter Rudenko



Re: [Yarn] Executor cores isolation

2015-11-10 Thread Saisai Shao
From my understanding, it depends on whether you enabled CGroups isolation
in YARN or not. By default it is not enabled, which means you could allocate
one core but spawn a lot of threads in your task to occupy the CPU; the core
count is just a logical limit. For YARN CPU isolation you may refer to this
post (
http://hortonworks.com/blog/apache-hadoop-yarn-in-hdp-2-2-isolation-of-cpu-resources-in-your-hadoop-yarn-clusters/
).
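
A minimal sketch of that burn test (assuming the Scala API and an existing
`rdd`): even with --executor-cores 1, a task can start one busy-looping
thread per visible core, and without cgroup isolation all 8 cores will show
load in top.

rdd.foreachPartition { _ =>
  val visibleCores = Runtime.getRuntime.availableProcessors() // typically 8 here
  val threads = (1 to visibleCores).map { _ =>
    val t = new Thread(new Runnable {
      // Busy-loop for ~30 seconds so the load is visible on the node.
      def run(): Unit = {
        val stop = System.currentTimeMillis() + 30000
        while (System.currentTimeMillis() < stop) {}
      }
    })
    t.start()
    t
  }
  threads.foreach(_.join())
}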

Thanks
Jerry

On Tue, Nov 10, 2015 at 9:33 PM, Peter Rudenko 
wrote:

> Hi, I have a question: how does core isolation work in Spark on YARN?
> E.g. I have a machine with 8 cores, but launched a worker with
> --executor-cores 1, and after doing something like:
>
> rdd.foreachPartition(_ => { for each visible core: burn the core in a new thread })
>
> Will it see 1 core or all 8 cores?
>
> Thanks,
> Peter Rudenko
>
>