Hi Imran,
here is my use case:
There is a 1K-node cluster, and jobs suffer performance degradation because
of a single node. It's rather hard to convince Cluster Ops to decommission a
node because of "performance degradation". Imagine 10 dev teams chasing a
single ops team, each with a valid reason (a node has problems).
Serega, can you explain a bit more why you want this ability?
If the node is really bad, wouldn't you want to decommission the NM entirely?
If you've got heterogeneous resources, then node labels seem like they would
be more appropriate -- and I don't feel great about adding workarounds for
the
You can try with Yarn node labels:
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeLabel.html
Then you can whitelist nodes.
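If it helps, the node-label route usually looks roughly like this (a sketch;
the label name and hostnames are made up, while the `spark.yarn.*.nodeLabelExpression`
properties are the standard Spark-on-YARN settings):

```shell
# Create a label and attach it to the known-good nodes (hostnames are examples).
yarn rmadmin -addToClusterNodeLabels "good"
yarn rmadmin -replaceLabelsOnNode "node01.example.com=good node02.example.com=good"

# Ask Spark to request its AM and executor containers only on labeled nodes.
spark-submit \
  --master yarn \
  --conf spark.yarn.am.nodeLabelExpression=good \
  --conf spark.yarn.executor.nodeLabelExpression=good \
  ...
```

Note this is a whitelist approach: instead of blacklisting one bad node you
label every good node, which can be heavy to maintain on a 1K-node cluster.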
> On 19.01.2019, at 00:20, Serega Sheypak wrote:
>
> Hi, is there any possibility to tell Scheduler to blacklist specific nodes in
> advance?
The new issue is https://issues.apache.org/jira/browse/SPARK-26688.
On Tue, Jan 22, 2019 at 11:30 AM Attila Zsolt Piros wrote:
> Hi,
>
> >> Is it this one: https://github.com/apache/spark/pull/23223 ?
>
> No. My old development was https://github.com/apache/spark/pull/21068,
> which is closed.
Hi,
>> Is it this one: https://github.com/apache/spark/pull/23223 ?
No. My old development was https://github.com/apache/spark/pull/21068,
which is closed.
This would be a new improvement, tracked by a new Apache JIRA issue (
https://issues.apache.org) and a new GitHub pull request.
>> Can I try to reach you through Cloudera Support portal?
Hi Apiros, thanks for your reply.
Is it this one: https://github.com/apache/spark/pull/23223 ?
Can I try to reach you through Cloudera Support portal?
On Mon, Jan 21, 2019 at 20:06, attilapiros wrote:
> Hello, I was working on this area last year (I have developed the
> YarnAllocatorBlacklistTracker)
Hello, I was working on this area last year (I developed the
YarnAllocatorBlacklistTracker), and if you haven't found any solution for
your problem, I can introduce a new config which would contain a sequence of
always-blacklisted nodes. This way blacklisting would improve a bit again :)
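To make the idea concrete, such a setting might be used like this (purely
illustrative; the property name `spark.yarn.blacklist.nodes` and the hostnames
are hypothetical, since the config proposed above does not exist yet):

```shell
# Hypothetical: keep two known-bad hosts permanently blacklisted for this app,
# so the YARN allocator never requests executor containers on them.
spark-submit \
  --master yarn \
  --conf spark.yarn.blacklist.nodes=badnode1.example.com,badnode2.example.com \
  ...
```

Compared with node labels, this would let one team exclude a single bad node
without touching cluster-wide label configuration.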
From: Li Gao
Sent: Saturday, January 19, 2019 8:43 AM
To: Felix Cheung
Cc: Serega Sheypak; user
Subject: Re: Spark on Yarn, is it possible to manually blacklist nodes before
running spark job?
on yarn it is impossible afaik. on kubernetes you can use taints to keep
certain nodes outside of spark
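For the Kubernetes side, the taint approach is roughly this (a sketch; the
node name and taint key are made up):

```shell
# Taint the bad node; pods without a matching toleration -- including Spark
# driver and executor pods by default -- will no longer be scheduled there.
kubectl taint nodes badnode1 maintenance=true:NoSchedule

# Remove the taint once the node is fixed, so scheduling resumes.
kubectl taint nodes badnode1 maintenance=true:NoSchedule-
```

Unlike the YARN label whitelist, this is a true per-node exclusion: only the
bad node needs any configuration.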
> *From:* Serega Sheypak
> *Sent:* Friday, January 18, 2019 3:21 PM
> *To:* user
> *Subject:* Spark on Yarn, is it possible to manually blacklist nodes
> before running spark job?
>
> Hi, is there any possibility to tell Scheduler to blacklist specific nodes
> in advance?
>
Not as far as I recall...
From: Serega Sheypak
Sent: Friday, January 18, 2019 3:21 PM
To: user
Subject: Spark on Yarn, is it possible to manually blacklist nodes before
running spark job?
Hi, is there any possibility to tell Scheduler to blacklist specific nodes
in advance?