Arun Suresh commented on YARN-2885:

Thank you for the review, [~leftnoteasy]!! Let me try to clarify your 
concerns.. [~kkaranasos], correct me if I'm wrong..

bq. I'm not sure if it is possibly that queueable resource requests could be 
also sent to RM with this implementation.
What we were aiming for is to not send any Queueable resource reqs to the RM; 
those are handled by the Local RM (the core functionality of which is now 
encapsulated in the DistSchedulerRequestInterceptor class). As [~sriramrao] had 
mentioned, we do plan to enforce policies around how the Distributed Scheduling 
is actually done on the NM. In the first cut (this JIRA), these policies, which 
WILL be pushed down from the RM, would be stuff like *Maximum resource 
capability of containers allocated* or *set of nodes on which to target 
Queueable containers*. These would be computed at the RM and sent back as part 
of the AllocateResponse and the RegisterResponse. The plan is to have the 
actual computation happen in the Coordinator running in the RM, which we plan 
to tackle in a separate JIRA.
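
To make the intended behavior concrete, here is a hypothetical, heavily 
simplified sketch of the request split described above. The type names 
(ResourceRequest, ExecutionType, etc.) are stand-ins for illustration only, not 
the real YARN classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: QUEUEABLE asks stay on the NM-local scheduler,
// GUARANTEED asks are the only ones forwarded to the central RM.
public class DistSchedulerSketch {
    enum ExecutionType { GUARANTEED, QUEUEABLE }

    record ResourceRequest(ExecutionType type, int memoryMb, int vcores) {}

    // What goes to the central RM vs. what the NM-local scheduler keeps.
    record Split(List<ResourceRequest> toRm, List<ResourceRequest> local) {}

    static Split split(List<ResourceRequest> asks) {
        List<ResourceRequest> toRm = new ArrayList<>();
        List<ResourceRequest> local = new ArrayList<>();
        for (ResourceRequest r : asks) {
            // Queueable asks never leave the node in this design.
            (r.type() == ExecutionType.QUEUEABLE ? local : toRm).add(r);
        }
        return new Split(toRm, local);
    }

    public static void main(String[] args) {
        Split s = split(List.of(
            new ResourceRequest(ExecutionType.GUARANTEED, 1024, 1),
            new ResourceRequest(ExecutionType.QUEUEABLE, 512, 1)));
        System.out.println(s.toRm().size() + " to RM, " + s.local().size() + " kept local");
    }
}
```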

bq.  I'm not quite sure why isDistributedSchedulingEnabled is required for AM's 
AllocateRequest and RegisterRequest
I totally agree that the AM should not be bothered with this.. But if you 
notice, it is actually not set by the AM; it is set by the 
DistSchedulerRequestInterceptor when it proxies the AM calls. To further your 
point, I am also not really happy with putting stuff in the Allocate/Register 
response that can be seen by the AM but is only relevant to the DistScheduler 
framework. I was thinking of the following alternatives:
# Creating a Wrapper Protocol (Distributed Scheduling AM Protocol) over the AM 
protocol, which basically wraps each request/response with additional info 
that will be seen only by the DistScheduler running on the NM
# Having a Distributed Scheduler AM Service running on the RM if DS is enabled. 
This will implement the new protocol (it will delegate all the AMProtocol stuff 
to the AMService and will handle DistScheduler-specific stuff)
# Instead of having the DSReqInterceptor at the beginning of the AMRMProxy 
pipeline, adding it to the end (or replacing the DefaultReqInterceptor) and 
having it talk the new DistSchedulerAMProtocol (which wraps the 
Allocate/Register requests with the extra DS stuff)
What do you think? Will take a crack at this in the next patch.
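
To illustrate what I mean by options 1 and 2 above, here is a rough sketch of 
how the wrapped messages and the delegating service could fit together. All 
type names here are made up for illustration, not the real YARN protocol 
records:

```java
import java.util.List;
import java.util.function.Function;

// Illustrative sketch: the wrapper protocol carries the DS-specific payload
// alongside the plain AM messages, so the AM-visible types stay untouched.
public class WrapperProtocolSketch {
    record AllocateRequest(List<String> asks) {}
    record AllocateResponse(List<String> allocated) {}

    // Policy info pushed down from the RM, visible only to the NM-side interceptor.
    record DistSchedPayload(int maxContainerMemMb, List<String> targetNodes) {}

    record DistSchedAllocateRequest(AllocateRequest inner) {}
    record DistSchedAllocateResponse(AllocateResponse inner, DistSchedPayload payload) {}

    // The DS AM service delegates the plain allocate to the existing AM
    // service and attaches the DS payload on the way back.
    static DistSchedAllocateResponse allocate(
            DistSchedAllocateRequest req,
            Function<AllocateRequest, AllocateResponse> amService,
            DistSchedPayload policy) {
        return new DistSchedAllocateResponse(amService.apply(req.inner()), policy);
    }

    public static void main(String[] args) {
        DistSchedPayload policy = new DistSchedPayload(8192, List.of("nm-1", "nm-2"));
        DistSchedAllocateResponse resp = allocate(
            new DistSchedAllocateRequest(new AllocateRequest(List.of("ask-1"))),
            r -> new AllocateResponse(List.of("container-1")),   // stub AM service
            policy);
        System.out.println(resp.payload().targetNodes());
    }
}
```

The point of the sketch is that the AM-facing request/response types are 
untouched; only the NM-side interceptor and the DS AM service ever see the 
payload.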

Regarding #3, I just wanted a conf to specify that Dist Scheduling has been 
'turned on'.. which, if set to false, will revert to the default behavior of 
sending even the Queueable reqs to the RM.
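
Something along these lines in yarn-site.xml; the property name below is made 
up for illustration and is not necessarily the final key the patch introduces:

```xml
<!-- Hypothetical toggle; key name is illustrative, not final. -->
<property>
  <name>yarn.nodemanager.distributed-scheduling.enabled</name>
  <value>true</value>
  <description>If false, even Queueable requests are forwarded
  to the central RM as usual.</description>
</property>
```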

I think most of #4 will be taken care of if we create a Wrapper protocol as I 
mentioned earlier..
.. w.r.t getContainerIdStart: technically, the containerId for each app starts 
from the RM epoch.. which is what I wanted to pass on to the NM..
.. agreed, will change the name of getNodeList
.. w.r.t containerTokenExpiryInterval: this gets sent from the RM and signifies 
the token expiry for allocated Queueable containers.. don't think it will vary 
per NM
.. w.r.t getMin/MaxAllocatableCapability: we wanted this to be something that 
is specific to the Queueable containers and which is policy driven (or decided 
by the Dist coordinator).. I agree, we can change its name.
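
For the getContainerIdStart point, the idea is roughly the following (names 
are illustrative): the RM pushes a starting value down to the NM, and locally 
assigned container IDs count up from there, so they cannot collide with IDs 
the central RM hands out:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: NM-local container-ID allocation starting from an
// RM-provided value (e.g. derived from the RM epoch).
public class LocalContainerIdAllocator {
    private final AtomicLong next;

    public LocalContainerIdAllocator(long containerIdStart) {
        this.next = new AtomicLong(containerIdStart);
    }

    // Thread-safe: each call returns a fresh, monotonically increasing ID.
    public long newContainerId() {
        return next.getAndIncrement();
    }

    public static void main(String[] args) {
        LocalContainerIdAllocator ids = new LocalContainerIdAllocator(1_000_000L);
        System.out.println(ids.newContainerId()); // 1000000
        System.out.println(ids.newContainerId()); // 1000001
    }
}
```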

Regarding #5, agreed, will make the changes to public APIs in a separate JIRA.

Hope this makes sense?

> Create AMRMProxy request interceptor for distributed scheduling decisions for 
> queueable containers
> --------------------------------------------------------------------------------------------------
>                 Key: YARN-2885
>                 URL: https://issues.apache.org/jira/browse/YARN-2885
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager, resourcemanager
>            Reporter: Konstantinos Karanasos
>            Assignee: Arun Suresh
>         Attachments: YARN-2885-yarn-2877.001.patch
> We propose to add a Local ResourceManager (LocalRM) to the NM in order to 
> support distributed scheduling decisions. 
> Architecturally we leverage the RMProxy, introduced in YARN-2884. 
> The LocalRM makes distributed decisions for queuable containers requests. 
> Guaranteed-start requests are still handled by the central RM.
