[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191735#comment-16191735 ]

Wangda Tan commented on YARN-6620:
----------------------------------

Thanks [~sunilg], all good points.

bq. ResourceInformation.GPU_URI: I think this does not have to be hard-coded. 
Other than memory-mb and vcores, all other resource names can be pulled from 
resource-types.xml. So is it fine to have the new resource names pulled from 
ResourceUtils itself?
I personally tend to make all first-class supported resource type names 
hard-coded. Part of the reason is that, on the NM side, many components need 
to answer the question "which resource type name is associated with this 
resource plugin?". If we don't have a hard-coded resource type for GPU, we 
have to define something like "gpu_resource_type_name=..." in yarn-site.xml, 
which I want to avoid, and I don't see any benefit in doing so. I don't expect 
any admin to use the "yarn.io" namespace to define resource types that are not 
first-class supported.
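
As a rough illustration of what hard-coding buys us (the constant name and 
value mirror ResourceInformation.GPU_URI, but the wrapper class below is only 
a sketch, not the actual patch):

{code:java}
// Sketch only: the real constant lives in
// org.apache.hadoop.yarn.api.records.ResourceInformation.
public final class ResourceNames {
  // "yarn.io" is the namespace reserved for first-class resource types,
  // so no "gpu_resource_type_name=..." entry is needed in yarn-site.xml.
  public static final String GPU_URI = "yarn.io/gpu";

  private ResourceNames() {
  }
}
{code}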

bq. In NM#serviceInit, ResourcePluginManager is always created. So do we need 
to have null checks in other places?
We have lots of UTs that use a mocked NMContext whose RPM is a null pointer. 
I found many unit test failures without the null checks. If you really think 
we should remove them, I can update the UTs as well.
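
For context, the defensive pattern in question looks roughly like this (the 
types below are simplified stand-ins for the NM's real ones, and the method 
names may not match the patch exactly):

{code:java}
import java.util.Map;

class NullCheckDemo {
  interface ResourcePlugin { }
  interface ResourcePluginManager {
    Map<String, ResourcePlugin> getNameToPlugins();
  }
  interface NMContext {
    ResourcePluginManager getResourcePluginManager(); // null in mocked UTs
  }

  static ResourcePlugin findGpuPlugin(NMContext ctx) {
    ResourcePluginManager rpm = ctx.getResourcePluginManager();
    if (rpm == null) {
      // a mocked NMContext returns null here; bail out gracefully
      return null;
    }
    return rpm.getNameToPlugins().get("yarn.io/gpu");
  }
}
{code}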

bq. In GPU#preStart(Container container), if the requested container doesn't 
have any GPU demand, do we need to proceed further?
Yes, we still need to proceed: we need to blacklist all GPUs if a container 
doesn't have a GPU request.
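
In cgroups terms, "blacklist" means writing deny rules for every GPU device. 
A rough sketch, assuming the cgroups v1 devices controller (major number 195 
is NVIDIA's; the hierarchy path below is illustrative, not the NM's actual 
layout):

{code:java}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DenyAllGpus {
  // Illustrative only: deny each NVIDIA character device (major 195) to a
  // container that requested no GPUs, so none of them are visible to it.
  static void denyAll(String containerId, int gpuCount) throws Exception {
    Path deny = Paths.get("/sys/fs/cgroup/devices/hadoop-yarn",
        containerId, "devices.deny");
    for (int minor = 0; minor < gpuCount; minor++) {
      Files.write(deny, ("c 195:" + minor + " rwm")
          .getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}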

bq. In case, in the future, an affinity constraint comes in for a given GPU 
device for a container, I guess we need a little more change to the 
GpuAllocation class. Could we have some dummy APIs defined now so that too 
much redesign is not needed later?
I would prefer to delay this. One of the reasons is that an additional 
affinity constraint might have to be handled by the RM instead of the NM. For 
example, an app requests 4 GPUs on the same host, and each host has 8 GPUs; 
only the RM has the global picture of which host has the 4 closest GPUs. So 
far we don't have a solution for this problem yet. Once we want to support 
such use cases, there are many other places that need to be updated beyond 
GpuAllocation.

bq. In GPU#bootstrap, we are returning null. Is it correct?
Yes. We have already done the cgroups mounting, so we don't need to do any 
other ops in bootstrap.

bq. In ResourcePluginManager, could we avoid synchronized and keep the map as 
a ConcurrentHashMap?
I personally prefer to keep it this way: RPM, like (almost all) other NM 
components, has no performance issues now, and I don't expect it to in the 
future. A synchronized lock enforces atomic semantics across all fields, not 
just the map. Unless we see performance issues in these components, I prefer 
the simplest and most straightforward synchronized lock.
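
To make the atomicity point concrete (the class and fields below are invented 
for illustration, not RPM's real internals): a ConcurrentHashMap only makes 
individual map operations safe, while a common lock keeps the map consistent 
with neighboring fields:

{code:java}
import java.util.HashMap;
import java.util.Map;

class PluginRegistry {
  interface ResourcePlugin { }

  private final Map<String, ResourcePlugin> plugins = new HashMap<>();
  private boolean initialized = false;

  // One lock covers both fields: a reader can never observe a registered
  // plugin while initialized is still false. A ConcurrentHashMap alone
  // cannot give this cross-field guarantee.
  synchronized void register(String name, ResourcePlugin p) {
    plugins.put(name, p);
    initialized = true;
  }

  synchronized ResourcePlugin get(String name) {
    return initialized ? plugins.get(name) : null;
  }
}
{code}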

bq. In GpuDiscover#initialize, along with file existence, we could also check 
the file permissions, owner, etc. to ensure that it can be accessed correctly.
We don't throw any exception during initialize, so it might not be useful to 
check this. I would prefer to let the script executor do this check when it 
runs commands.

bq. GpuDeviceInformationParser has too much XML dependency. Hadoop Common has 
some XML parser, correct? Could we use that?
At the beginning I directly used an XML parser, similar to how FairScheduler 
parses config files. Devaraj suggested using JAXB, and I think it makes sense 
because we can reuse all these objects once we want to support querying GPU 
information from the web UI.
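
A hedged sketch of why JAXB pays off here (the element names are invented, 
not the actual discovery-output schema, and this assumes a Java 8-era runtime 
where javax.xml.bind is on the classpath): the same annotated POJO that 
unmarshals the XML can later be served straight from a web endpoint:

{code:java}
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbGpuDemo {
  @XmlRootElement(name = "gpu")
  @XmlAccessorType(XmlAccessType.FIELD)
  static class GpuInfo {
    @XmlElement(name = "product_name")
    String productName;
    @XmlElement(name = "minor_number")
    int minorNumber;
  }

  public static void main(String[] args) throws Exception {
    String xml = "<gpu><product_name>SomeGpu</product_name>"
        + "<minor_number>0</minor_number></gpu>";
    GpuInfo g = (GpuInfo) JAXBContext.newInstance(GpuInfo.class)
        .createUnmarshaller().unmarshal(new StringReader(xml));
    System.out.println(g.productName + " #" + g.minorNumber);
  }
}
{code}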

Addressed 3/4/9.

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -------------------------------------------------------------------------------------
>
>                 Key: YARN-6620
>                 URL: https://issues.apache.org/jira/browse/YARN-6620
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch, YARN-6620.011.patch
>
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side)
> 3) NM restart and recovery of allocated GPU devices


