[ https://issues.apache.org/jira/browse/YARN-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917076#comment-16917076 ]

Eric Badger commented on YARN-9561:
-----------------------------------

bq. Do you mean C side? Java side does not have privileges to run modprobe or 
lsmod due to lack of root privileges.
I don't believe we need root privileges to run lsmod. It simply parses 
/proc/modules. On RHEL 7 this file is world-readable. So I think an {{lsmod | 
grep overlay}} would be sufficient.
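
Something like the following should work without root (a sketch; exact lsmod 
output varies, but any match means the module is loaded):

{noformat:title=Overlay module check (sketch)}
# lsmod just formats /proc/modules, which is world-readable on RHEL 7
lsmod | grep overlay

# equivalent check that skips lsmod entirely
grep -q '^overlay ' /proc/modules && echo "overlay module is loaded"
{noformat}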

bq. It took me several days to restore my cluster to a working state with 
overlay kernel module installed. In the latest patch 004, mapreduce pi job 
fails when trying to run mapreduce pi:
If you're failing in Java, then that means the overlay mounts all worked and 
runC was invoked correctly. That's fantastic news! We're very close.

bq. Do we need implicit mounting of Hadoop binaries to enable existing workload 
to run with runc? If not, what step can be used to run an example app?
I don't have any Hadoop jars in the image or bind-mounted into the image. 
Instead, I'm running with a Hadoop tarball stored in HDFS:

{noformat:title=mapred-site.xml}
  <property>
    <name>mapreduce.application.framework.path</name>
    <value>${fs.defaultFS}/user/ebadger/hadoop-3.3.0-SNAPSHOT.tar.gz#hadoop-mapreduce</value>
  </property>

  <property>
    <name>mapreduce.application.classpath</name>
    <value>./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/lib/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/hdfs/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/hdfs/lib/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/yarn/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/yarn/lib/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/mapreduce/*,./hadoop-mapreduce/hadoop-3.3.0-SNAPSHOT/share/hadoop/mapreduce/lib/*</value>
  </property>
{noformat}
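
Roughly, the steps to run an example job with that setup look like this (the 
upload path and example jar name are illustrative; adjust for your build):

{noformat:title=Running the pi example against the HDFS tarball (sketch)}
# upload the tarball referenced by mapreduce.application.framework.path
hdfs dfs -put hadoop-3.3.0-SNAPSHOT.tar.gz /user/ebadger/

# submit from the client's local install; the tasks localize the tarball
# as ./hadoop-mapreduce and pick up mapreduce.application.classpath
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0-SNAPSHOT.jar pi 10 100
{noformat}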

If you would like to bind-mount the Hadoop jars instead, you can add them to 
the default mount list via 
{{yarn.nodemanager.runtime.linux.docker.default-rw-mounts}} or 
{{yarn.nodemanager.runtime.linux.docker.default-ro-mounts}} (I don't think you 
should need them to be writable). You can choose where in the image you'd like 
them mounted and then set up your classpath to reflect where the jars are 
located; a read-only sketch follows the default mount list example below.

{noformat:title=Default Mount List Example}
  <property>
    <name>yarn.nodemanager.runtime.linux.docker.default-rw-mounts</name>
    <value>/var/run/nscd:/var/run/nscd</value>
  </property>
{noformat}
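
If you go that route, something along these lines would be a starting point 
(the host and in-image paths here are placeholders; point your classpath at 
wherever you mount the jars in the image):

{noformat:title=Read-only Hadoop jar mount (sketch)}
  <property>
    <name>yarn.nodemanager.runtime.linux.docker.default-ro-mounts</name>
    <value>/usr/lib/hadoop:/usr/lib/hadoop</value>
  </property>
{noformat}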


> Add C changes for the new RuncContainerRuntime
> ----------------------------------------------
>
>                 Key: YARN-9561
>                 URL: https://issues.apache.org/jira/browse/YARN-9561
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Eric Badger
>            Assignee: Eric Badger
>            Priority: Major
>         Attachments: YARN-9561.001.patch, YARN-9561.002.patch, 
> YARN-9561.003.patch, YARN-9561.004.patch
>
>
> This JIRA will be used to add the C changes to the container-executor native 
> binary that are necessary for the new RuncContainerRuntime. There should be 
> no changes to existing code paths. 


