[jira] [Created] (YARN-8638) Allow linux container runtimes to be pluggable
Craig Condit created YARN-8638:
--
Summary: Allow linux container runtimes to be pluggable
Key: YARN-8638
URL: https://issues.apache.org/jira/browse/YARN-8638
Project: Hadoop YARN
Issue Type: Improvement
Components: nodemanager
Affects Versions: 3.2.0
Reporter: Craig Condit

YARN currently supports three different Linux container runtimes (default, docker, and javasandbox). However, it would be relatively straightforward to support arbitrary runtime implementations. This would enable easier experimentation with new and emerging runtime technologies (runc, containerd, etc.) without requiring a rebuild and redeployment of Hadoop.

This could be accomplished via a simple configuration change:

{code:xml}
<property>
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>default,docker,experimental</value>
</property>
<property>
  <name>yarn.nodemanager.runtime.linux.experimental.class</name>
  <value>com.somecompany.yarn.runtime.ExperimentalLinuxContainerRuntime</value>
</property>
{code}

In this example, {{yarn.nodemanager.runtime.linux.allowed-runtimes}} would now allow arbitrary values. Additionally, {{yarn.nodemanager.runtime.linux.\{RUNTIME_KEY}.class}} would indicate the {{LinuxContainerRuntime}} implementation to instantiate. A no-argument constructor should be sufficient, as {{LinuxContainerRuntime}} already provides an {{initialize()}} method.

{{DockerLinuxContainerRuntime.isDockerContainerRequested(Map env)}} and {{JavaSandboxLinuxContainerRuntime.isSandboxContainerRequested()}} could be generalized to {{isRuntimeRequested(Map env)}} and added to the {{LinuxContainerRuntime}} interface. This would allow {{DelegatingLinuxContainerRuntime}} to select an appropriate runtime based on whether that runtime claimed ownership of the current container execution.

For backwards compatibility, the existing values (default, docker, javasandbox) would continue to be supported as-is. Under the current logic, the evaluation order is javasandbox, docker, default (with default being chosen if no other candidates are available).
Under the new evaluation logic, pluggable runtimes would be evaluated after docker and before default, in the order in which they are defined in the allowed-runtimes list. This changes no behavior on current clusters (as there would be no pluggable runtimes defined), and preserves behavior with respect to the ordering of existing runtimes.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
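The selection logic proposed above can be sketched as follows. This is a hedged illustration, not the actual Hadoop implementation: the real {{LinuxContainerRuntime}} interface has more methods, and the class {{DelegatingRuntimeSketch}} is a simplified stand-in for {{DelegatingLinuxContainerRuntime}}.

```java
import java.util.List;
import java.util.Map;

// Simplified sketch of the proposed interface; the real
// LinuxContainerRuntime has additional lifecycle methods.
interface LinuxContainerRuntime {
    void initialize();
    boolean isRuntimeRequested(Map<String, String> env);
}

// Illustrative stand-in for DelegatingLinuxContainerRuntime: candidate
// runtimes are consulted in allowed-runtimes order; the first runtime
// that claims the container wins, falling back to the default runtime.
class DelegatingRuntimeSketch {
    private final List<LinuxContainerRuntime> candidates; // evaluation order
    private final LinuxContainerRuntime defaultRuntime;

    DelegatingRuntimeSketch(List<LinuxContainerRuntime> candidates,
                            LinuxContainerRuntime defaultRuntime) {
        this.candidates = candidates;
        this.defaultRuntime = defaultRuntime;
    }

    LinuxContainerRuntime pickRuntime(Map<String, String> env) {
        for (LinuxContainerRuntime runtime : candidates) {
            if (runtime.isRuntimeRequested(env)) {
                return runtime;
            }
        }
        return defaultRuntime; // chosen when no other candidate claims it
    }
}
```

Under this shape, backwards compatibility falls out naturally: with no pluggable runtimes configured, the candidate list contains only the built-in runtimes in their existing order.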
Re: [VOTE] Release Apache Hadoop 3.1.1 - RC0
Including these additional votes, we got 5 binding votes and 12 non-binding votes; many others, like Bibin/Akhil, gave offline suggestions and testing results. Thanks everyone for your help!

I've done most of the staging/pushing work for the 3.1.1 release. Given that Apache mirrors will take some time to finish syncing, I plan to send out the release announcement by Thu noon PDT. If there is anything you want to highlight in the announcement of 3.1.1, please let me know by Thu 10 AM PDT.

Thanks again for your help with the release!

Best,
Wangda

On Tue, Aug 7, 2018 at 7:57 PM Chandni Singh wrote:
> Thanks Wangda!
>
> +1 (non-binding)
>
> Tested the following:
> - Built from source and ran a single node cluster
> - Ran the example pi job
> - Launched yarn service sleep example
> - Verified upgrade of yarn service sleep
>
> Thanks,
> Chandni
>
> On Tue, Aug 7, 2018 at 7:02 PM Suma Shivaprasad <sumasai.shivapra...@gmail.com> wrote:
>> Thanks Wangda!
>>
>> +1 (non-binding)
>>
>> Tested the following:
>> - Built from source
>> - Setup single node cluster
>> - Tested Dynamic queues
>> - Tested MR and DS with default, docker runtime
>> - Tested Yarn Services with various restart policies
>>
>> Thanks
>> Suma
>>
>> On Tue, Aug 7, 2018 at 2:45 PM Eric Payne wrote:
>> > Thanks Wangda for creating this release.
>> >
>> > +1 (binding)
>> > Tested:
>> > - Built from source
>> > - Deployed to 6-node, multi-tenant, unsecured pseudo cluster with hierarchical queue structure (CS)
>> > - Refreshed queue (CS) properties
>> > - Intra-queue preemption (CS)
>> > - Inter-queue preemption (CS)
>> > - User weights (CS)
>> >
>> > Issues:
>> > - Inter-queue preemption seems to be preempting unnecessarily (flapping) when the queue balancing feature is enabled. This does not seem to be specific to this release.
>> > - The preemption-to-balance-queue-after-satisfied.enabled property seems to always be enabled, but again, that is not specific to this release.
>> >
>> > Eric
>> >
>> > On Thursday, August 2, 2018, 1:44:22 PM CDT, Wangda Tan <wheele...@gmail.com> wrote:
>> >
>> > Hi folks,
>> >
>> > I've created RC0 for Apache Hadoop 3.1.1. The artifacts are available here:
>> > http://people.apache.org/~wangda/hadoop-3.1.1-RC0/
>> >
>> > The RC tag in git is release-3.1.1-RC0:
>> > https://github.com/apache/hadoop/commits/release-3.1.1-RC0
>> >
>> > The maven artifacts are available via repository.apache.org at
>> > https://repository.apache.org/content/repositories/orgapachehadoop-1139/
>> >
>> > You can find my public key at
>> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>> >
>> > This vote will run 5 days from now.
>> >
>> > 3.1.1 contains 435 [1] fixed JIRA issues since 3.1.0.
>> >
>> > I have done testing with a pseudo cluster and distributed shell job. My +1 to start.
>> >
>> > Best,
>> > Wangda Tan
>> >
>> > [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.1) ORDER BY priority DESC
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/552/

[Aug 7, 2018 5:15:28 PM] (virajith) HDFS-13796. Allow verbosity of InMemoryLevelDBAliasMapServer to be
[Aug 7, 2018 7:36:55 PM] (wangda) YARN-8629. Container cleanup fails while trying to delete Cgroups. (Suma
[Aug 7, 2018 7:37:32 PM] (wangda) YARN-7089. Mark the log-aggregation-controller APIs as public. (Zian
[Aug 7, 2018 8:01:13 PM] (wangda) YARN-8407. Container launch exception in AM log should be printed in
[Aug 7, 2018 10:33:16 PM] (gifuma) YARN-8626. Create HomePolicyManager that sends all the requests to the
[Aug 7, 2018 11:13:41 PM] (xiao) HDFS-13799. TestEditLogTailer#testTriggersLogRollsForAllStandbyNN fails
[Aug 7, 2018 11:40:33 PM] (aengineer) HDDS-124. Validate all required configs needed for ozone-site.xml and
[Aug 8, 2018 2:13:09 AM] (mackrorysd) HADOOP-15400. Improve S3Guard documentation on Authoritative Mode
[Aug 8, 2018 2:23:17 AM] (vinayakumarb) HDFS-13786. EC: Display erasure coding policy for sub-directories is not
[Aug 8, 2018 5:05:17 AM] (xiao) HDFS-13728. Disk Balancer should not fail if volume usage is greater
[Aug 8, 2018 7:12:20 AM] (vinayakumarb) HDFS-13785. EC: 'removePolicy' is not working for built-in/system
[Aug 8, 2018 11:50:23 AM] (ewan.higgs) HADOOP-15576. S3A Multipart Uploader to work with S3Guard and encryption

-1 overall

The following subsystems voted -1:
    compile mvninstall pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
    unit

Specific tests:

    Failed junit tests :
        hadoop.crypto.key.kms.server.TestKMS
        hadoop.cli.TestAclCLI
        hadoop.cli.TestAclCLIWithPosixAclInheritance
        hadoop.cli.TestCacheAdminCLI
        hadoop.cli.TestCryptoAdminCLI
        hadoop.cli.TestDeleteCLI
        hadoop.cli.TestErasureCodingCLI
        hadoop.cli.TestHDFSCLI
        hadoop.cli.TestXAttrCLI
        hadoop.fs.contract.hdfs.TestHDFSContractAppend
        hadoop.fs.contract.hdfs.TestHDFSContractConcat
        hadoop.fs.contract.hdfs.TestHDFSContractCreate
        hadoop.fs.contract.hdfs.TestHDFSContractDelete
        hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
        hadoop.fs.contract.hdfs.TestHDFSContractMkdir
        hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader
        hadoop.fs.contract.hdfs.TestHDFSContractOpen
        hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
        hadoop.fs.contract.hdfs.TestHDFSContractRename
        hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
        hadoop.fs.contract.hdfs.TestHDFSContractSeek
        hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
        hadoop.fs.loadGenerator.TestLoadGenerator
        hadoop.fs.permission.TestStickyBit
        hadoop.fs.shell.TestHdfsTextCommand
        hadoop.fs.TestEnhancedByteBufferAccess
        hadoop.fs.TestFcHdfsCreateMkdir
        hadoop.fs.TestFcHdfsPermission
        hadoop.fs.TestFcHdfsSetUMask
        hadoop.fs.TestGlobPaths
        hadoop.fs.TestHDFSFileContextMainOperations
        hadoop.fs.TestHdfsNativeCodeLoader
        hadoop.fs.TestResolveHdfsSymlink
        hadoop.fs.TestSWebHdfsFileContextMainOperations
        hadoop.fs.TestSymlinkHdfsDisable
        hadoop.fs.TestSymlinkHdfsFileContext
        hadoop.fs.TestSymlinkHdfsFileSystem
        hadoop.fs.TestUnbuffer
        hadoop.fs.TestUrlStreamHandler
        hadoop.fs.TestWebHdfsFileContextMainOperations
        hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
        hadoop.fs.viewfs.TestViewFileSystemHdfs
        hadoop.fs.viewfs.TestViewFileSystemLinkFallback
        hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash
        hadoop.fs.viewfs.TestViewFileSystemWithAcls
        hadoop.fs.viewfs.TestViewFileSystemWithTruncate
        hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
        hadoop.fs.viewfs.TestViewFsAtHdfsRoot
        hadoop.fs.viewfs.TestViewFsDefaultValue
        hadoop.fs.viewfs.TestViewFsFileStatusHdfs
        hadoop.fs.viewfs.TestViewFsHdfs
        hadoop.fs.viewfs.TestViewFsWithAcls
        hadoop.fs.viewfs.TestViewFsWithXAttrs
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy
        hadoop.hdfs.client.impl.TestBlockReaderRemote
        hadoop.hdfs.client.impl.TestClientBlockVerification
        hadoop.hdfs.crypto.TestHdfsCryptoStreams
        hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
        hadoop.hdfs.qjournal.client.TestEpochsAreUnique
        hadoop.hdfs.qjournal.client.TestQJMWithFaults
        hadoop.hdfs.qjournal.client.TestQuorumJournalManager
        hadoop.hdfs.qjournal.server.TestJournal
        hadoop.hdfs.qjournal.server.TestJournalNode
        hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.qjournal.server.TestJournalNodeSync
Re: [DISCUSS] Alpha Release of Ozone
Thanks for reporting this issue. I have filed a JIRA to address it: https://issues.apache.org/jira/browse/HDDS-341

> So, consider this as a report. IMHO, cutting an Ozone release prior to a Hadoop release is ill-advised given the distribution impact and the requirements of the merge vote.

The Ozone release is being planned to address issues like these; in my mind, if we go through a release exercise, we will be able to identify all Ozone- and Hadoop-related build and release issues. Ozone will benefit tremendously from a release exercise and the community review that comes with it.

Thanks
Anu

On 8/8/18, 1:19 PM, "Allen Wittenauer" wrote:

> On Aug 8, 2018, at 12:56 PM, Anu Engineer wrote:
>
>> Has anyone verified that a Hadoop release doesn't have _any_ of the extra ozone bits that are sprinkled outside the maven modules?
>
> As far as I know that is the state; we have had multiple Hadoop releases after Ozone was merged. So far no one has reported Ozone bits leaking into Hadoop. If we find something like that, it would be a bug.

There hasn't been a release from a branch where Ozone has been merged yet. The first one will be 3.2.0. Running create-release off of trunk presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in the Hadoop source tarball. So, consider this as a report. IMHO, cutting an Ozone release prior to a Hadoop release is ill-advised given the distribution impact and the requirements of the merge vote.
[jira] [Created] (YARN-8637) Add FederationStateStore getAppInfo API for GlobalPolicyGenerator
Botong Huang created YARN-8637:
--
Summary: Add FederationStateStore getAppInfo API for GlobalPolicyGenerator
Key: YARN-8637
URL: https://issues.apache.org/jira/browse/YARN-8637
Project: Hadoop YARN
Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang

The core API for FederationStateStore is provided in _FederationStateStore_. In this patch, we are adding a _FederationGPGStateStore_ API just for GPG. Specifically, we are adding an API to get full application info from the state store along with the starting timestamp of each app entry, so that the _ApplicationCleaner_ (YARN-7599) in GPG can delete and clean up old entries in the table.
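The cleanup described above amounts to an age-based filter over state-store rows. A hypothetical sketch of that logic follows; the class and method names here are illustrative only, and the real API surface is defined by the YARN-8637 patch.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch only: names are illustrative, not the patch API.
class AppEntry {
    final String appId;
    final long insertTimeMillis; // when the row was written to the state store

    AppEntry(String appId, long insertTimeMillis) {
        this.appId = appId;
        this.insertTimeMillis = insertTimeMillis;
    }
}

class ApplicationCleanerSketch {
    // Select entries older than maxAgeMillis; a cleaner in GPG would
    // then delete these rows from the state-store table.
    static List<AppEntry> selectExpired(List<AppEntry> entries,
                                        long nowMillis, long maxAgeMillis) {
        return entries.stream()
                .filter(e -> nowMillis - e.insertTimeMillis > maxAgeMillis)
                .collect(Collectors.toList());
    }
}
```

This is why the proposed getAppInfo API needs to return the insertion timestamp alongside each application: without it, the cleaner has no way to decide which rows are stale.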
Re: [DISCUSS] Alpha Release of Ozone
> On Aug 8, 2018, at 12:56 PM, Anu Engineer wrote:
>
>> Has anyone verified that a Hadoop release doesn't have _any_ of the extra ozone bits that are sprinkled outside the maven modules?
>
> As far as I know that is the state; we have had multiple Hadoop releases after Ozone was merged. So far no one has reported Ozone bits leaking into Hadoop. If we find something like that, it would be a bug.

There hasn't been a release from a branch where Ozone has been merged yet. The first one will be 3.2.0. Running create-release off of trunk presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in the Hadoop source tarball. So, consider this as a report. IMHO, cutting an Ozone release prior to a Hadoop release is ill-advised given the distribution impact and the requirements of the merge vote.
Re: [DISCUSS] Alpha Release of Ozone
> Given that there are some Ozone components spread out past the core maven modules, is the plan to release a Hadoop Trunk + Ozone tar ball or is more work going to go into segregating the Ozone components prior to release?

The official release will be a source tarball; we intend to release Ozone-only binaries to make it easy for people to deploy. We are still formulating the plans, and you are welcome to leave your comments on HDDS-214.

> Has anyone verified that a Hadoop release doesn't have _any_ of the extra ozone bits that are sprinkled outside the maven modules?

As far as I know that is the state; we have had multiple Hadoop releases after Ozone was merged. So far no one has reported Ozone bits leaking into Hadoop. If we find something like that, it would be a bug.

Thanks
Anu

On 8/8/18, 12:04 PM, "Allen Wittenauer" wrote:

Given that there are some Ozone components spread out past the core maven modules, is the plan to release a Hadoop Trunk + Ozone tar ball or is more work going to go into segregating the Ozone components prior to release?

Has anyone verified that a Hadoop release doesn't have _any_ of the extra ozone bits that are sprinkled outside the maven modules?

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: [DISCUSS] Alpha Release of Ozone
Given that there are some Ozone components spread out past the core maven modules, is the plan to release a Hadoop Trunk + Ozone tar ball or is more work going to go into segregating the Ozone components prior to release?

Has anyone verified that a Hadoop release doesn't have _any_ of the extra ozone bits that are sprinkled outside the maven modules?
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/

[Aug 7, 2018 9:33:14 AM] (msingh) HDDS-230. ContainerStateMachine should implement readStateMachineData
[Aug 7, 2018 10:39:53 AM] (msingh) HDDS-301. ozone command shell does not contain subcommand to run ozoneFS
[Aug 7, 2018 5:15:28 PM] (virajith) HDFS-13796. Allow verbosity of InMemoryLevelDBAliasMapServer to be
[Aug 7, 2018 7:36:55 PM] (wangda) YARN-8629. Container cleanup fails while trying to delete Cgroups. (Suma
[Aug 7, 2018 7:37:32 PM] (wangda) YARN-7089. Mark the log-aggregation-controller APIs as public. (Zian
[Aug 7, 2018 8:01:13 PM] (wangda) YARN-8407. Container launch exception in AM log should be printed in
[Aug 7, 2018 10:33:16 PM] (gifuma) YARN-8626. Create HomePolicyManager that sends all the requests to the
[Aug 7, 2018 11:13:41 PM] (xiao) HDFS-13799. TestEditLogTailer#testTriggersLogRollsForAllStandbyNN fails
[Aug 7, 2018 11:40:33 PM] (aengineer) HDDS-124. Validate all required configs needed for ozone-site.xml and

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.client.TestApplicationMasterServiceProtocolForTimelineV2
        hadoop.mapred.TestMRTimelineEventHandling

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-compile-javac-root.txt [332K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-checkstyle-root.txt [4.0K]

    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/pathlen.txt [12K]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-patch-pylint.txt [24K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-patch-shelldocs.txt [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/whitespace-tabs.txt [1.1M]

    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/xml.txt [4.0K]

    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-hdds_client.txt [60K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [56K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_client.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_common.txt [28K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [4.0K]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/diff-javadoc-javadoc-root.txt [760K]

    CTEST:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/862/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ct
[jira] [Created] (YARN-8636) TimelineSchemaCreator command not working
Bibin A Chundatt created YARN-8636:
--
Summary: TimelineSchemaCreator command not working
Key: YARN-8636
URL: https://issues.apache.org/jira/browse/YARN-8636
Project: Hadoop YARN
Issue Type: Bug
Reporter: Bibin A Chundatt

{code:java}
nodemanager/bin> ./hadoop org.apache.yarn.timelineservice.storage.TimelineSchemaCreator -create
Error: Could not find or load main class org.apache.yarn.timelineservice.storage.TimelineSchemaCreator
{code}

share/hadoop/yarn/timelineservice/ is not part of the classpath.
[jira] [Created] (YARN-8635) Container fails to start if umask is 077
Bibin A Chundatt created YARN-8635:
--
Summary: Container fails to start if umask is 077
Key: YARN-8635
URL: https://issues.apache.org/jira/browse/YARN-8635
Project: Hadoop YARN
Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Bibin A Chundatt

{code}
java.io.IOException: Application application_1533652359071_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is mapred
main : requested yarn user is mapred
Path /opt/HA/OSBR310/nmlocal/usercache/mapred/appcache/application_1533652359071_0001 has permission 700 but needs permission 750.
Did not create any app directories
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:411)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1229)
Caused by: org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=255:
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:402)
	... 1 more
Caused by: ExitCodeException exitCode=255:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
	at org.apache.hadoop.util.Shell.run(Shell.java:902)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
	... 2 more
2018-08-08 17:43:26,918 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e04_1533652359071_0001_01_27 transitioned from LOCALIZING to LOCALIZATION_FAILED
2018-08-08 17:43:26,916 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code from container container_e04_1533652359071_0001_01_31 startLocalizer is : 255
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=255:
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:402)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1229)
Caused by: ExitCodeException exitCode=255:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
	at org.apache.hadoop.util.Shell.run(Shell.java:902)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
	... 2 more
2018-08-08 17:43:26,923 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed for containe
{code}
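The 700-vs-750 mismatch in the log follows directly from umask arithmetic: a directory is created with mode 0777 masked by the process umask, so umask 077 yields 700 while the localizer's check requires at least 750. A small illustrative sketch (not Hadoop code):

```java
// Illustrative only: shows why umask 077 produces the 700-vs-750
// mismatch reported above. Not part of the Hadoop codebase.
class UmaskDemo {
    // Effective mode of a newly created directory: the base creation
    // mode with the umask bits cleared.
    static int effectiveMode(int baseMode, int umask) {
        return baseMode & ~umask;
    }

    public static void main(String[] args) {
        System.out.printf("umask 077 -> %o%n", UmaskDemo.effectiveMode(0777, 0077)); // 700: fails the check
        System.out.printf("umask 027 -> %o%n", UmaskDemo.effectiveMode(0777, 0027)); // 750: satisfies it
    }
}
```

In other words, a umask of 027 (or looser) on the NodeManager host would produce app-cache directories that satisfy the 750 requirement.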