[jira] [Created] (HADOOP-14865) Mvnsite fails to execute macro defined in the document HDFSErasureCoding.md
SammiChen created HADOOP-14865:
-----------------------------------

             Summary: Mvnsite fails to execute macro defined in the document HDFSErasureCoding.md
                 Key: HADOOP-14865
                 URL: https://issues.apache.org/jira/browse/HADOOP-14865
             Project: Hadoop Common
          Issue Type: Bug
          Components: build
            Reporter: SammiChen

{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project hadoop-hdfs: Error parsing '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md': line [-1] Error parsing the model: Unable to execute macro in the document: toc -> [Help 1]
{noformat}


--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
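The failing macro is Doxia's table-of-contents macro. In maven-site Markdown pages it is normally invoked with an HTML-comment block of the following form (illustrative; the exact depth options used in HDFSErasureCoding.md may differ):

```
<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
```

If the site plugin's markdown parser cannot resolve this macro, it fails with exactly the "Unable to execute macro in the document: toc" error quoted above.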
[jira] [Reopened] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen reopened HADOOP-14521:
--------------------------------

> KMS client needs retry logic
> ----------------------------
>
>                 Key: HADOOP-14521
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14521
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.6.0
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>         Attachments: HADOOP-14521.09.patch, HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
> The KMS client appears to have no retry logic at all; it is completely decoupled from the IPC retry logic. This has major impact if the KMS is unreachable for any reason, including but not limited to network connection issues, timeouts, or a restart during an upgrade.
> This has some major ramifications:
> # Jobs may fail to submit, although Oozie's resubmit logic should mask it.
> # Non-Oozie launchers may see higher failure rates if they do not already have retry logic.
> # Tasks reading EZ files will fail, probably masked by framework reattempts.
> # EZ file creation fails after creating a 0-length file: the client receives the EDEK in the create response, then fails when decrypting the EDEK.
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail.
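The issue asks for retry logic around KMS client calls. The sketch below is illustrative only, not the actual HADOOP-14521 patch: it shows the kind of exponential-backoff retry wrapper being requested. The class name, method name, and policy values are all hypothetical.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

/**
 * Illustrative sketch only -- not the actual HADOOP-14521 patch. Shows the
 * kind of exponential-backoff retry wrapper the issue asks for around KMS
 * client calls; names and policy values are hypothetical.
 */
public class RetryingCall {
  public static <T> T callWithRetries(Callable<T> call, int maxRetries,
      long initialBackoffMs) throws Exception {
    long backoffMs = initialBackoffMs;
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return call.call();
      } catch (IOException e) {          // retry only on I/O failures
        last = e;
        if (attempt == maxRetries) {
          break;                         // out of attempts
        }
        Thread.sleep(backoffMs);         // back off before the next attempt
        backoffMs *= 2;                  // exponential backoff
      }
    }
    throw last;                          // all attempts failed
  }
}
```

A real implementation would plug into Hadoop's existing IPC retry policies rather than a hand-rolled loop, which is the gap the issue points out.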
[jira] [Created] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
John Zhuge created HADOOP-14864:
-----------------------------------

             Summary: FSDataInputStream#unbuffer UOE exception should print the stream class name
                 Key: HADOOP-14864
                 URL: https://issues.apache.org/jira/browse/HADOOP-14864
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs
    Affects Versions: 2.6.4
            Reporter: John Zhuge
            Priority: Minor

The current exception message:
{noformat}
org/apache/hadoop/fs/ failed: error: UnsupportedOperationException: this stream does not support unbuffering.
java.lang.UnsupportedOperationException: this stream does not support unbuffering.
	at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233)
{noformat}
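The requested improvement is small: include the concrete stream class in the exception message so the user can tell which wrapped stream rejected the call. A hedged sketch of the idea (not the committed fix; the actual wording and surrounding code in FSDataInputStream differ):

```java
/**
 * Illustrative sketch, not the committed HADOOP-14864 change: include the
 * concrete stream class name in the UnsupportedOperationException so users
 * can tell which wrapped stream rejected unbuffer().
 */
public class UnbufferExample {
  private final Object wrappedStream;  // stands in for the wrapped InputStream

  public UnbufferExample(Object wrappedStream) {
    this.wrappedStream = wrappedStream;
  }

  public void unbuffer() {
    // A real FSDataInputStream would first check whether the wrapped stream
    // supports unbuffering; here we always fail to show the improved message.
    throw new UnsupportedOperationException("this stream "
        + wrappedStream.getClass().getName()
        + " does not support unbuffering.");
  }
}
```

With the class name in the message, the stack trace above would immediately identify which stream implementation lacks unbuffer support.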
[jira] [Created] (HADOOP-14863) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
Varun Saxena created HADOOP-14863:
-------------------------------------

             Summary: branch-2 native compilation broken in hadoop-yarn-server-nodemanager
                 Key: HADOOP-14863
                 URL: https://issues.apache.org/jira/browse/HADOOP-14863
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Varun Saxena

{noformat}
[WARNING] make[2]: Leaving directory `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
[WARNING] make[1]: Leaving directory `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
[WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c: In function ‘all_numbers’:
[WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
[WARNING]    for (int i = 0; i < strlen(input); i++) {
[WARNING]    ^
[WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: note: use option -std=c99 or -std=gnu99 to compile your code
[WARNING] make[2]: *** [CMakeFiles/container.dir/main/native/container-executor/impl/utils/string-utils.c.o] Error 1
[WARNING] make[2]: *** Waiting for unfinished jobs
[WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c: In function ‘tokenize_docker_command’:
[WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1193:7: warning: unused variable ‘c’ [-Wunused-variable]
[WARNING]    int c = 0;
[WARNING]    ^
[WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
[WARNING] make: *** [all] Error 2
{noformat}
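The compiler's note points at two possible fixes: pass -std=c99/-std=gnu99 in the build flags, or hoist the loop variable declaration out of the `for` statement so the file also compiles in C89 mode. A minimal sketch of the second option, mirroring the `all_numbers` function named in the log (the body is reconstructed for illustration, not copied from string-utils.c):

```c
#include <ctype.h>
#include <string.h>

/* Illustrative reconstruction of all_numbers from string-utils.c:
 * declaring 'i' before the loop avoids the C99-only
 * "'for' loop initial declarations" error on older GCC defaults. */
static int all_numbers(const char *input) {
  size_t i;  /* hoisted out of the 'for' statement for C89 compatibility */
  for (i = 0; i < strlen(input); i++) {
    if (!isdigit((unsigned char)input[i])) {
      return 0;
    }
  }
  return 1;
}
```

Alternatively, adding -std=gnu99 to the CMake C flags keeps the original `for (int i = 0; ...)` form compiling on the older branch-2 toolchain.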
Patch testing on Windows [BETA]
I’m a little hesitant to share this because it’s really Not Quite Ready for primetime, but I figured others might want to play with it early anyway.

https://builds.apache.org/view/H-L/view/Hadoop/job/Precommit-hadoop-win/

will let you test patches on Windows. It does have some big caveats though:

* It will NOT update the JIRA. You’ll need to go back and check on the results later.
* It pre-applies two other patches to the source tree:
  * HADOOP-14667 modifies how Visual Studio is used. This patch still needs some cleanup in order to work in a way compatible with precommit.
  * HADOOP-14696 changes how the parallel directories are created during unit tests. Steve had some problems with it that I haven’t been able to replicate. I’ll likely just un-optimize the changes at some point.
* It currently only runs on the windows-2012-2 node. We just need INFRA-15010 to be completed on the other nodes.
* It’s running a slightly modified version of Hadoop’s Apache Yetus personality (see YETUS-545).
* It’s using a shared Maven cache, so there is a risk of missing/corrupted classes like we used to have on the Linux test boxes two years ago. This is an easy fix; I just haven’t gotten around to it.
* A good number of the unit tests on Windows are really broken. Badly. Let this be a catalyst to fix them.
Re: [VOTE] Merge YARN-3926 (resource profile) to trunk
Hi all,

Given we have 3 binding +1s, the vote passes. I just pushed the changes to trunk and will update the JIRAs accordingly.

Thanks, everybody, for helping with this feature and voting!

Best,
Wangda

On Sat, Aug 26, 2017 at 8:58 AM, Sunil G wrote:
> Hi Daniel
>
> Thank you very much for the support.
>
> * When you say that the feature can be turned off, do you mean resource types or resource profiles? I know there's an off-by-default property that governs resource profiles, but I didn't see any way to turn off resource types.
>
> Yes, *yarn.resourcemanager.resource-profiles.enabled* is false by default and controls off/on of this feature. Regarding new resource types: they are loaded from *resource-types.xml*, and by default this XML file is not shipped in the package, which prevents any issues in the default case. Once this file is added to a cluster, the new resources will be loaded from it.
>
> * Even if only CPU and memory are configured, i.e. no additional resource types, the code path is different than it was.
>
> Earlier, primitive data types were used to represent vcores and memory. With the resource profile work, every resource in YARN is categorized as a ResourceInformation and placed inside the existing Resource object. So memory and vcores remain accessible and operable with the same set of public APIs from Resources or ResourceCalculator (DRC) as before, even when the feature is off (the code path is the same, but improved to use a unified ResourceInformation class instead of the memory/vcores primitive types).
>
> Thanks
> Sunil
>
> On Sat, Aug 26, 2017 at 8:10 PM Daniel Templeton wrote:
>
> > Quick question, Wangda. When you say that the feature can be turned off, do you mean resource types or resource profiles? I know there's an off-by-default property that governs resource profiles, but I didn't see any way to turn off resource types. Even if only CPU and memory are configured, i.e. no additional resource types, the code path is different than it was. Specifically, where CPU and memory were primitives before, they're now entries in an array whose indexes have to be looked up through the ResourceUtils class. Did I miss something?
> >
> > For those who haven't followed the feature closely, there are really two features here. Resource types allow for declarative extension of the resource system in YARN. Resource profiles build on top of resource types to allow a user to request a group of resources as a profile, much like EC2 instance types, e.g. "fast-compute" might mean 32GB RAM, 8 vcores, and 2 GPUs.
> >
> > Daniel
> >
> > On 8/23/17 11:49 AM, Wangda Tan wrote:
> > > Hi folks,
> > >
> > > Per earlier discussion [1], I'd like to start a formal vote to merge feature branch YARN-3926 (Resource profile) to trunk. The vote will run for 7 days and will end August 30 10:00 AM PDT.
> > >
> > > Briefly, YARN-3926 extends the resource model of YARN to support resource types other than CPU and memory, so it will be a cornerstone of features like GPU support (YARN-6223), disk scheduling/isolation (YARN-2139), FPGA support (YARN-5983), and network IO scheduling/isolation (YARN-2140). In addition, YARN-3926 allows admins to preconfigure resource profiles in the cluster; for example, m3.large could mean <2 vcores, 8 GB memory, 64 GB disk>, so applications can request the "m3.large" profile instead of specifying all resource types' values.
> > >
> > > There are 32 subtasks that were completed as part of this effort.
> > >
> > > This feature needs to be explicitly turned on before use. We paid close attention to compatibility, performance, and scalability of this feature. As mentioned in [1], we didn't see observable performance regression in large-scale SLS (scheduler load simulator) executions, and we saw less than 5% performance regression in the micro benchmark added by YARN-6775.
> > >
> > > This feature works end-to-end (including UI/CLI/application/server); we have set up a cluster with this feature turned on that has run for several weeks, and we didn't see any issues so far.
> > >
> > > Merge JIRA: YARN-7013 (Jenkins gave +1 already).
> > > Documentation: YARN-7056
> > >
> > > Special thanks to a team of folks who worked hard and contributed towards this effort, including design discussion/development/reviews, etc.: Varun Vasudev, Sunil Govind, Daniel Templeton, Vinod Vavilapalli, Yufei Gu, Karthik Kambatla, Jason Lowe, Arun Suresh.
> > >
> > > Regards,
> > > Wangda Tan
> > >
> > > [1] http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201708.mbox/%3CCAD%2B%2BeCnjEHU%3D-M33QdjnND0ZL73eKwxRua4%3DBbp4G8inQZmaMg%40mail.gmail.com%3E
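For readers unfamiliar with the mechanism Sunil describes: new resource types are declared in a resource-types.xml file placed on the cluster's classpath. A minimal illustrative example is below; the property-name pattern follows the YARN resource-types configuration, but the `resource1` name and `G` units are made up for this sketch:

```
<configuration>
  <!-- Declare the additional resource types the cluster knows about. -->
  <property>
    <name>yarn.resource-types</name>
    <value>resource1</value>
  </property>
  <!-- Units for the new type (hypothetical example). -->
  <property>
    <name>yarn.resource-types.resource1.units</name>
    <value>G</value>
  </property>
</configuration>
```

As noted in the thread, when this file is absent only CPU and memory are configured, and resource profiles stay off unless yarn.resourcemanager.resource-profiles.enabled is set to true.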
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/

[Sep 11, 2017 5:06:05 PM] (lei) Revert "HDFS-12349. Improve log message when it could not alloc enough
[Sep 11, 2017 7:47:55 PM] (haibochen) YARN-7181. CPUTimeTracker.updateElapsedJiffies can report negative
[Sep 11, 2017 8:33:42 PM] (rchiang) HADOOP-14654. Update httpclient version to 4.5.3. (rchiang)
[Sep 11, 2017 8:54:40 PM] (rchiang) HADOOP-14655. Update httpcore version to 4.4.6. (rchiang)
[Sep 11, 2017 9:48:07 PM] (cliang) HDFS-12406. dfsadmin command prints "Exception encountered" even if
[Sep 11, 2017 10:46:23 PM] (templedf) YARN-6022. Document Docker work as experimental (Contributed by Varun
[Sep 11, 2017 11:14:18 PM] (templedf) Revert "YARN-6022. Document Docker work as experimental (Contributed by
[Sep 11, 2017 11:14:31 PM] (templedf) YARN-6622. Document Docker work as experimental (Contributed by Varun
[Sep 11, 2017 11:20:20 PM] (haibochen) YARN-7128. The error message in TimelineSchemaCreator is not enough to
[Sep 12, 2017 3:42:49 AM] (haibochen) YARN-7132. FairScheduler.initScheduler() contains a surprising unary
[Sep 12, 2017 3:52:08 AM] (wangda) YARN-7173. Container update RM-NM communication fix for backward

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 490]

    Failed junit tests :

       hadoop.net.TestDNS
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
       hadoop.hdfs.TestLeaseRecoveryStriped
       hadoop.hdfs.TestReconstructStripedFile
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
       hadoop.hdfs.web.TestWebHDFSAcl
       hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
       hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation
       hadoop.yarn.server.TestDiskFailures
       hadoop.yarn.client.api.impl.TestAMRMClient
       hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
       hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
       hadoop.fs.azure.TestNativeAzureFileSystemMocked
       hadoop.fs.azure.TestWasbFsck
       hadoop.fs.azure.TestOutOfBandAzureBlobOperations
       hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
       hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
       hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
       hadoop.yarn.sls.TestReservationSystemInvariants
       hadoop.yarn.sls.TestSLSRunner

    Timed out junit tests :

       org.apache.hadoop.hdfs.TestWriteReadStripedFile
       org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-compile-javac-root.txt [292K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/whitespace-eol.txt [11M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/521/artifact/out/whitespace-tabs.txt [1.2M]

   findbugs:
[jira] [Created] (HADOOP-14862) Metrics for AdlFileSystem
John Zhuge created HADOOP-14862:
-----------------------------------

             Summary: Metrics for AdlFileSystem
                 Key: HADOOP-14862
                 URL: https://issues.apache.org/jira/browse/HADOOP-14862
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/adl
    Affects Versions: 2.8.0
            Reporter: John Zhuge

Add a Metrics2 source {{AdlFileSystemInstrumentation}} for {{AdlFileSystem}}. Consider per-thread statistics if possible: atomic variables are not totally free on multi-core architectures, and Java offers no true per-CPU data structure.
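The "atomic variables are not totally free" point is the motivation for the JDK's striped counters: java.util.concurrent.atomic.LongAdder keeps per-cell (effectively per-thread) counts and only sums them on read, which is the closest standard-library approximation to the per-CPU structures mentioned. A sketch of how such counters could back a filesystem metrics source; the class and method names are hypothetical, not from any AdlFileSystem patch:

```java
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical sketch of striped counters for a filesystem metrics source.
 * LongAdder spreads updates across internal cells to avoid contention on a
 * single atomic variable under multi-threaded load.
 */
public class AdlMetricsSketch {
  private final LongAdder readOps = new LongAdder();
  private final LongAdder bytesRead = new LongAdder();

  public void recordRead(long bytes) {
    readOps.increment();   // cheap even under heavy contention
    bytesRead.add(bytes);
  }

  // sum() walks the cells; fine for periodic metrics snapshots
  public long getReadOps()   { return readOps.sum(); }
  public long getBytesRead() { return bytesRead.sum(); }
}
```

The trade-off is that sum() is not a point-in-time atomic snapshot, which is usually acceptable for metrics reporting.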