[jira] [Created] (HADOOP-14865) Mvnsite fail to execute macro defined in the document HDFSErasureCoding.md
SammiChen created HADOOP-14865:
--
Summary: Mvnsite fail to execute macro defined in the document HDFSErasureCoding.md
Key: HADOOP-14865
URL: https://issues.apache.org/jira/browse/HADOOP-14865
Project: Hadoop Common
Issue Type: Bug
Components: build
Reporter: SammiChen

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project hadoop-hdfs: Error parsing '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md': line [-1] Error parsing the model: Unable to execute macro in the document: toc -> [Help 1]
{code}

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164153#comment-16164153 ]

Hadoop QA commented on HADOOP-14089:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 41s | trunk passed |
| +1 | compile | 17m 24s | trunk passed |
| +1 | checkstyle | 3m 17s | trunk passed |
| +1 | mvnsite | 11m 21s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-api hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-check-invariants hadoop-client-modules/hadoop-client-minicluster hadoop-client-modules/hadoop-client-check-test-invariants hadoop-client-modules/hadoop-client-integration-tests |
| +1 | findbugs | 2m 17s | trunk passed |
| +1 | javadoc | 6m 0s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 23s | hadoop-mapreduce-client-shuffle in the patch failed. |
| -1 | mvninstall | 2m 54s | hadoop-client-api in the patch failed. |
| -1 | mvninstall | 2m 17s | hadoop-client-runtime in the patch failed. |
| -1 | mvninstall | 0m 17s | hadoop-client-check-invariants in the patch failed. |
| -1 | mvninstall | 2m 55s | hadoop-client-minicluster in the patch failed. |
| -1 | mvninstall | 0m 15s | hadoop-client-check-test-invariants in the patch failed. |
| -1 | mvninstall | 0m 18s | hadoop-client-integration-tests in the patch failed. |
| +1 | compile | 13m 13s | the patch passed |
| +1 | javac | 13m 13s | the patch passed |
| +1 | checkstyle | 2m 31s | the patch passed |
| +1 | mvnsite | 11m 45s | the patch passed |
| -1 | shellcheck | 0m 2s | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | shelldocs | 0m 10s | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 10s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-api hadoop-client-modules/hadoop-client-runtime
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164121#comment-16164121 ]

Hadoop QA commented on HADOOP-14652:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 47s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 15s | trunk passed |
| +1 | compile | 16m 54s | trunk passed |
| +1 | mvnsite | 11m 46s | trunk passed |
| +1 | javadoc | 5m 38s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 20s | hadoop-kms in the patch failed. |
| -1 | mvninstall | 0m 33s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | compile | 12m 50s | the patch passed |
| +1 | javac | 12m 50s | the patch passed |
| +1 | mvnsite | 10m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 6s | The patch has no ill-formed XML file. |
| +1 | javadoc | 5m 10s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 172m 43s | root in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 272m 33s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.TestFileCreation |
| | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestReplaceDatanodeOnFailure |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14652 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886755/HADOOP-14652.005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux 11611d6b31d1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 68282c8 |
| Default Java | 1.8.0_144 |
| mvninstall |
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164114#comment-16164114 ]

Sean Busbey commented on HADOOP-14857:
--
I reran testing this with maven 3.3.9 and everything passed, fyi.

> downstream client artifact IT fails
> -----------------------------------
>
> Key: HADOOP-14857
> URL: https://issues.apache.org/jira/browse/HADOOP-14857
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Blocker
> Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, HADOOP-14857.3.patch, HADOOP-18457.0.patch
>
> HADOOP-11804 added an IT to make sure downstreamers can use our client artifacts post-shading. It is currently broken:
> {code}
> useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 6.776 sec  <<< ERROR!
> java.lang.NullPointerException: null
>         at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 2.954 sec  <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
>         at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
>         at org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 2.954 sec  <<< ERROR!
> java.lang.NullPointerException: null
>         at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> {code}
> (edited after I fixed a downed loopback device)
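The {{NoClassDefFoundError}} in the quoted trace means the relocated mockito class never made it into the shaded artifact. One quick way to triage this class of failure is to grep the jar listing for the relocated class path before running the IT. This is a minimal sketch: it operates on a canned listing, and in practice you would pipe real {{jar tf}} output into the function (the jar contents below are illustrative, not taken from the actual build):

```shell
#!/bin/sh
# Check a (simulated) jar listing for a relocated class. With a real build you
# would replace the canned listing with e.g.: jar tf hadoop-client-minicluster-*.jar
check_class() {
    # $1 = full class path inside the jar; the listing is read from stdin
    if grep -q -x "$1"; then echo "present"; else echo "MISSING"; fi
}

# Canned listing standing in for `jar tf` output (illustrative entries):
listing='org/apache/hadoop/fs/FileSystem.class
org/apache/hadoop/shaded/org/mockito/Mockito.class'

printf '%s\n' "$listing" | check_class 'org/apache/hadoop/shaded/org/mockito/stubbing/Answer.class'
# prints: MISSING
```

A "MISSING" result points at the shading/minimization step dropping the class, rather than at a runtime classpath ordering problem.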
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164088#comment-16164088 ]

Hudson commented on HADOOP-14521:
-
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12858 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12858/])
Revert "HADOOP-14521. KMS client needs retry logic. Contributed by (xiao: rev fa6cc43edd3f6e886a40b90b062c9f16189c50d1)
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestLoadBalancingKMSClientProvider.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java

> KMS client needs retry logic
> ----------------------------
>
> Key: HADOOP-14521
> URL: https://issues.apache.org/jira/browse/HADOOP-14521
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 2.6.0
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Attachments: HADOOP-14521.09.patch, HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
> The KMS client appears to have no retry logic at all. It's completely decoupled from the IPC retry logic. This has major impacts if the KMS is unreachable for any reason, including but not limited to network connection issues, timeouts, and the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may experience higher failure rates if they do not already have retry logic
> # Tasks reading EZ files will fail, probably masked by framework reattempts
> # EZ file creation fails after creating a 0-length file: the client receives the EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail
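As an illustration of the retry behavior the issue description asks for (the real fix lands in the Java KMS client classes touched by the reverted commit, not in shell), here is a minimal retry-with-exponential-backoff wrapper; the function name and the flaky demo command are invented for the sketch:

```shell
#!/bin/sh
# Retry a command with exponential backoff -- the shape of behavior the issue
# proposes the KMS client should adopt. Purely illustrative.
retry() {
    # $1 = max attempts, $2 = initial delay in seconds, remainder = command
    max=$1; delay=$2; shift 2
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max" ]; then return 1; fi
        sleep "$delay"
        delay=$((delay * 2))        # back off exponentially between attempts
        attempt=$((attempt + 1))
    done
}

# Demo: flaky() fails twice, then succeeds on the third call.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
    n=$(($(cat "$count_file") + 1))
    echo "$n" > "$count_file"
    [ "$n" -ge 3 ]
}
retry 5 0 flaky && echo "succeeded after $(cat "$count_file") attempts"
# prints: succeeded after 3 attempts
```

Capping attempts (rather than retrying forever) matters here: an unbounded retry against a dead KMS would turn a fast failure into a hang.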
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164076#comment-16164076 ]

Sean Busbey commented on HADOOP-14089:
--
Still failing at the same point on trunk with the attached patch, now with maven 3.3.9:

{code}
$ mvn -version
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /home/busbey/lib/apache-maven-3.3.9
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /opt/toolchain/sun-jdk-64bit-1.8.0.131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.16.0-77-generic", arch: "amd64", family: "unix"
{code}

> Automated checking for malformed client artifacts.
> --------------------------------------------------
>
> Key: HADOOP-14089
> URL: https://issues.apache.org/jira/browse/HADOOP-14089
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: David Phillips
> Assignee: Sean Busbey
> Priority: Blocker
> Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch
>
> The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, {{javax/ws}}, {{mozilla}}, etc.
> An easy way to verify this is to look at the contents of the jar:
> {code}
> jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop'
> {code}
> For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS {{javax.ws}}, it makes sense for those to be normal dependencies in the POM -- they are standard, so version conflicts shouldn't be a problem. The JSR 305 annotations can be marked {{<optional>true</optional>}} since they aren't needed at runtime (this is what Guava does).
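The manual {{jar tf ... | grep -v}} spot check from the issue description can be turned into the kind of pass/fail gate this issue is about. This is a sketch only: the allowed prefixes below are assumptions for illustration, and the actual patch wires the check into the build (via the client-check-invariants modules) rather than a standalone script:

```shell
#!/bin/sh
# Fail when a jar listing contains entries outside an allowed set of prefixes.
# stdin = output of `jar tf <client jar>`; the prefixes here are illustrative,
# not the project's real invariant list.
check_invariants() {
    # Keep only lines matching none of the allowed patterns; '/$' skips
    # directory entries. `|| true` guards grep's exit 1 on "no matches".
    bad=$(grep -v -e '^org/apache/hadoop/' -e '^META-INF/' -e '/$' || true)
    if [ -n "$bad" ]; then
        echo "banned jar entries found:"
        echo "$bad"
        return 1
    fi
    echo "ok"
}

# Demo with a clean, canned listing (illustrative contents):
printf '%s\n' \
    'org/apache/hadoop/fs/FileSystem.class' \
    'META-INF/MANIFEST.MF' | check_invariants
# prints: ok
```

Run against a listing containing e.g. {{okio/Buffer.class}}, the function prints the offending entries and returns nonzero, which is what lets a build step fail on malformed artifacts automatically.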
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164065#comment-16164065 ]

Xiao Chen commented on HADOOP-14521:

Reverted from trunk, branch-2 and branch-2.8.
[jira] [Updated] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-14521:
---
Fix Version/s: (was: 3.1.0) (was: 2.8.3) (was: 2.9.0)
[jira] [Reopened] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen reopened HADOOP-14521:
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164064#comment-16164064 ]

Xiao Chen commented on HADOOP-14521:

Thanks Andrew for reverting. I'll take the simple route to revert this everywhere, and re-commit after improvement. Rushabh, please feel free to take a shot; I can work on the improvement patch next week if you're busy. Thanks again for the contribution so far.
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164061#comment-16164061 ]

Sean Busbey commented on HADOOP-14089:
--
I'm on trunk at 123342c and don't get this. Let me download maven 3.3.9.
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164054#comment-16164054 ]

Sean Busbey commented on HADOOP-14857:
--
so are we presuming maven 3.3.9 is fine, or am I setting up a machine with it?
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164053#comment-16164053 ]

Sean Busbey commented on HADOOP-14089:
--
no, not expected. supposed to be excluded at hadoop-mapreduce-client-shuffle. let me rebase again.
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164045#comment-16164045 ]

Hadoop QA commented on HADOOP-13055:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 27s | trunk passed |
| +1 | compile | 15m 34s | trunk passed |
| +1 | checkstyle | 2m 2s | trunk passed |
| +1 | mvnsite | 2m 3s | trunk passed |
| +1 | findbugs | 3m 48s | trunk passed |
| +1 | javadoc | 1m 47s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 1m 18s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 51s | hadoop-hdfs in the patch failed. |
| +1 | compile | 12m 2s | the patch passed |
| +1 | javac | 12m 2s | the patch passed |
| +1 | checkstyle | 2m 6s | root: The patch generated 0 new + 196 unchanged - 9 fixed = 196 total (was 205) |
| +1 | mvnsite | 2m 5s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 40s | the patch passed |
| +1 | javadoc | 1m 48s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 16s | hadoop-common in the patch failed. |
| -1 | unit | 83m 36s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 180m 35s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
| | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy |
| | hadoop.hdfs.client.impl.TestClientBlockVerification |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-13055 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886752/HADOOP-13055.06.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 054bb5fbe259 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82c5dd1 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/13271/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164040#comment-16164040 ] Andrew Wang commented on HADOOP-14089: -- Running "mvn verify -Dtest=NoSuchTest", this fails on some microsoft classes in the runtime jar, expected? {noformat} -> % jar tf ./hadoop-client-runtime/target/hadoop-client-runtime-3.1.0-SNAPSHOT.jar | sort | egrep -v "^org/apache/hadoop" | egrep -v "^META-INF" | egrep -v "^webapps" ehcache-107ext.xsd ehcache-core.xsd jetty-dir.css krb5-template.conf krb5_udp-template.conf microsoft/ microsoft/sql/ microsoft/sql/DateTimeOffset$1.class microsoft/sql/DateTimeOffset.class microsoft/sql/DateTimeOffset$SerializationProxy.class microsoft/sql/Types.class mozilla/ mozilla/public-suffix-list.txt org/ org/apache/ properties.dtd PropertyList-1.0.dtd {noformat} > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, > HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{<optional>true</optional>}} since they aren't needed > at runtime (this is what Guava does). 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
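The manual `jar tf` pipeline above can be turned into the kind of automated check HADOOP-14089 asks for. The following is a hypothetical shell sketch, not the project's actual enforcement code: the jar listing is simulated with `printf` so the snippet is self-contained, and the allow-list prefixes are assumptions taken from the egrep filters in the comment. A real check would pipe `jar tf hadoop-client-runtime-*.jar` instead.

```shell
# Hypothetical sketch of an automated jar-content check, mirroring the
# manual pipeline above. The entry list is simulated; a real check would
# run `jar tf` on the built hadoop-client-runtime artifact.
allowed='^(org/apache/hadoop|META-INF|webapps|mozilla)'
offenders=$(printf '%s\n' \
  'org/apache/hadoop/fs/FileSystem.class' \
  'META-INF/MANIFEST.MF' \
  'microsoft/sql/Types.class' \
  | grep -Ev "$allowed" || true)
if [ -n "$offenders" ]; then
  echo "unexpected entries in shaded jar:"
  echo "$offenders"
  # a real invariant check would exit 1 here to fail the build
fi
```

An enforcement version would exit non-zero on any offender so the Maven build fails at the invariants module.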
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164035#comment-16164035 ] Sean Busbey commented on HADOOP-13918: -- {quote} 1. Because HBase is using Hadoop internals, do you mean HBase is using some APIs which are not marked public? Is this the reason for moving this task to 3.1? {quote} Right, HBase uses a bunch of non-public interfaces and I have to get those uses isolated before I can move the bulk of HBase to rely on the client artifacts. {quote} 2. "especially in a way that can swap in Hadoop 2 via maven profiles" (I think we can pass in the Hadoop version to HBase during compilation), is this what you mean here? {quote} Yeah, HBase relies on Maven profiles to handle picking out particular dependencies and versions for its Hadoop use. > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests.
[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Hongfei updated HADOOP-14804: -- Target Version/s: 3.0.0-alpha4 Status: In Progress (was: Patch Available) > correct wrong parameters format order in core-default.xml > - > > Key: HADOOP-14804 > URL: https://issues.apache.org/jira/browse/HADOOP-14804 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch, > HADOOP-14804.003.patch > > > The descriptions of the "HTTP CORS" parameters are placed before the names: > > <property> > <description>Comma separated list of headers that are allowed for web > services needing cross-origin (CORS) support.</description> > <name>hadoop.http.cross-origin.allowed-headers</name> > <value>X-Requested-With,Content-Type,Accept,Origin</value> > </property> > .. > but the description should follow the value, as in the other properties.
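The ordering problem described above could in principle be caught mechanically. This is a hypothetical sketch only, under the simplifying assumption that each property fits on one line; the XML fragment is inlined for illustration, and a real check would read core-default.xml itself.

```shell
# Hypothetical check: count <property> entries where <description>
# appears before <name> -- the ordering issue the patch corrects.
# The fragment is inlined here; a real check would read core-default.xml.
fragment='<property><description>d</description><name>n</name><value>v</value></property>
<property><name>n2</name><value>v2</value><description>d2</description></property>'
bad=$(printf '%s\n' "$fragment" | grep -c '<description>.*<name>')
echo "properties with description before name: $bad"
```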
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163967#comment-16163967 ] Andrew Wang commented on HADOOP-14857: -- Okay, it's probably something environmental on my end, let's just leave the Maven version requirement at 3.3 then. Maven 3.5.0 is pretty nice though, it has colorized output :) > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
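The NoClassDefFoundError above names a class that was relocated under org/apache/hadoop/shaded/ but not actually bundled into the artifact. A hedged sketch of spotting that pattern in test output follows; the report line is simulated from the stack trace above rather than read from a real failsafe report, so the grep pattern is an assumption about how relocated class names look.

```shell
# Hedged sketch: scan a (simulated) test report for classes relocated
# under org/apache/hadoop/shaded/ that failed to load -- the
# NoClassDefFoundError symptom shown in the stack trace above.
report='java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer'
missing=$(printf '%s\n' "$report" \
  | grep -Eo 'org/apache/hadoop/shaded/[A-Za-z0-9_/$]+' || true)
echo "relocated but missing: $missing"
```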
[jira] [Comment Edited] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163950#comment-16163950 ] Bharat Viswanadham edited comment on HADOOP-14857 at 9/13/17 12:41 AM: --- +1 (non-binding) [~andrew.wang] I have applied the patch and ran it on my Mac machine. Maven version used: 3.3.9 What is the error you are facing? I am able to successfully build. {noformat} INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-client-modules --- [INFO] [INFO] --- maven-site-plugin:3.6:attach-descriptor (attach-descriptor) @ hadoop-client-modules --- [INFO] No site descriptor found: nothing to attach. [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-modules --- [INFO] [INFO] --- maven-install-plugin:2.5.1:install (default-install) @ hadoop-client-modules --- [INFO] Installing /Users/bviswanadham/workspace/hadoop-open/hadoop/hadoop-client-modules/pom.xml to /Users/bviswanadham/.m2/repository/org/apache/hadoop/hadoop-client-modules/3.1.0-SNAPSHOT/hadoop-client-modules-3.1.0-SNAPSHOT.pom [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Client Aggregator SUCCESS [ 6.428 s] [INFO] Apache Hadoop Client API ... SUCCESS [01:11 min] [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:01 min] [INFO] Apache Hadoop Client Test Minicluster .. SUCCESS [01:21 min] [INFO] Apache Hadoop Client Packaging Invariants .. SUCCESS [ 0.207 s] [INFO] Apache Hadoop Client Packaging Invariants for Test . SUCCESS [ 0.139 s] [INFO] Apache Hadoop Client Packaging Integration Tests ... SUCCESS [ 11.380 s] [INFO] Apache Hadoop Client Modules ... 
SUCCESS [ 0.034 s] [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 03:54 min [INFO] Finished at: 2017-09-12T17:33:11-07:00 [INFO] Final Memory: 71M/1049M [INFO] HW13762:hadoop-client-check-invariants bviswanadham$ mvn -v Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00) Maven home: /usr/local/apache-maven-3.3.9 Java version: 1.8.0_131, vendor: Oracle Corporation Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre Default locale: en_US, platform encoding: UTF-8 OS name: "mac os x", version: "10.12.5", arch: "x86_64", family: "mac" {noformat} was (Author: bharatviswa): +1 (non-binding) [[~andrew.wang] I have applied the patch and ran on my mac machine. Maven version used:3.3.9 What is the error you are facing? I am able to successfully build. {noformat} INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-client-modules --- [INFO] [INFO] --- maven-site-plugin:3.6:attach-descriptor (attach-descriptor) @ hadoop-client-modules --- [INFO] No site descriptor found: nothing to attach. [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-modules --- [INFO] [INFO] --- maven-install-plugin:2.5.1:install (default-install) @ hadoop-client-modules --- [INFO] Installing /Users/bviswanadham/workspace/hadoop-open/hadoop/hadoop-client-modules/pom.xml to /Users/bviswanadham/.m2/repository/org/apache/hadoop/hadoop-client-modules/3.1.0-SNAPSHOT/hadoop-client-modules-3.1.0-SNAPSHOT.pom [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Client Aggregator SUCCESS [ 6.428 s] [INFO] Apache Hadoop Client API ... SUCCESS [01:11 min] [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:01 min] [INFO] Apache Hadoop Client Test Minicluster .. SUCCESS [01:21 min] [INFO] Apache Hadoop Client Packaging Invariants .. SUCCESS [ 0.207 s] [INFO] Apache Hadoop Client Packaging Invariants for Test . 
SUCCESS [ 0.139 s] [INFO] Apache Hadoop Client Packaging Integration Tests ... SUCCESS [ 11.380 s] [INFO] Apache Hadoop Client Modules ... SUCCESS [ 0.034 s] [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 03:54 min [INFO] Finished at: 2017-09-12T17:33:11-07:00 [INFO] Final Memory: 71M/1049M [INFO] {noformat} > downstream client artifact IT fails > --- > > Key: HADOOP-14857 >
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163950#comment-16163950 ] Bharat Viswanadham commented on HADOOP-14857: - +1 (non-binding) [~andrew.wang] I have applied the patch and ran it on my Mac machine. Maven version used: 3.3.9 I am able to successfully build. INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-client-modules --- [INFO] [INFO] --- maven-site-plugin:3.6:attach-descriptor (attach-descriptor) @ hadoop-client-modules --- [INFO] No site descriptor found: nothing to attach. [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-modules --- [INFO] [INFO] --- maven-install-plugin:2.5.1:install (default-install) @ hadoop-client-modules --- [INFO] Installing /Users/bviswanadham/workspace/hadoop-open/hadoop/hadoop-client-modules/pom.xml to /Users/bviswanadham/.m2/repository/org/apache/hadoop/hadoop-client-modules/3.1.0-SNAPSHOT/hadoop-client-modules-3.1.0-SNAPSHOT.pom [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Client Aggregator SUCCESS [ 6.428 s] [INFO] Apache Hadoop Client API ... SUCCESS [01:11 min] [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:01 min] [INFO] Apache Hadoop Client Test Minicluster .. SUCCESS [01:21 min] [INFO] Apache Hadoop Client Packaging Invariants .. SUCCESS [ 0.207 s] [INFO] Apache Hadoop Client Packaging Invariants for Test . SUCCESS [ 0.139 s] [INFO] Apache Hadoop Client Packaging Integration Tests ... SUCCESS [ 11.380 s] [INFO] Apache Hadoop Client Modules ... 
SUCCESS [ 0.034 s] [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 03:54 min [INFO] Finished at: 2017-09-12T17:33:11-07:00 [INFO] Final Memory: 71M/1049M [INFO] > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > 
useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163939#comment-16163939 ] Andrew Wang commented on HADOOP-14857: -- The code changes here also look fine. Maybe we drop the debug line in HttpServer2, but I'm not picky. I peeked at Yetus and it appears to apt-get maven from the default Xenial repo, and so should be on the same version as me (3.3.9). Dunno why the build seems to work for precommit and not for me. I'd be concerned about requiring 3.5.0 since it requires updating Yetus as well as the Hadoop Dockerfile, and it's a worse out-of-the-box experience for devs. > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
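Since the discussion above turns on whether the local and precommit Maven versions match (both expected to be 3.3.9), here is a small hedged sketch of extracting the version from `mvn -v` output. The banner line is copied from the build log quoted earlier and inlined so the snippet runs without Maven installed; a real check would run `mvn -v` directly.

```shell
# Hedged sketch: extract the Maven version from `mvn -v` banner output
# and compare it to the version the precommit image is expected to use.
# The banner is inlined (from the log above) so this is self-contained.
mvn_banner='Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)'
ver=$(printf '%s\n' "$mvn_banner" | sed -n 's/^Apache Maven \([0-9][0-9.]*\).*/\1/p')
if [ "$ver" = "3.3.9" ]; then
  echo "matches precommit Maven $ver"
else
  echo "local Maven $ver differs from precommit 3.3.9"
fi
```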
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163931#comment-16163931 ] Hadoop QA commented on HADOOP-14857: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 6s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-minicluster hadoop-client-modules/hadoop-client-integration-tests . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-minicluster hadoop-client-modules/hadoop-client-integration-tests . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 46s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}135m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14857 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886730/HADOOP-14857.3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux dd21b7b2189c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 86f4d1c | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163930#comment-16163930 ] Bharat Viswanadham commented on HADOOP-13918: - [~busbey] Thanks for the info. Some questions, to understand more. 1. Because HBase is using hadoop internals, do you mean HBase is using some APIs which are not marked public? Is this the reason for moving this task to 3.1? 2. "especially in a way that can swap in Hadoop 2 via maven profiles" (I think we can pass the hadoop version to hbase during compilation); is this what you mean here? > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163923#comment-16163923 ] Andrew Wang commented on HADOOP-14857: -- Posting without more detailed review to say that with the patch applied, it failed with Maven 3.3 and passed with Maven 3.5.0. This is unfortunate since 3.3 is the default on Ubuntu 16.04. What's the right enforcer rule? Require 3.5.0? > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
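Regarding the enforcer question in the comment above (the patch failed under Maven 3.3 but passed under 3.5.0): one option is a maven-enforcer-plugin requireMavenVersion rule. A sketch of what that configuration could look like; the 3.5.0 floor is only the value suggested in the comment, not a settled decision:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-maven-version</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <!-- 3.3.x is reported to fail the shaded-client IT; 3.5.0 passes -->
            <version>[3.5.0,)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The trade-off noted in the thread is that Ubuntu 16.04 ships Maven 3.3 by default, so raising the floor makes local builds harder for some contributors.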
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163908#comment-16163908 ] Sean Busbey commented on HADOOP-13918: -- This is to add an IT in hadoop that flexes parts of our API in a way that looks like how HBase interacts with us. I had planned to structure this as-of-yet-non-existent test based on my experience getting HBase to make use of the shaded Hadoop client. Unfortunately, HBase is still using a large number of Hadoop internals, so getting that change over done (especially in a way that can swap in Hadoop 2 via maven profiles) is taking much longer than I originally thought. Make sense? > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests.
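"Swap in Hadoop 2 via maven profiles" refers to the common pattern of parameterizing the Hadoop dependency version behind a profile. A minimal illustrative sketch (property and profile names here are hypothetical, not from any actual HBase or Hadoop POM):

```xml
<properties>
  <hadoop.version>3.0.0-alpha4</hadoop.version>
</properties>
<profiles>
  <!-- activate with: mvn -Phadoop-2 ... -->
  <profile>
    <id>hadoop-2</id>
    <properties>
      <hadoop.version>2.7.4</hadoop.version>
    </properties>
  </profile>
</profiles>
```

Dependency declarations then reference `${hadoop.version}`, so the same test code compiles against either Hadoop line.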
[jira] [Assigned] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-14864: --- Assignee: Bharat Viswanadham > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat}
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14864: Labels: newbie supportability (was: newbie) > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Priority: Minor > Labels: newbie, supportability > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat}
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14864: Labels: newbie (was: ) > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Priority: Minor > Labels: newbie > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat}
[jira] [Created] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
John Zhuge created HADOOP-14864: --- Summary: FSDataInputStream#unbuffer UOE exception should print the stream class name Key: HADOOP-14864 URL: https://issues.apache.org/jira/browse/HADOOP-14864 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.6.4 Reporter: John Zhuge Priority: Minor The current exception message: {noformat} org/apache/hadoop/fs/ failed: error: UnsupportedOperationException: this stream does not support unbuffering.java.lang.UnsupportedOperationException: this stream does not support unbuffering. at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) {noformat}
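The improvement HADOOP-14864 asks for amounts to including the concrete stream class in the exception text. A minimal self-contained sketch of that message construction (the helper name is illustrative; this is not the actual Hadoop patch):

```java
// Sketch of the improved UnsupportedOperationException message proposed in
// HADOOP-14864: name the wrapped stream's class so users can tell which
// stream implementation lacks unbuffer support.
public class UnbufferMessageDemo {

    // Hypothetical helper: builds the error text for a stream that cannot unbuffer.
    static String unsupportedMessage(Object wrappedStream) {
        return "this stream does not support unbuffering: "
                + wrappedStream.getClass().getName();
    }

    public static void main(String[] args) {
        Object in = new java.io.ByteArrayInputStream(new byte[0]);
        // Prints a message that identifies the offending stream class.
        System.out.println(unsupportedMessage(in));
    }
}
```

With this shape, the stack trace in the description would name the underlying stream instead of only the generic "this stream does not support unbuffering." text.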
[jira] [Comment Edited] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163861#comment-16163861 ] Bharat Viswanadham edited comment on HADOOP-13918 at 9/12/17 11:52 PM: --- Hi [~busbey] Is this task is to add IT code in hadoop or use shaded client of hadoop in hbase IT? I am not clear on your comment, am i missing something here? was (Author: bharatviswa): Hi [~busbey] Is this task is to add IT code in hadoop or use shaded client of hadoop in hbase IT? I have not clear on your comment, am i missing something here? > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests.
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Priority: Blocker (was: Critical) > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, > HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does).
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Status: Patch Available (was: In Progress) setting to patch available, but it'll be more meaningful to use precommit once HADOOP-14857 lands so that YETUS-543 can be merged. > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, > HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does).
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Attachment: HADOOP-14089.2.patch -2 ready for review - rebase to recent trunk - squash - includes latest patch for HADOOP-14857 - fixes relocation to be consistent across all three artifacts so the resultant jars work together in the IT. dependency reduced poms look reasonable. the minicluster one has some additional optional dependencies that are surprising, but since they're optional, and thus won't be seen by downstream folks, I think we're fine for now and can chase them down later. {code} [INFO] [INFO] Building Apache Hadoop Client API 3.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hadoop-client-api --- [INFO] org.apache.hadoop:hadoop-client-api:jar:3.1.0-SNAPSHOT {code} {code} [INFO] [INFO] Building Apache Hadoop Client Runtime 3.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hadoop-client-runtime --- [INFO] org.apache.hadoop:hadoop-client-runtime:jar:3.1.0-SNAPSHOT [INFO] +- org.apache.hadoop:hadoop-client-api:jar:3.1.0-SNAPSHOT:runtime [INFO] +- org.apache.htrace:htrace-core4:jar:4.1.0-incubating:runtime [INFO] +- org.slf4j:slf4j-api:jar:1.7.25:runtime [INFO] +- commons-logging:commons-logging:jar:1.1.3:runtime [INFO] +- com.google.code.findbugs:jsr305:jar:3.0.0:runtime [INFO] \- log4j:log4j:jar:1.2.17:runtime (optional) {code} {code} [INFO] [INFO] Building Apache Hadoop Client Test Minicluster 3.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hadoop-client-minicluster --- [INFO] org.apache.hadoop:hadoop-client-minicluster:jar:3.1.0-SNAPSHOT [INFO] +- org.apache.hadoop:hadoop-client-api:jar:3.1.0-SNAPSHOT:runtime [INFO] +- org.apache.hadoop:hadoop-client-runtime:jar:3.1.0-SNAPSHOT:runtime [INFO] | +- org.apache.htrace:htrace-core4:jar:4.1.0-incubating:runtime [INFO] | +- 
org.slf4j:slf4j-api:jar:1.7.25:runtime [INFO] | +- commons-logging:commons-logging:jar:1.1.3:runtime [INFO] | \- com.google.code.findbugs:jsr305:jar:3.0.0:runtime [INFO] +- junit:junit:jar:4.11:runtime [INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:runtime [INFO] +- org.apache.hadoop:hadoop-annotations:jar:3.1.0-SNAPSHOT:compile (optional) [INFO] +- org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.1.0-SNAPSHOT:runtime (optional) [INFO] +- org.apache.hadoop:hadoop-common:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) [INFO] +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) {code} > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, > HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does).
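The `jar tf ... | grep -v` spot check in the issue description can be automated in the same spirit. A hedged sketch (this is not the HADOOP-14089 patch; it simply operates on a list of jar entry names) that flags class files leaking outside the allowed shaded prefixes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of an invariant check for a shaded client jar: every .class entry
// must live under one of the expected relocated package prefixes.
public class ShadedJarCheck {

    // Returns the class-file entries that are NOT under an allowed prefix.
    static List<String> leakedClasses(List<String> entries, List<String> allowedPrefixes) {
        return entries.stream()
                .filter(e -> e.endsWith(".class"))
                .filter(e -> allowedPrefixes.stream().noneMatch(e::startsWith))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> entries = Arrays.asList(
                "org/apache/hadoop/fs/FileSystem.class",
                "org/apache/hadoop/shaded/com/google/common/base/Joiner.class",
                "okio/Buffer.class",        // a leak: third-party class not relocated
                "META-INF/MANIFEST.MF");    // non-class resources are ignored here
        List<String> allowed = Arrays.asList("org/apache/hadoop/");
        // In a real build the entry list would come from reading the
        // hadoop-client-runtime jar (e.g. via java.util.zip.ZipFile), and a
        // non-empty result would fail the build.
        System.out.println(leakedClasses(entries, allowed));  // [okio/Buffer.class]
    }
}
```

This is the kind of invariant a precommit check can enforce mechanically instead of relying on someone eyeballing the `jar tf` output.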
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163861#comment-16163861 ] Bharat Viswanadham commented on HADOOP-13918: - Hi [~busbey] Is this task to add IT code in hadoop, or to use the shaded client of hadoop in hbase ITs? I am not clear on your comment; am I missing something here? > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests.
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Summary: Automated checking for malformed client artifacts. (was: Shaded Hadoop client runtime includes non-shaded classes) > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does).
[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14652: Attachment: HADOOP-14652.005.patch * Rebase again against latest NOTICE > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch, HADOOP-14652.004.patch, HADOOP-14652.005.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.4
[jira] [Commented] (HADOOP-14217) Object Storage: support colon in object path
[ https://issues.apache.org/jira/browse/HADOOP-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163849#comment-16163849 ] Yuliya Feldman commented on HADOOP-14217: - I have added a few tests to FileSystemContractBaseTest and updated HDFS and WebHDFS to not run those tests, as HDFS, and subsequently WebHdfs, currently does not support a colon in any portion of the path, including file names. I did test with RawLocalFileSystem and S3(a,n). Since I can't test Azure, Swift, or other FSs, it would be great to get feedback on whether they support colons and can run those tests. > Object Storage: support colon in object path > > > Key: HADOOP-14217 > URL: https://issues.apache.org/jira/browse/HADOOP-14217 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/oss >Affects Versions: 2.8.1 >Reporter: Genmao Yu >Assignee: Yuliya Feldman > Attachments: Colon handling in hadoop Path.pdf > >
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163841#comment-16163841 ] Sean Busbey commented on HADOOP-13918: -- HBase is pretty caught up in getting their HBase 2.y release line started, so I don't think I can get them onto the shaded client in time for it. That makes shaping this IT nearly impossible. I'd like to push this out to 3.1 if there aren't any objections. > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests.
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13055: Attachment: HADOOP-13055.06.patch Attached v06 patch to address the following: 1. Support for LinkType.MERGE_SLASH and LinkType.SINGLE_FALLBACK (HADOOP-14136) 2. Helper class LinkEntry for building the mount table for the links configured 3. Code cleanups in {{ViewFileSystem}} and {{InodeTree}} to avoid referring to member variables directly. 4. Tests for LinkType.MERGE_SLASH and LinkType.SINGLE_FALLBACK [~andrew.wang], [~chris.douglas], [~xkrogen] can you please review the patch? Thanks. > Implement linkMergeSlash for ViewFileSystem > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Affects Versions: 2.7.5 >Reporter: Zhe Zhang >Assignee: Manoj Govindassamy > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, > HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, > HADOOP-13055.05.patch, HADOOP-13055.06.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code}
[jira] [Comment Edited] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163789#comment-16163789 ] Ray Chiang edited comment on HADOOP-14652 at 9/12/17 10:38 PM: --- TestRaceWhenRelogin doesn't look like HADOOP-14078. Will check. TestKDiag looks like HADOOP-14636. was (Author: rchiang): TestRaceWhenRelogin looks like HADOOP-14078. TestKDiag looks like HADOOP-14636. > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch, HADOOP-14652.004.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.4
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163789#comment-16163789 ] Ray Chiang commented on HADOOP-14652: - TestRaceWhenRelogin looks like HADOOP-14078. TestKDiag looks like HADOOP-14636. > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch, HADOOP-14652.004.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.4
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163771#comment-16163771 ] Hadoop QA commented on HADOOP-14652: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 10m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 34s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14652 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886718/HADOOP-14652.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux b9a741fdc706 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f1d751b | | Default Java | 1.8.0_144 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13269/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13269/testReport/ | | modules | C: hadoop-project hadoop-common-project/hadoop-kms hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-tools/hadoop-sls . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13269/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang >
[jira] [Commented] (HADOOP-14843) Improve FsPermission symbolic parsing unit test coverage
[ https://issues.apache.org/jira/browse/HADOOP-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163759#comment-16163759 ] Hudson commented on HADOOP-14843: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12852 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12852/]) HADOOP-14843. Improve FsPermission symbolic parsing unit test coverage. (jlowe: rev 86f4d1c66c8b541465ff769e5d951305c41c715c) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/permission/TestFsPermission.java > Improve FsPermission symbolic parsing unit test coverage > > > Key: HADOOP-14843 > URL: https://issues.apache.org/jira/browse/HADOOP-14843 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.4, 2.8.1 >Reporter: Jason Lowe >Assignee: Bharat Viswanadham >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HADOOP-14843.01.patch, HADOOP-14843.02.patch, > HADOOP-14843.03.patch, HADOOP-14843.04.patch, HADOOP-14843.patch > > > A user misunderstood the syntax format for the FsPermission symbolic > constructor and passed the argument "-rwr" instead of "u=rw,g=r". In 2.7 and > earlier this was silently misinterpreted as mode 0777 and in 2.8 it oddly > became mode . In either case FsPermission should have flagged "-rwr" as > an invalid argument rather than silently misinterpreting it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
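The confusion described above — passing ls(1)-style output like "-rwr" where POSIX symbolic clauses like "u=rw,g=r" are expected — can be illustrated with a small stand-alone parser. This is a hypothetical sketch for illustration only, not Hadoop's actual FsPermission grammar (which also handles +/-, octal, and the sticky bit); the point is that a strict clause grammar rejects "-rwr" loudly instead of misinterpreting it:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SymbolicMode {
    // Matches clauses like "u=rw" or "g=r". Deliberately minimal compared
    // to the real FsPermission parser: no "+"/"-" operators, no octal.
    private static final Pattern CLAUSE = Pattern.compile("([ugoa]+)=([rwx]*)");

    public static int parse(String symbolic) {
        int mode = 0;
        for (String clause : symbolic.split(",")) {
            Matcher m = CLAUSE.matcher(clause);
            if (!m.matches()) {
                // "-rwr" lands here: it is ls(1) display output, not a
                // symbolic clause, and should be rejected, not guessed at.
                throw new IllegalArgumentException("invalid clause: " + clause);
            }
            int bits = 0;
            if (m.group(2).contains("r")) bits |= 4;
            if (m.group(2).contains("w")) bits |= 2;
            if (m.group(2).contains("x")) bits |= 1;
            for (char who : m.group(1).toCharArray()) {
                if (who == 'u' || who == 'a') mode |= bits << 6;
                if (who == 'g' || who == 'a') mode |= bits << 3;
                if (who == 'o' || who == 'a') mode |= bits;
            }
        }
        return mode;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toOctalString(parse("u=rw,g=r"))); // 640
        try {
            parse("-rwr");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Under this sketch, "u=rw,g=r" yields mode 0640, while "-rwr" throws immediately — the behavior the patch's new unit tests pin down for the real constructor.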
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163754#comment-16163754 ] Sean Busbey commented on HADOOP-14857: -- [~andrew.wang] can you try out this patch on both your current Maven and Maven 3.5.0? > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
[jira] [Updated] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14857: - Status: Patch Available (was: In Progress) > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
[jira] [Updated] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14857: - Attachment: HADOOP-14857.3.patch -3 - mark mockito as an optional dependency for the shaded minicluster - note that this (and all the optionals) are a workaround for MNG-5899 and maven versions 3.3+ > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-14857.3.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
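The fix described in the v3 patch comment — marking mockito optional in the shaded minicluster pom as a workaround for MNG-5899 on Maven 3.3+ — would look roughly like the fragment below. This is a sketch for illustration, not the committed patch: the actual artifactId and version are managed in hadoop-project's dependencyManagement, and the surrounding pom context is omitted.

```xml
<!-- Sketch: hadoop-client-modules/hadoop-client-minicluster/pom.xml -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <!-- Workaround for MNG-5899: on Maven 3.3+, in-reactor consumers resolve
       the original pom rather than the shade plugin's dependency-reduced
       pom, so shaded-away dependencies must be marked optional to keep
       them off downstream classpaths. -->
  <optional>true</optional>
</dependency>
```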
[jira] [Updated] (HADOOP-14843) Improve FsPermission symbolic parsing unit test coverage
[ https://issues.apache.org/jira/browse/HADOOP-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-14843: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Thanks, [~bharatviswa]! I committed this to trunk, branch-3.0, and branch-2. > Improve FsPermission symbolic parsing unit test coverage > > > Key: HADOOP-14843 > URL: https://issues.apache.org/jira/browse/HADOOP-14843 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.4, 2.8.1 >Reporter: Jason Lowe >Assignee: Bharat Viswanadham >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HADOOP-14843.01.patch, HADOOP-14843.02.patch, > HADOOP-14843.03.patch, HADOOP-14843.04.patch, HADOOP-14843.patch > > > A user misunderstood the syntax format for the FsPermission symbolic > constructor and passed the argument "-rwr" instead of "u=rw,g=r". In 2.7 and > earlier this was silently misinterpreted as mode 0777 and in 2.8 it oddly > became mode . In either case FsPermission should have flagged "-rwr" as > an invalid argument rather than silently misinterpreting it.
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163735#comment-16163735 ] Hadoop QA commented on HADOOP-14652: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}159m 27s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}252m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14652 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886169/HADOOP-14652.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux c20ae0958f84 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / af45cd1 | | Default Java | 1.8.0_144 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13263/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13263/testReport/ | | modules | C: hadoop-project hadoop-common-project/hadoop-kms hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-tools/hadoop-sls . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13263/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update metrics-core version to 3.2.4 > > >
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163726#comment-16163726 ] Sean Busbey commented on HADOOP-14857: -- ah of course. this is MNG-5899. we're already working around it for the rest of the dependencies. I'll fix this and add a note so it's easier to recognize in the future. > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device)
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163692#comment-16163692 ] Hadoop QA commented on HADOOP-13055: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 49s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 6s{color} | {color:orange} root: The patch generated 21 new + 196 unchanged - 9 fixed = 217 total (was 205) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 56s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 31s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestPread | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | | hadoop.hdfs.TestLeaseRecoveryStriped | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-13055 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886565/HADOOP-13055.05.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 92f860dc314b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163691#comment-16163691 ] Hadoop QA commented on HADOOP-13055: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 21 new + 196 unchanged - 9 fixed = 217 total (was 205) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 55s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 55s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}195m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestKDiag | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestListFilesInFileContext | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
[jira] [Commented] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163654#comment-16163654 ] Hudson commented on HADOOP-14856: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12851 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12851/]) HADOOP-14856. Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt. (rchiang: rev 2ffe93ab609dfb54c3d1a53273ac2bc5ad15a5dd) * (edit) NOTICE.txt > Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14856.001.patch, HADOOP-14856.002.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647.
[jira] [Commented] (HADOOP-14862) Metrics for AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163651#comment-16163651 ] John Zhuge commented on HADOOP-14862: - Similar instrumentation code in S3a and Wasb: S3AInstrumentation and AzureFileSystemInstrumentation. > Metrics for AdlFileSystem > - > > Key: HADOOP-14862 > URL: https://issues.apache.org/jira/browse/HADOOP-14862 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Affects Versions: 2.8.0 >Reporter: John Zhuge > > Add a Metrics2 source {{AdlFileSystemInstrumentation}} for {{AdlFileSystem}}. > Consider per-thread statistics data if possible. Atomic variables are not > totally free in multi-core arch. Don't think Java can do per-cpu data > structure. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
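The per-thread-counter idea raised above (atomic variables are not free under multi-core contention) is roughly what `java.util.concurrent.atomic.LongAdder` already provides in the JDK: it stripes increments across per-thread cells and only folds them together on read. A minimal sketch of what a counter inside an {{AdlFileSystemInstrumentation}} source could look like — the class and metric names here are hypothetical, not taken from any patch:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of a striped counter for ADL filesystem metrics.
// LongAdder spreads add() calls across per-thread cells, so hot write paths
// avoid the cache-line contention of a single shared AtomicLong; sum() folds
// the cells together only when the metric is actually read.
public class AdlInstrumentationSketch {
    private final LongAdder bytesRead = new LongAdder();
    private final LongAdder readOps = new LongAdder();

    public void recordRead(long bytes) {
        bytesRead.add(bytes);   // cheap even under contention
        readOps.increment();
    }

    public long getBytesRead() { return bytesRead.sum(); }
    public long getReadOps() { return readOps.sum(); }
}
```

Reads of `sum()` are not atomic snapshots, which is usually acceptable for monitoring counters; if exact point-in-time totals were required, a plain `AtomicLong` would be the safer trade-off.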
[jira] [Updated] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14856: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) > Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14856.001.patch, HADOOP-14856.002.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163621#comment-16163621 ] Ray Chiang commented on HADOOP-14856: - Committed to trunk and branch-3.0. Thanks [~andrew.wang] for the review! > Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14856.001.patch, HADOOP-14856.002.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14652: Description: The current artifact is: com.codehale.metrics:metrics-core:3.0.1 That version could either be bumped to 3.0.2 (the latest of that line), or use the latest artifact: io.dropwizard.metrics:metrics-core:3.2.4 was: The current artifact is: com.codehale.metrics:metrics-core:3.0.1 That version could either be bumped to 3.0.2 (the latest of that line), or use the latest artifact: io.dropwizard.metrics:metrics-core:3.2.3 > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch, HADOOP-14652.004.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.4 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14652: Attachment: HADOOP-14652.004.patch * Fix NOTICE.txt to correct version of metrics-core > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch, HADOOP-14652.004.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.3 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14652: Summary: Update metrics-core version to 3.2.4 (was: Update metrics-core version to 3.2.3) > Update metrics-core version to 3.2.4 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch > > > The current artifact is: > com.codehale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > use the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.3 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163595#comment-16163595 ] Sean Busbey commented on HADOOP-14857: -- progress! with maven 3.5 I do hit an error, but a different one. {code} $ mvn -version Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T12:39:06-07:00) Maven home: /home/busbey/lib/apache-maven-3.5.0 Java version: 1.8.0_131, vendor: Oracle Corporation Java home: /opt/toolchain/sun-jdk-64bit-1.8.0.131/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.16.0-77-generic", arch: "amd64", family: "unix" $ mvn -Dtest=NoUnitTests -pl hadoop-client-modules/hadoop-client-check-invariants -pl hadoop-client-modules/hadoop-client-check-test-invariants -pl hadoop-client-modules/hadoop-client-integration-tests -am install > ../mvn.log Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 $ [INFO] [INFO] Building Apache Hadoop Client Packaging Invariants for Test 3.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-client-check-test-invariants --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) @ hadoop-client-check-test-invariants --- [INFO] Adding ignorable dependency: org.apache.hadoop:hadoop-annotations:null [INFO] Adding ignore: * [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BanTransitiveDependencies failed with message: org.apache.hadoop:hadoop-client-check-test-invariants:pom:3.1.0-SNAPSHOT org.apache.hadoop:hadoop-client-minicluster:jar:3.1.0-SNAPSHOT:compile has transitive dependencies: junit:junit:jar:4.11:runtime [excluded] org.mockito:mockito-all:jar:1.8.5:compile {code} let me figure out why that's not reduced. should be fast. 
> downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions
[ https://issues.apache.org/jira/browse/HADOOP-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HADOOP-14840: Attachment: ResourceEstimator-design-v1.pdf Attaching design doc > Tool to estimate resource requirements of an application pipeline based on > prior executions > --- > > Key: HADOOP-14840 > URL: https://issues.apache.org/jira/browse/HADOOP-14840 > Project: Hadoop Common > Issue Type: New Feature > Components: tools >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: ResourceEstimator-design-v1.pdf > > > We have been working on providing SLAs for job execution on Hadoop. At high > level this involves 2 parts: deriving the resource requirements of a job and > guaranteeing the estimated resources at runtime. The {{YARN > ReservationSystem}} (YARN-1051/YARN-2572/YARN-5326) enable the latter and in > this JIRA, we propose to add a tool to Hadoop to predict the resource > requirements of a job based on past executions of the job. The system (aka > *Morpheus*) deep dive can be found in our OSDI'16 paper > [here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14863) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
Varun Saxena created HADOOP-14863: - Summary: branch-2 native compilation broken in hadoop-yarn-server-nodemanager Key: HADOOP-14863 URL: https://issues.apache.org/jira/browse/HADOOP-14863 Project: Hadoop Common Issue Type: Bug Reporter: Varun Saxena {noformat} [WARNING] make[2]: Leaving directory `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' [WARNING] make[1]: Leaving directory `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' [WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c: In function ‘all_numbers’: [WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: error: ‘for’ loop initial declarations are only allowed in C99 mode [WARNING]for (int i = 0; i < strlen(input); i++) { [WARNING]^ [WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: note: use option -std=c99 or -std=gnu99 to compile your code [WARNING] make[2]: *** [CMakeFiles/container.dir/main/native/container-executor/impl/utils/string-utils.c.o] Error 1 [WARNING] make[2]: *** Waiting for unfinished jobs [WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c: In function ‘tokenize_docker_command’: [WARNING] /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1193:7: warning: unused variable ‘c’ [-Wunused-variable] 
[WARNING]int c = 0; [WARNING]^ [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 [WARNING] make: *** [all] Error 2 {noformat}
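The compiler note points at two possible fixes: add {{-std=c99}} (or {{-std=gnu99}}) to the CMake C flags, or keep the source C89-compatible by hoisting the loop index out of the {{for}} statement. A sketch of the C89-compatible form of {{all_numbers}} — reconstructed from the error output for illustration, not the exact Hadoop source:

```c
#include <assert.h>
#include <string.h>

/* C89-compatible sketch of all_numbers(): the index is declared at the top of
 * the block instead of inside the for-statement, so it compiles without
 * -std=c99. It also hoists strlen() out of the loop condition instead of
 * re-evaluating it on every iteration. Reconstruction for illustration only;
 * the empty-string behavior here is an assumption. */
static int all_numbers(const char *input) {
    size_t i;
    size_t len = strlen(input);
    if (len == 0) {
        return 0;
    }
    for (i = 0; i < len; i++) {
        if (input[i] < '0' || input[i] > '9') {
            return 0;
        }
    }
    return 1;
}
```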
[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13786: Status: Open (was: Patch Available) > Add S3Guard committer for zero-rename commits to S3 endpoints > - > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: cloud-intergration-test-failure.log, > HADOOP-13786-036.patch, HADOOP-13786-HADOOP-13345-001.patch, > HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, > HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, > HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, > HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, > HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, > HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, > HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, > HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, > HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, > HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, > HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, > HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, > HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, > HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, > HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, > HADOOP-13786-HADOOP-13345-032.patch, HADOOP-13786-HADOOP-13345-033.patch, > HADOOP-13786-HADOOP-13345-035.patch, objectstore.pdf, s3committer-master.zip > > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of 
failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-14303) Review retry logic on all S3 SDK calls, implement where needed
[ https://issues.apache.org/jira/browse/HADOOP-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-14303 started by Steve Loughran. --- > Review retry logic on all S3 SDK calls, implement where needed > -- > > Key: HADOOP-14303 > URL: https://issues.apache.org/jira/browse/HADOOP-14303 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > AWS S3, IAM, KMS, DDB etc all throttle callers: the S3A code needs to handle > this without failing, as if it slows down its requests it can recover. > 1. Look at all the places where we are calling S3A via the AWS SDK and make > sure we are retrying with some backoff & jitter policy, ideally something > unified. This must be more systematic than the case-by-case, > problem-by-problem strategy we are implicitly using. > 2. Many of the AWS S3 SDK calls do implement retry (e.g PUT/multipart PUT), > but we need to check the other parts of the process: login, initiate/complete > MPU, ... > Related > HADOOP-13811 Failed to sanitize XML document destined for handler class > HADOOP-13664 S3AInputStream to use a retry policy on read failures > This stuff is all hard to test. A key need is to be able to differentiate > recoverable throttle & network failures from unrecoverable problems like: > auth, network config (e.g bad endpoint), etc. > May be the opportunity to add a faulting subclass of Amazon S3 client which > can be configured in IT Tests to fail at specific points. Ryan Blue's mock S3 > client does this in HADOOP-13786, but it is for 100% mock. I'm thinking of > something with similar fault raising, but in front of the real S3A client -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
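The "backoff & jitter policy, ideally something unified" asked for in point 1 of the description above can be sketched as a small generic helper. This is an illustrative sketch only — the class name, signature and defaults are hypothetical, not the eventual Hadoop API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of a unified retry policy for idempotent S3 calls:
// exponential backoff with "full jitter", i.e. each failed attempt sleeps a
// random delay drawn from [0, base * 2^attempt) so that throttled callers
// de-synchronize instead of retrying in lockstep.
public class RetrySketch {
    public static <T> T retry(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                // A real policy must first classify the failure: retry only
                // recoverable throttle/network errors, never auth or bad
                // endpoint config, which this sketch does not attempt.
                last = e;
                long cap = Math.max(1, baseDelayMs << attempt);
                Thread.sleep(ThreadLocalRandom.current().nextLong(cap));
            }
        }
        throw last;
    }
}
```

The missing piece relative to the description is the recoverable/unrecoverable classification; in practice that decision, not the backoff arithmetic, is where the per-call review effort goes.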
[jira] [Assigned] (HADOOP-14303) Review retry logic on all S3 SDK calls, implement where needed
[ https://issues.apache.org/jira/browse/HADOOP-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-14303: --- Assignee: Steve Loughran > Review retry logic on all S3 SDK calls, implement where needed > -- > > Key: HADOOP-14303 > URL: https://issues.apache.org/jira/browse/HADOOP-14303 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > AWS S3, IAM, KMS, DDB etc all throttle callers: the S3A code needs to handle > this without failing, as if it slows down its requests it can recover. > 1. Look at all the places where we are calling S3A via the AWS SDK and make > sure we are retrying with some backoff & jitter policy, ideally something > unified. This must be more systematic than the case-by-case, > problem-by-problem strategy we are implicitly using. > 2. Many of the AWS S3 SDK calls do implement retry (e.g PUT/multipart PUT), > but we need to check the other parts of the process: login, initiate/complete > MPU, ... > Related > HADOOP-13811 Failed to sanitize XML document destined for handler class > HADOOP-13664 S3AInputStream to use a retry policy on read failures > This stuff is all hard to test. A key need is to be able to differentiate > recoverable throttle & network failures from unrecoverable problems like: > auth, network config (e.g bad endpoint), etc. > May be the opportunity to add a faulting subclass of Amazon S3 client which > can be configured in IT Tests to fail at specific points. Ryan Blue's mock S3 > client does this in HADOOP-13786, but it is for 100% mock. I'm thinking of > something with similar fault raising, but in front of the real S3A client -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14818) Can not show help message of namenode/datanode/nodemanager when process started.
[ https://issues.apache.org/jira/browse/HADOOP-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HADOOP-14818: --- Assignee: (was: Ajay Kumar) > Can not show help message of namenode/datanode/nodemanager when process > started. > > > Key: HADOOP-14818 > URL: https://issues.apache.org/jira/browse/HADOOP-14818 > Project: Hadoop Common > Issue Type: Improvement > Components: bin >Affects Versions: 3.0.0-beta1 >Reporter: Wenxin He >Priority: Minor > > We should always get the help message whenever the process is started or not. > But now, > when datanode starts, we get an error message: > {noformat} > hadoop# bin/hdfs datanode -h > datanode is running as process 1701. Stop it first. > {noformat} > when datanode stops, we get what we want: > {noformat} > hadoop# bin/hdfs --daemon stop datanode > hadoop# bin/hdfs datanode -h > Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback ] > -regular : Normal DataNode startup (default). > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14857: - Attachment: HADOOP-14857.2.patch -2 - rebased on to f1d751b for origin/trunk - excluded hamcrest brought in by mockito - update dependency plugin to latest This passed for me using {code} $ mvn -version Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 Apache Maven 3.2.2 (45f7c06d68e745d05611f7fd14efb6594181933e; 2014-06-17T06:51:42-07:00) Maven home: /opt/toolchain/apache-maven-3.2.2 Java version: 1.8.0_131, vendor: Oracle Corporation Java home: /opt/toolchain/sun-jdk-64bit-1.8.0.131/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.16.0-77-generic", arch: "amd64", family: "unix" $ mvn -Dtest=NoUnitTests -pl hadoop-client-modules/hadoop-client-check-invariants -pl hadoop-client-modules/hadoop-client-check-test-invariants -pl hadoop-client-modules/hadoop-client-integration-tests -am install > ../mvn.log {code} let me find a copy of maven 3.3.9 and see if we're in a nightmare scenario. > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch, > HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13059) S3a over-reacts to potentially transient network problems in its init() logic
[ https://issues.apache.org/jira/browse/HADOOP-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163533#comment-16163533 ] Steve Loughran commented on HADOOP-13059: - the new lambda-operator retry logic will retry in init(), but it'll give up there if things aren't working. that way, you're allowed a brief bit of failure with the same retry logic as other idempotent calls get. > S3a over-reacts to potentially transient network problems in its init() logic > - > > Key: HADOOP-13059 > URL: https://issues.apache.org/jira/browse/HADOOP-13059 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13059-001.patch > > > If there's a reason for s3a not being able to connect to AWS, then the > constructor fails, even if this is a potentially transient event. > This happens because the code to check for a bucket existing will relay the > exceptions. > The constructor should catch IOEs against the remote FS, downgrade to warn > and let the code continue; it may fail later, but it may also recover. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13059) S3a over-reacts to potentially transient network problems in its init() logic
[ https://issues.apache.org/jira/browse/HADOOP-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13059: Status: Open (was: Patch Available) > S3a over-reacts to potentially transient network problems in its init() logic > - > > Key: HADOOP-13059 > URL: https://issues.apache.org/jira/browse/HADOOP-13059 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13059-001.patch > > > If there's a reason for s3a not being able to connect to AWS, then the > constructor fails, even if this is a potentially transient event. > This happens because the code to check for a bucket existing will relay the > exceptions. > The constructor should catch IOEs against the remote FS, downgrade to warn > and let the code continue; it may fail later, but it may also recover. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163520#comment-16163520 ] Sean Busbey commented on HADOOP-14857: -- I'll rebase again and check before posting my updated patch that I think gets this going. > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14805) Upgrade to zstd 1.3.1
[ https://issues.apache.org/jira/browse/HADOOP-14805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163508#comment-16163508 ] Andrew Purtell commented on HADOOP-14805: - There might not be a direct dependency but would it be useful to add checking in the cmakefile that the zstandard libraries are version >= 1.3.1, for better licensing/compliance assurance? Would help those building custom packages which include zstd in the binaries. > Upgrade to zstd 1.3.1 > - > > Key: HADOOP-14805 > URL: https://issues.apache.org/jira/browse/HADOOP-14805 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0, 3.0.0-alpha2 >Reporter: Andrew Wang > > zstandard 1.3.1 has been dual licensed under GPL and BSD. This clears up the > concerns regarding the Facebook-specific PATENTS file. If we upgrade to > 1.3.1, we can bundle zstd with binary distributions of Hadoop. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
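The version check Andrew Purtell suggests could be expressed as a compile test against {{zstd.h}}, which defines {{ZSTD_VERSION_NUMBER}} as MAJOR*10000 + MINOR*100 + RELEASE (so 1.3.1 is 10301). A hedged sketch, assuming the build has already resolved the header location into a {{ZSTD_INCLUDE_DIR}} variable:

```cmake
# Hypothetical CMake guard requiring zstd >= 1.3.1, the first BSD/GPL
# dual-licensed release. Assumes ZSTD_INCLUDE_DIR was set by an earlier
# find_path() call; this is a sketch, not the actual Hadoop cmakefile.
include(CheckCSourceCompiles)
set(CMAKE_REQUIRED_INCLUDES "${ZSTD_INCLUDE_DIR}")
check_c_source_compiles("
  #include <zstd.h>
  #if ZSTD_VERSION_NUMBER < 10301   /* 1*10000 + 3*100 + 1 */
  #error zstd is older than 1.3.1
  #endif
  int main(void) { return 0; }
" ZSTD_VERSION_OK)
if(NOT ZSTD_VERSION_OK)
  message(FATAL_ERROR "zstd >= 1.3.1 is required to bundle zstd in binary distributions")
endif()
```

Failing at configure time this way would give packagers the licensing/compliance assurance before any binaries are produced.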
[jira] [Commented] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163506#comment-16163506 ] Hudson commented on HADOOP-14797: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12850 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12850/]) HADOOP-14797. Update re2j version to 1.1. (rchiang) (rchiang: rev f1d751bd05988fb77a85e4ad5b07026a4c5a86af) * (edit) LICENSE.txt * (edit) hadoop-project/pom.xml > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14797.001.patch, HADOOP-14797.002.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors
[ https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163487#comment-16163487 ] Andrew Wang commented on HADOOP-14835: -- [~aw], are you comfortable doing a review? Otherwise I can ask someone else. > mvn site build throws SAX errors > > > Key: HADOOP-14835 > URL: https://issues.apache.org/jira/browse/HADOOP-14835 > Project: Hadoop Common > Issue Type: Bug > Components: build, site >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Assignee: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14835.001.patch, HADOOP-14835.002.patch > > > Running mvn install site site:stage -DskipTests -Pdist,src > -Preleasedocs,docs results in a stack trace when run on a fresh .m2 > directory. It appears to be coming from the jdiff doclets in the annotations > code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14797: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.0. Thanks [~andrew.wang] for the review! > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14797.001.patch, HADOOP-14797.002.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14829) Path should support colon
[ https://issues.apache.org/jira/browse/HADOOP-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163458#comment-16163458 ] Yuliya Feldman commented on HADOOP-14829: - I have added a few tests to FileSystemContractBaseTest and updated HDFS and WebHDFS to skip those tests, as HDFS (and consequently WebHDFS) currently does not support a colon in any portion of the path, including file names. I did test with RawLocalFileSystem and S3(a,n). *Since I can't test Azure, Swift, or other file systems, it would be great to get feedback on whether they support colon and can run those tests.* > Path should support colon > - > > Key: HADOOP-14829 > URL: https://issues.apache.org/jira/browse/HADOOP-14829 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Yuliya Feldman > Attachments: Colon handling in hadoop Path.pdf > > > Object storages today support colon ( : ) in names, while Hadoop Path does > not support it. > This JIRA is to allow Path to support colon in URI -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
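Part of the difficulty here: Hadoop's Path is built on java.net.URI, and an unescaped colon before the first slash is parsed as a scheme delimiter, so a bare name like {{time:10.txt}} stops looking like a file name at all. A small sketch with plain java.net.URI (not Hadoop's Path class; the helper method names are made up):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ColonAmbiguity {

    /** Scheme java.net.URI extracts from the raw string, or null on parse failure. */
    static String schemeOf(String raw) {
        try {
            return new URI(raw).getScheme();
        } catch (URISyntaxException e) {
            return null;
        }
    }

    /** Path as seen after the multi-argument constructor quotes the string. */
    static String quotedPath(String path) {
        try {
            // scheme, host, path, fragment: illegal characters in path are escaped,
            // so a colon survives as path data rather than a scheme separator.
            return new URI(null, null, path, null).getPath();
        } catch (URISyntaxException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // A bare name containing a colon parses as scheme "time", not a file name.
        System.out.println(schemeOf("time:10.txt"));
        // Routed through the multi-argument constructor, the colon stays in the path.
        System.out.println(quotedPath("/tmp/time:10.txt"));
    }
}
```

This is why colon support has to be handled in Path's own parsing rather than falling out of URI for free.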
[jira] [Commented] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163447#comment-16163447 ] Hudson commented on HADOOP-14653: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12848 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12848/]) HADOOP-14653. Update joda-time version to 2.9.9. (rchiang) (rchiang: rev a6432ba5a177ec3d3a95fa79e313a9bbc531a1e7) * (edit) hadoop-project/pom.xml * (edit) NOTICE.txt > Update joda-time version to 2.9.9 > - > > Key: HADOOP-14653 > URL: https://issues.apache.org/jira/browse/HADOOP-14653 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14653.001.patch, HADOOP-14653.002.patch > > > Update the dependency > joda-time:joda-time:2.9.4 > to the latest (2.9.9). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163441#comment-16163441 ] Hadoop QA commented on HADOOP-14856: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14856 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886688/HADOOP-14856.002.patch | | Optional Tests | asflicense | | uname | Linux 38ea1c24235e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 80ee89b | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13268/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14856.001.patch, HADOOP-14856.002.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163434#comment-16163434 ] Andrew Wang commented on HADOOP-14857: -- I just pulled the latest trunk (af45cd1be63c3eee968f0c820e7d70230a7bf246) and reproduced the same error. {noformat} -> % mvn -version Apache Maven 3.3.9 Maven home: /usr/share/maven Java version: 1.8.0_131, vendor: Oracle Corporation Java home: /usr/lib/jvm/java-8-oracle/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "4.8.0-32-generic", arch: "amd64", family: "unix" {noformat} I assume I'm on dep plugin version 2.10, as specified in the pom.xml. > downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
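The {{NoClassDefFoundError}} in the quoted trace is the signature of a shaded artifact referencing a relocated class (note the {{org.apache.hadoop.shaded}} prefix) that was never bundled. A quick way to probe for this kind of gap is a reflective lookup; the probe below is illustrative, not part of the IT:

```java
public class ShadedClassProbe {

    /** True when the given binary class name is loadable from the current classpath. */
    static boolean present(String binaryName) {
        try {
            Class.forName(binaryName);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The relocated Mockito class the minicluster IT trips over.
        System.out.println(present("org.apache.hadoop.shaded.org.mockito.stubbing.Answer"));
        // Sanity check against a class that is always present.
        System.out.println(present("java.lang.String"));
    }
}
```

Running such a probe against the shaded client jars would surface the missing relocation before a MiniDFSCluster test ever starts.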
[jira] [Updated] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14856: Attachment: HADOOP-14856.002.patch * Rebased against trunk > Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14856.001.patch, HADOOP-14856.002.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163421#comment-16163421 ] Andrew Wang commented on HADOOP-14797: -- Thanks for checking Ray. +1 then, let's get this in. > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14797.001.patch, HADOOP-14797.002.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14648) Bump commons-configuration2 to 2.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163411#comment-16163411 ] Hudson commented on HADOOP-14648: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12847 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12847/]) HADOOP-14648. Bump commons-configuration2 to 2.1.1. (rchiang) (rchiang: rev 39818259c3b28a52b2cd7ee65b2422212d584664) * (edit) hadoop-project/pom.xml > Bump commons-configuration2 to 2.1.1 > > > Key: HADOOP-14648 > URL: https://issues.apache.org/jira/browse/HADOOP-14648 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14648.001.patch, HADOOP-14648.002.patch > > > Update the dependency > org.apache.commons: commons-configuration2: 2.1 > to the latest (2.1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14653: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Update joda-time version to 2.9.9 > - > > Key: HADOOP-14653 > URL: https://issues.apache.org/jira/browse/HADOOP-14653 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14653.001.patch, HADOOP-14653.002.patch > > > Update the dependency > joda-time:joda-time:2.9.4 > to the latest (2.9.9). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163403#comment-16163403 ] Ray Chiang commented on HADOOP-14653: - Committed to trunk and branch-3.0. Thanks [~andrew.wang] for the review! > Update joda-time version to 2.9.9 > - > > Key: HADOOP-14653 > URL: https://issues.apache.org/jira/browse/HADOOP-14653 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14653.001.patch, HADOOP-14653.002.patch > > > Update the dependency > joda-time:joda-time:2.9.4 > to the latest (2.9.9). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13055: Status: Patch Available (was: In Progress) > Implement linkMergeSlash for ViewFileSystem > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Affects Versions: 2.7.5 >Reporter: Zhe Zhang >Assignee: Manoj Govindassamy > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, > HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, > HADOOP-13055.05.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14796) Update json-simple version to 1.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163379#comment-16163379 ] Hudson commented on HADOOP-14796: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12846 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12846/]) HADOOP-14796. Update json-simple version to 1.1.1. (rchiang) (rchiang: rev af45cd1be63c3eee968f0c820e7d70230a7bf246) * (edit) hadoop-project/pom.xml > Update json-simple version to 1.1.1 > --- > > Key: HADOOP-14796 > URL: https://issues.apache.org/jira/browse/HADOOP-14796 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14796.001.patch > > > Update the dependency > com.googlecode.json-simple:json-simple:1.1 > to the latest (1.1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163376#comment-16163376 ] Ray Chiang commented on HADOOP-14653: - Unit test failure looks to be the same as HADOOP-13101. > Update joda-time version to 2.9.9 > - > > Key: HADOOP-14653 > URL: https://issues.apache.org/jira/browse/HADOOP-14653 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14653.001.patch, HADOOP-14653.002.patch > > > Update the dependency > joda-time:joda-time:2.9.4 > to the latest (2.9.9). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14648) Bump commons-configuration2 to 2.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14648: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.0. Thanks [~andrew.wang] for the review! > Bump commons-configuration2 to 2.1.1 > > > Key: HADOOP-14648 > URL: https://issues.apache.org/jira/browse/HADOOP-14648 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14648.001.patch, HADOOP-14648.002.patch > > > Update the dependency > org.apache.commons: commons-configuration2: 2.1 > to the latest (2.1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HADOOP-13600: -- Target Version/s: 3.1.0 Status: Patch Available (was: Open) > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducable to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HADOOP-13600: -- Status: Open (was: Patch Available) > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducable to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163366#comment-16163366 ] Sean Busbey commented on HADOOP-14857: -- Okay, updating the dependency plugin (and maybe my mvn version) got things to a better state: {code} [INFO] [INFO] Building Apache Hadoop Client Test Minicluster 3.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hadoop-client-minicluster --- [INFO] org.apache.hadoop:hadoop-client-minicluster:jar:3.1.0-SNAPSHOT [INFO] +- org.apache.hadoop:hadoop-client-api:jar:3.1.0-SNAPSHOT:runtime [INFO] +- org.apache.hadoop:hadoop-client-runtime:jar:3.1.0-SNAPSHOT:runtime [INFO] | +- org.apache.htrace:htrace-core4:jar:4.1.0-incubating:runtime [INFO] | +- org.slf4j:slf4j-api:jar:1.7.25:runtime [INFO] | \- commons-logging:commons-logging:jar:1.1.3:runtime [INFO] +- junit:junit:jar:4.11:runtime [INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:runtime [INFO] +- org.apache.hadoop:hadoop-annotations:jar:3.1.0-SNAPSHOT:compile (optional) [INFO] +- org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.1.0-SNAPSHOT:runtime (optional) [INFO] +- org.apache.hadoop:hadoop-common:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) [INFO] +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.1.0-SNAPSHOT:compile (optional) [INFO] {code} The timeline server being optional makes sense, since I was trying to keep it out of the way at first due to bringing in HBase. I don't get why the others are present, but optional means they don't need to block this fix IMHO. 
> downstream client artifact IT fails > --- > > Key: HADOOP-14857 > URL: https://issues.apache.org/jira/browse/HADOOP-14857 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14857.1.patch, HADOOP-18457.0.patch > > > HADOOP-11804 added an IT to make sure downstreamers can use our client > artifacts post-shading. it is currently broken: > {code} useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 6.776 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! > java.lang.NoClassDefFoundError: > org/apache/hadoop/shaded/org/mockito/stubbing/Answer > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:494) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453) > at > org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74) > useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: > 2.954 sec <<< ERROR! 
> java.lang.NullPointerException: null > at > org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80) > {code} > (edited after I fixed a downed loopback device) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163364#comment-16163364 ] Hadoop QA commented on HADOOP-13600: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 11s{color} | {color:red} HADOOP-13600 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-13600 | | GITHUB PR | https://github.com/apache/hadoop/pull/167 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13264/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducable to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163348#comment-16163348 ] Sahil Takiar commented on HADOOP-13600: --- [~ste...@apache.org] thanks for taking a look! * Thanks for the tip with the S3Guard testing; I re-ran the tests with the local dynamodb enabled, and they pass * I made some modifications so that all directories are deleted after the files, and they are deleted in the order returned by {{#listFilesAndEmptyDirectories}} * I tried to move some stuff out into separate classes to decrease the # of lines * I created a new class called {{LazyTransferManager}} that lazily initializes the {{TransferManager}} > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducable to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
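The parallelization being discussed: submit each per-file COPY to an executor and wait on all the futures, so total wall time approaches the longest single copy instead of the sum. A minimal, self-contained sketch of that shape (the executor stands in for the S3 TransferManager, and the "copy" is simulated; none of this is the actual patch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {

    /** "Copies" every file concurrently, preserving input order in the result. */
    static List<String> copyAll(List<String> files, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // Launch all copies first so they overlap...
            List<Future<String>> futures = new ArrayList<>();
            for (String f : files) {
                futures.add(pool.submit(() -> f + ".copied"));  // stand-in for one S3 COPY
            }
            // ...then collect, surfacing the first failed copy as an exception.
            List<String> out = new ArrayList<>();
            for (Future<String> fut : futures) {
                try {
                    out.add(fut.get());
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException("copy failed", e);
                }
            }
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(copyAll(List.of("a", "b", "c"), 4));
    }
}
```

Deleting directories only after all file futures complete, as the comment above describes, keeps a partially failed rename from leaving orphaned children.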
[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163347#comment-16163347 ] Hudson commented on HADOOP-14799: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12845 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12845/]) HADOOP-14799. Update nimbus-jose-jwt to 4.41.1. (rchiang) (rchiang: rev 556812c179aa094c21acf610439a8d69fe6420ab) * (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestFetcher.java * (delete) hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java * (add) hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthenticationHandler.java * (edit) hadoop-project/pom.xml > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, > HADOOP-14799.003.patch > > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HADOOP-13600: -- Status: Patch Available (was: Open) > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducable to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14796) Update json-simple version to 1.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ray Chiang updated HADOOP-14796:
--------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0-beta1
          Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0. Thanks [~andrew.wang] for the review!

> Update json-simple version to 1.1.1
> -----------------------------------
>
>                 Key: HADOOP-14796
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14796
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ray Chiang
>            Assignee: Ray Chiang
>             Fix For: 3.0.0-beta1
>
>         Attachments: HADOOP-14796.001.patch
>
>
> Update the dependency
> com.googlecode.json-simple:json-simple:1.1
> to the latest (1.1.1).
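Dependency bumps like this one (and the sibling sub-tasks in this digest) amount to a one-line version change in the shared POM. A sketch of what such a change looks like, assuming the artifact is managed in `hadoop-project/pom.xml` as the commit file lists suggest; exact placement and any `dependencyManagement` wrapping are not confirmed by these messages:

```xml
<!-- Illustrative fragment, not the literal patch -->
<dependency>
  <groupId>com.googlecode.json-simple</groupId>
  <artifactId>json-simple</artifactId>
  <version>1.1.1</version> <!-- previously 1.1 -->
</dependency>
```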
[jira] [Updated] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar updated HADOOP-13600:
----------------------------------
    Attachment: HADOOP-13600.001.patch

> S3a rename() to copy files in a directory in parallel
> -----------------------------------------------------
>
>                 Key: HADOOP-13600
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13600
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.3
>            Reporter: Steve Loughran
>            Assignee: Sahil Takiar
>         Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request
> O(files * data). If the copy operations were launched in parallel, the
> duration of the copy may be reducible to the duration of the longest copy.
> For a directory with many files, this will be significant.
[jira] [Commented] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163319#comment-16163319 ]

Hudson commented on HADOOP-14798:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12844 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12844/])
HADOOP-14798. Update sshd-core and related mina-core library versions. (rchiang: rev ad74691807c51713bd9071e731d4481ee212567e)
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
* (edit) hadoop-project/pom.xml

> Update sshd-core and related mina-core library versions
> -------------------------------------------------------
>
>                 Key: HADOOP-14798
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14798
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ray Chiang
>            Assignee: Ray Chiang
>             Fix For: 3.0.0-beta1
>
>         Attachments: HADOOP-14798.001.patch
>
>
> Update the dependencies
> org.apache.mina:mina-core:2.0.0-M5
> org.apache.sshd:sshd-core:0.14.0
> mina-core can be updated to 2.0.16 and sshd-core to 1.6.0
[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ray Chiang updated HADOOP-14799:
--------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0-beta1
          Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0. Thanks [~ste...@apache.org] and [~andrew.wang] for the reviews!

> Update nimbus-jose-jwt to 4.41.1
> --------------------------------
>
>                 Key: HADOOP-14799
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14799
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ray Chiang
>            Assignee: Ray Chiang
>             Fix For: 3.0.0-beta1
>
>         Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)
[jira] [Commented] (HADOOP-14862) Metrics for AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163308#comment-16163308 ]

John Zhuge commented on HADOOP-14862:
-------------------------------------

Thanks [~liuml07] for the valuable input!

> Metrics for AdlFileSystem
> -------------------------
>
>                 Key: HADOOP-14862
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14862
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/adl
>    Affects Versions: 2.8.0
>            Reporter: John Zhuge
>
> Add a Metrics2 source {{AdlFileSystemInstrumentation}} for {{AdlFileSystem}}.
> Consider per-thread statistics data if possible. Atomic variables are not
> totally free in multi-core arch. Don't think Java can do per-cpu data
> structure.
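The contention concern raised in the description (a shared atomic counter is not free across cores, and Java has no true per-CPU data structure) can be approximated with `java.util.concurrent.atomic.LongAdder`, which stripes updates across internal cells and sums them on read. The class below is a hypothetical sketch with invented names; it is not the Hadoop Metrics2 API or the proposed `AdlFileSystemInstrumentation`:

```java
import java.util.concurrent.atomic.LongAdder;

// Low-contention counters for filesystem instrumentation (illustrative only).
// LongAdder spreads increments across cells to avoid the cache-line
// ping-pong a single hot AtomicLong would see under many threads.
public class AdlCounters {
    private final LongAdder readOps = new LongAdder();
    private final LongAdder bytesRead = new LongAdder();

    // Called on each read; cheap even under heavy multi-threaded load.
    public void recordRead(long bytes) {
        readOps.increment();
        bytesRead.add(bytes);
    }

    // sum() reconciles the striped cells; intended for the metrics
    // scrape path, which runs far less often than the update path.
    public long readOps() { return readOps.sum(); }
    public long bytesRead() { return bytesRead.sum(); }
}
```

The trade-off is that `sum()` is not a point-in-time snapshot under concurrent updates, which is usually acceptable for monitoring counters.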