[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124630#comment-17124630 ] Akira Ajisaka commented on HADOOP-17056: bq. +1 mvnsite 17m 59s the patch passed The precommit job looks good > shelldoc fails in hadoop-common > --- > > Key: HADOOP-17056 > URL: https://issues.apache.org/jira/browse/HADOOP-17056 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: 2040.02.patch, 2040.03.patch, 2040.patch, > HADOOP-17056-test-01.patch, HADOOP-17056-test-02.patch, > HADOOP-17056-test-03.patch > > > {noformat} > [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common --- > > ERROR: yetus-dl: gpg unable to import > > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS > > [INFO] > > > > [INFO] BUILD FAILURE > > [INFO] > > > > [INFO] Total time: 9.377 s > > [INFO] Finished at: 2020-05-28T17:37:41Z > > [INFO] > > > > [ERROR] Failed to execute goal > > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project > > hadoop-common: Command execution failed. Process exited with an error: 1 > > (Exit value: 1) -> [Help 1] > > [ERROR] > > [ERROR] To see the full stack trace of the errors, re-run Maven with the > > -e switch. > > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> > [ERROR] > > [ERROR] For more information about the errors and possible solutions, > > please read the following articles: > > [ERROR] [Help 1] > > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {noformat} > * > https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt > * > https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt > * > https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124629#comment-17124629 ] Akira Ajisaka commented on HADOOP-17056: Deleted the output of #16962 precommit job.
[jira] [Issue Comment Deleted] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17056: --- Comment: was deleted (was: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 7s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 11m 10s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} hadolint {color} | {color:green} 0m 5s{color} | {color:green} There were no new hadolint issues. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 6m 21s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 6s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 39s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17056 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004581/HADOOP-17056-test-03.patch | | Optional Tests | dupname asflicense shellcheck shelldocs hadolint mvnsite unit | | uname | Linux 76c888767720 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/branch-mvnsite-root.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/patch-mvnsite-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/testReport/ | | Max. process+thread count | 414 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/console | | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. )
[jira] [Assigned] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-17056: -- Assignee: Akira Ajisaka
[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes
vinayakumarb commented on a change in pull request #2026: URL: https://github.com/apache/hadoop/pull/2026#discussion_r434322138 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java ## @@ -1295,6 +1296,22 @@ static URI trimUri(URI uri) { */ public static void addPBProtocol(Configuration conf, Class protocol, BlockingService service, RPC.Server server) throws IOException { +RPC.setProtocolEngine(conf, protocol, ProtobufRpcEngine2.class); +server.addProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocol, service); + } + + /** + * Add protobuf based protocol to the {@link org.apache.hadoop.ipc.RPC.Server} + * @param conf configuration + * @param protocol Protocol interface + * @param service service that implements the protocol + * @param server RPC server to which the protocol implementation is + * added to + * @throws IOException + */ + @Deprecated + public static void addPBProtocol(Configuration conf, Class protocol, Review comment: But I feel we can still keep the method, marked as deprecated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
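The suggestion above (keep the old overload but mark it deprecated so downstream callers keep compiling) can be sketched as follows. This is a minimal illustration, not Hadoop's real API: the simplified signatures and the name addPBProtocolLegacy are assumptions made for the example; only the delegation-with-@Deprecated pattern mirrors the review comment.

```java
// Hypothetical sketch of keeping a deprecated overload that delegates to
// the new engine-aware method, so existing callers continue to work.
public class DeprecatedOverloadSketch {

  // New method: registers the protocol with the (shaded-protobuf) engine.
  static String addPBProtocol(String protocol, String engine) {
    return protocol + " registered via " + engine;
  }

  // Old method kept for compatibility: callers get a deprecation warning
  // at compile time, but their code keeps working unchanged.
  @Deprecated
  static String addPBProtocolLegacy(String protocol) {
    return addPBProtocol(protocol, "ProtobufRpcEngine2");
  }

  public static void main(String[] args) {
    System.out.println(addPBProtocolLegacy("ClientNamenodeProtocol"));
  }
}
```

The design choice here is standard: deprecate rather than delete, and make the old entry point a thin wrapper so there is only one real implementation to maintain.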
[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes
vinayakumarb commented on a change in pull request #2026: URL: https://github.com/apache/hadoop/pull/2026#discussion_r434320866 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngineCallback.java ## @@ -18,7 +18,7 @@ package org.apache.hadoop.ipc; -import org.apache.hadoop.thirdparty.protobuf.Message; +import com.google.protobuf.Message; public interface ProtobufRpcEngineCallback { Review comment: sure ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java ## @@ -433,7 +438,15 @@ public Server(Class protocolClass, Object protocolImpl, registerProtocolAndImpl(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocolClass, protocolImpl); } - + +@Override +protected RpcInvoker getServerRpcInvoker(RpcKind rpcKind) { + if (rpcKind == RpcKind.RPC_PROTOCOL_BUFFER) { +return RPC_INVOKER; + } + return super.getServerRpcInvoker(rpcKind); +} + Review comment: yes
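The getServerRpcInvoker override in the diff above follows a common dispatch pattern: a subclass special-cases one RPC kind and defers everything else to the superclass. A self-contained sketch of the same pattern, with the class names and string "invokers" simplified for illustration (they are not Hadoop's real types):

```java
// Hedged sketch of the dispatch pattern in the diff above: the protobuf
// server returns its own invoker for RPC_PROTOCOL_BUFFER calls and falls
// back to the base server's lookup for every other kind.
public class RpcInvokerDispatchSketch {
  enum RpcKind { RPC_BUILTIN, RPC_PROTOCOL_BUFFER }

  static class BaseServer {
    // Generic lookup used for all kinds the subclass does not handle.
    String getServerRpcInvoker(RpcKind kind) { return "generic-invoker"; }
  }

  static class ProtobufServer extends BaseServer {
    static final String RPC_INVOKER = "protobuf-invoker";

    @Override
    String getServerRpcInvoker(RpcKind kind) {
      if (kind == RpcKind.RPC_PROTOCOL_BUFFER) {
        return RPC_INVOKER; // engine-specific invoker for protobuf calls
      }
      return super.getServerRpcInvoker(kind); // defer to the base lookup
    }
  }

  public static void main(String[] args) {
    BaseServer s = new ProtobufServer();
    System.out.println(s.getServerRpcInvoker(RpcKind.RPC_PROTOCOL_BUFFER));
    System.out.println(s.getServerRpcInvoker(RpcKind.RPC_BUILTIN));
  }
}
```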
[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2019: HADOOP-17029. Return correct permission and owner for listing on internal directories in ViewFs
abhishekdas99 commented on a change in pull request #2019: URL: https://github.com/apache/hadoop/pull/2019#discussion_r434320447 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewfsFileStatus.java ## @@ -56,38 +69,71 @@ public void testFileStatusSerialziation() File infile = new File(TEST_DIR, testfilename); final byte[] content = "dingos".getBytes(); -FileOutputStream fos = null; -try { - fos = new FileOutputStream(infile); +try (FileOutputStream fos = new FileOutputStream(infile)) { fos.write(content); -} finally { - if (fos != null) { -fos.close(); - } } assertEquals((long)content.length, infile.length()); Configuration conf = new Configuration(); ConfigUtil.addLink(conf, "/foo/bar/baz", TEST_DIR.toURI()); -FileSystem vfs = FileSystem.get(FsConstants.VIEWFS_URI, conf); -assertEquals(ViewFileSystem.class, vfs.getClass()); -Path path = new Path("/foo/bar/baz", testfilename); -FileStatus stat = vfs.getFileStatus(path); -assertEquals(content.length, stat.getLen()); -ContractTestUtils.assertNotErasureCoded(vfs, path); -assertTrue(path + " should have erasure coding unset in " + -"FileStatus#toString(): " + stat, -stat.toString().contains("isErasureCoded=false")); - -// check serialization/deserialization -DataOutputBuffer dob = new DataOutputBuffer(); -stat.write(dob); -DataInputBuffer dib = new DataInputBuffer(); -dib.reset(dob.getData(), 0, dob.getLength()); -FileStatus deSer = new FileStatus(); -deSer.readFields(dib); -assertEquals(content.length, deSer.getLen()); -assertFalse(deSer.isErasureCoded()); +try (FileSystem vfs = FileSystem.get(FsConstants.VIEWFS_URI, conf)) { + assertEquals(ViewFileSystem.class, vfs.getClass()); + Path path = new Path("/foo/bar/baz", testfilename); + FileStatus stat = vfs.getFileStatus(path); + assertEquals(content.length, stat.getLen()); + ContractTestUtils.assertNotErasureCoded(vfs, path); + assertTrue(path + " should have erasure coding unset in " + + "FileStatus#toString(): " + 
stat, + stat.toString().contains("isErasureCoded=false")); + + // check serialization/deserialization + DataOutputBuffer dob = new DataOutputBuffer(); + stat.write(dob); + DataInputBuffer dib = new DataInputBuffer(); + dib.reset(dob.getData(), 0, dob.getLength()); + FileStatus deSer = new FileStatus(); + deSer.readFields(dib); + assertEquals(content.length, deSer.getLen()); + assertFalse(deSer.isErasureCoded()); +} + } + + @Test + public void testListStatusACL() + throws IOException, URISyntaxException { +String testfilename = "testFileACL"; +String childDirectoryName = "testDirectoryACL"; +TEST_DIR.mkdirs(); +File infile = new File(TEST_DIR, testfilename); +final byte[] content = "dingos".getBytes(); + +try (FileOutputStream fos = new FileOutputStream(infile)) { + fos.write(content); +} +assertEquals((long)content.length, infile.length()); +File childDir = new File(TEST_DIR, childDirectoryName); +childDir.mkdirs(); + +Configuration conf = new Configuration(); +ConfigUtil.addLink(conf, "/file", infile.toURI()); +ConfigUtil.addLink(conf, "/dir", childDir.toURI()); + +try (FileSystem vfs = FileSystem.get(FsConstants.VIEWFS_URI, conf)) { + assertEquals(ViewFileSystem.class, vfs.getClass()); + FileStatus[] statuses = vfs.listStatus(new Path("/")); + + FileSystem localFs = FileSystem.getLocal(conf); + FileStatus fileStat = localFs.getFileStatus(new Path(infile.getPath())); + FileStatus dirStat = localFs.getFileStatus(new Path(childDir.getPath())); + + for (FileStatus status : statuses) { +if (status.getPath().getName().equals("file")) { + assertEquals(fileStat.getPermission(), status.getPermission()); +} else { Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
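The refactor in the diff above replaces an explicit try/finally close with try-with-resources, which closes the stream automatically even when the write throws. A minimal, self-contained illustration of the same change (using an in-memory stream rather than the test's real file):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal sketch of the try-with-resources refactor shown in the diff:
// the stream is closed on every exit path without a finally block.
public class TryWithResourcesSketch {
  static byte[] writeContent(byte[] content) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    // try-with-resources: out.close() runs automatically when the block
    // exits, normally or via an exception from write().
    try (OutputStream out = sink) {
      out.write(content);
    }
    return sink.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] data = writeContent("dingos".getBytes());
    System.out.println(data.length); // 6
  }
}
```

Wrapping the FileSystem in try-with-resources (as the test now does for `vfs`) applies the same idea to any AutoCloseable, not just streams.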
[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2019: HADOOP-17029. Return correct permission and owner for listing on internal directories in ViewFs
abhishekdas99 commented on a change in pull request #2019: URL: https://github.com/apache/hadoop/pull/2019#discussion_r434320391 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java ## @@ -1211,13 +1211,29 @@ public FileStatus getFileStatus(Path f) throws IOException { INode inode = iEntry.getValue(); if (inode.isLink()) { INodeLink link = (INodeLink) inode; - - result[i++] = new FileStatus(0, false, 0, 0, -creationTime, creationTime, PERMISSION_555, -ugi.getShortUserName(), ugi.getPrimaryGroupName(), -link.getTargetLink(), -new Path(inode.fullPath).makeQualified( -myUri, null)); + // For MERGE or NFLY links, the first target link is considered + // for fetching the FileStatus with an assumption that the permission + // and the owner will be the same for all the target directories. + Path linkedPath = new Path(link.targetDirLinkList[0].toString()); Review comment: In the new change, I am getting the filesystem via `link.getTargetFileSystem()` and calling `getFileStatus` on the slash path.
[jira] [Comment Edited] (HADOOP-17029) ViewFS does not return correct user/group and ACL
[ https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124612#comment-17124612 ] Abhishek Das edited comment on HADOOP-17029 at 6/3/20, 5:37 AM: Sorry [~umamaheswararao] for responding late. I will update the PR as soon as possible. Thanks was (Author: abhishekd): Sorry [~umamaheswararao] for responding late. I will update the PR as soon as possibly. Thanks > ViewFS does not return correct user/group and ACL > - > > Key: HADOOP-17029 > URL: https://issues.apache.org/jira/browse/HADOOP-17029 > Project: Hadoop Common > Issue Type: Bug > Components: fs, viewfs >Reporter: Abhishek Das >Assignee: Abhishek Das >Priority: Major > > When doing ls on a mount point parent, the returned user/group and ACL is > incorrect. It always shows the user and group as the current user, with some > arbitrary ACL, which could mislead any application depending on this API. > cc [~cliang] [~virajith]
[jira] [Commented] (HADOOP-17029) ViewFS does not return correct user/group and ACL
[ https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124612#comment-17124612 ] Abhishek Das commented on HADOOP-17029: --- Sorry [~umamaheswararao] for responding late. I will update the PR as soon as possibly. Thanks
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124595#comment-17124595 ] Xiaoqiao He commented on HADOOP-16254: -- [~John Smith] Sure, you are correct. `proxyHostname` should be carried in the RPC header. Please reference v002. > Add proxy address in IPC connection > --- > > Key: HADOOP-16254 > URL: https://issues.apache.org/jira/browse/HADOOP-16254 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HADOOP-16254.001.patch, HADOOP-16254.002.patch, > HADOOP-16254.004.patch > > > In order to support data locality for RBF, we need to add a new field for the > client hostname in the RPC headers of Router protocol calls. > clientHostname represents the hostname of the client and is forwarded by the > Router to the NameNode to support data locality. See the [RBF Data Locality > Design|https://issues.apache.org/jira/secure/attachment/12965092/RBF%20Data%20Locality%20Design.pdf] > in HDFS-13248 and the [maillist > vote|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201904.mbox/%3CCAF3Ajax7hGxvowg4K_HVTZeDqC5H=3bfb7mv5sz5mgvadhv...@mail.gmail.com%3E].
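The mechanism discussed in HADOOP-16254 — the Router forwarding the real client's hostname in the RPC header so the NameNode can make locality decisions — can be sketched as below. The class and field names (RpcRequestHeader, clientHostname, effectiveClientHost) are illustrative assumptions for this example, not the actual Hadoop RpcRequestHeaderProto change.

```java
// Hedged sketch: an RPC request header carrying an optional forwarded
// client hostname. A proxy (the Router) sets it; the server (NameNode)
// prefers it over the connection's remote address when present.
public class RpcHeaderSketch {
  static final class RpcRequestHeader {
    private final String callId;
    private final String clientHostname; // null when the call is not proxied

    RpcRequestHeader(String callId, String clientHostname) {
      this.callId = callId;
      this.clientHostname = clientHostname;
    }

    // Fall back to the connection host when no hostname was forwarded.
    String effectiveClientHost(String connectionHost) {
      return clientHostname != null ? clientHostname : connectionHost;
    }
  }

  public static void main(String[] args) {
    // Proxied call: the Router forwards the real client's hostname.
    RpcRequestHeader viaRouter = new RpcRequestHeader("call-1", "client-host");
    System.out.println(viaRouter.effectiveClientHost("router-host"));

    // Direct call: no forwarded field, so the connection host is used.
    RpcRequestHeader direct = new RpcRequestHeader("call-2", null);
    System.out.println(direct.effectiveClientHost("client-host"));
  }
}
```

Keeping the field optional is what makes the change backward compatible: clients that never set it behave exactly as before.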
[GitHub] [hadoop] mehakmeet commented on a change in pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
mehakmeet commented on a change in pull request #1991: URL: https://github.com/apache/hadoop/pull/1991#discussion_r434262848 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java ## @@ -202,16 +182,13 @@ public void testOpenAppendRenameExists() throws IOException { fs.create(createFilePath); fs.open(createFilePath); fs.append(createFilePath); -fs.rename(createFilePath, destCreateFilePath); +assertTrue(fs.rename(createFilePath, destCreateFilePath)); Review comment: I'll make sure to use that going forward.
[GitHub] [hadoop] billonahill commented on pull request #2044: YARN-10302. Make FairScheduler node comparator configurable
billonahill commented on pull request #2044: URL: https://github.com/apache/hadoop/pull/2044#issuecomment-637882580 Regarding the test4tests failure, this patch preserves backward compatibility and affects lines of code already tested, hence no new tests are added. It seems like all the failing unit tests are getting OOM errors.
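Making a scheduler's node comparator configurable (the subject of YARN-10302) usually means resolving a Comparator implementation from a configuration key by reflection, with a default when the key is unset. The sketch below shows that pattern in miniature; the key name, class names, and the Integer stand-in for a node are all hypothetical, not the actual YARN-10302 patch.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of a configurable comparator: the implementation class is
// looked up from a config map and instantiated reflectively, falling back
// to a default ordering when nothing is configured.
public class ConfigurableComparatorSketch {
  static final String KEY = "fairscheduler.node.comparator.class"; // hypothetical key

  // Default ordering: larger value (e.g. more available resource) first.
  public static class DefaultNodeComparator implements Comparator<Integer> {
    public int compare(Integer a, Integer b) { return b.compareTo(a); }
  }

  @SuppressWarnings("unchecked")
  static Comparator<Integer> loadComparator(Map<String, String> conf)
      throws ReflectiveOperationException {
    String cls = conf.getOrDefault(KEY, DefaultNodeComparator.class.getName());
    return (Comparator<Integer>)
        Class.forName(cls).getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    // No key configured: the default comparator is used.
    Comparator<Integer> cmp = loadComparator(new HashMap<>());
    System.out.println(cmp.compare(1, 2) > 0); // true: 2 sorts before 1
  }
}
```

This shape preserves backward compatibility in exactly the sense the comment describes: with no configuration present, the default comparator reproduces the existing behavior.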
[GitHub] [hadoop] hadoop-yetus commented on pull request #2044: YARN-10302. Make FairScheduler node comparator configurable
hadoop-yetus commented on pull request #2044: URL: https://github.com/apache/hadoop/pull/2044#issuecomment-637879984 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 22m 45s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 26s | trunk passed | | +1 :green_heart: | compile | 0m 47s | trunk passed | | +1 :green_heart: | checkstyle | 0m 39s | trunk passed | | +1 :green_heart: | mvnsite | 0m 52s | trunk passed | | +1 :green_heart: | shadedclient | 15m 19s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 35s | trunk passed | | +0 :ok: | spotbugs | 1m 42s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 39s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | checkstyle | 0m 31s | the patch passed | | +1 :green_heart: | mvnsite | 0m 46s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 16m 7s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 32s | the patch passed | | +1 :green_heart: | findbugs | 1m 56s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 55m 5s | hadoop-yarn-server-resourcemanager in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 34s | The patch does not generate ASF License warnings. | | | | 139m 59s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.policy.TestFairOrderingPolicy | | | hadoop.yarn.server.resourcemanager.TestRMAdminService | | | hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerHealth | | | hadoop.yarn.server.resourcemanager.TestAppManager | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2044/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2044 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b4953fc558fe 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 6288e15118f | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2044/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2044/2/testReport/ | | Max. process+thread count | 810 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2044/2/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hadoop] hadoop-yetus commented on pull request #2047: HDFS-15383 Add support for router delegation token without watch
hadoop-yetus commented on pull request #2047: URL: https://github.com/apache/hadoop/pull/2047#issuecomment-637873886

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 27s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 31s | trunk passed |
| +1 :green_heart: | compile | 23m 7s | trunk passed |
| +1 :green_heart: | checkstyle | 3m 10s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 32s | trunk passed |
| +1 :green_heart: | shadedclient | 24m 8s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 50s | trunk passed |
| +0 :ok: | spotbugs | 1m 31s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 4m 2s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 36s | the patch passed |
| +1 :green_heart: | compile | 22m 19s | the patch passed |
| +1 :green_heart: | javac | 22m 19s | the patch passed |
| -0 :warning: | checkstyle | 3m 11s | root: The patch generated 7 new + 80 unchanged - 0 fixed = 87 total (was 80) |
| +1 :green_heart: | mvnsite | 2m 24s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 57s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 34s | the patch passed |
| +1 :green_heart: | findbugs | 3m 41s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 9m 13s | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 9m 52s | hadoop-hdfs-rbf in the patch passed. |
| -1 :x: | asflicense | 0m 46s | The patch generated 1 ASF License warnings. |
| | | 155m 24s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2047 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 75faf044bd19 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6288e15118f |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/testReport/ |
| asflicense | https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/artifact/out/patch-asflicense-problems.txt |
| Max. process+thread count | 3232 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2047/1/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2048: HADOOP-17061. Fix broken links in AWS documentation.
hadoop-yetus commented on pull request #2048: URL: https://github.com/apache/hadoop/pull/2048#issuecomment-637849344

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 41s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 24m 42s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | trunk passed |
| +1 :green_heart: | shadedclient | 41m 49s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 37s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 38s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 16m 38s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 63m 45s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2048/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2048 |
| Optional Tests | dupname asflicense mvnsite markdownlint |
| uname | Linux d5a2d3548be3 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6288e15118f |
| Max. process+thread count | 342 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2048/1/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Created] (HADOOP-17061) Fix broken links in AWS documentation
Jeremie Piotte created HADOOP-17061: --- Summary: Fix broken links in AWS documentation Key: HADOOP-17061 URL: https://issues.apache.org/jira/browse/HADOOP-17061 Project: Hadoop Common Issue Type: Bug Reporter: Jeremie Piotte Broken links were found on the following page: [hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md|https://github.com/apache/hadoop/compare/trunk...piotte13:trunk#diff-3043a79259e7448fccbf133c3612b700] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] piotte13 opened a new pull request #2048: HADOOP-17061. Fix broken links in AWS documentation.
piotte13 opened a new pull request #2048: URL: https://github.com/apache/hadoop/pull/2048
[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124316#comment-17124316 ] Fengnan Li commented on HADOOP-16828: - [~xyao] Thanks very much for the review and commit! As a continuous improvement, I created this https://issues.apache.org/jira/browse/HDFS-15383 and attached my patch. Internally we have been running this for 1.5 months and it is performing well. > Zookeeper Delegation Token Manager fetch sequence number by batch > - > > Key: HADOOP-16828 > URL: https://issues.apache.org/jira/browse/HADOOP-16828 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen > Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, > Screen Shot 2020-01-25 at 2.25.24 PM.png > > > Currently in ZKDelegationTokenSecretManager.java the seq number is > incremented by 1 each time there is a request for creating new token. This > will need to send traffic to Zookeeper server. With multiple managers > running, there is data contention going on. Also, since the current logic of > incrementing is using tryAndSet which is optimistic concurrency control > without locking. This data contention is having performance degradation when > the secret manager are under volume of traffic. > The change here is to fetching this seq number by batch instead of 1, which > will reduce the traffic sent to ZK and make many operations inside ZK secret > manager's memory. > After putting this into production we saw huge improvement to the RPC > processing latency of get delegationtoken calls. Also, since ZK takes less > traffic in this way. Other write calls, like renew and cancel delegation > tokens are benefiting from this change.
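The batching idea described in HADOOP-16828 can be sketched in a few lines. This is a minimal, self-contained illustration only, not the actual ZKDelegationTokenSecretManager code: the class and method names are hypothetical, and an in-process AtomicLong stands in for the shared ZooKeeper counter that the real patch updates via an optimistic tryAndSet.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of batched sequence-number allocation. Instead of one
 * compare-and-set round trip to the shared counter per token, each
 * manager reserves a whole range [start, start + batchSize) and
 * hands out numbers locally until the range is exhausted.
 */
public class BatchSequence {
    /** Stand-in for the shared ZooKeeper counter. */
    private final AtomicLong remoteCounter;
    private final long batchSize;
    private long next;           // next local number to hand out
    private long limit;          // exclusive end of the reserved range
    private long remoteFetches;  // how many round trips we made

    public BatchSequence(AtomicLong remoteCounter, long batchSize) {
        this.remoteCounter = remoteCounter;
        this.batchSize = batchSize;
        this.next = 0;
        this.limit = 0;  // empty range forces a fetch on first use
    }

    /** Hand out the next sequence number, reserving a new batch if needed. */
    public synchronized long next() {
        if (next >= limit) {
            // One optimistic CAS loop per *batch*, not per token.
            long start;
            do {
                start = remoteCounter.get();
            } while (!remoteCounter.compareAndSet(start, start + batchSize));
            next = start;
            limit = start + batchSize;
            remoteFetches++;
        }
        return next++;
    }

    public synchronized long getRemoteFetches() {
        return remoteFetches;
    }

    public static void main(String[] args) {
        AtomicLong zk = new AtomicLong(0);
        BatchSequence seq = new BatchSequence(zk, 100);
        for (int i = 0; i < 250; i++) {
            seq.next();
        }
        // 250 tokens were issued with only 3 round trips to the counter.
        System.out.println("remote fetches: " + seq.getRemoteFetches());
    }
}
```

With a batch size of 100, issuing 250 tokens touches the shared counter only three times, which is the traffic reduction the JIRA describes; contention on the optimistic update shrinks proportionally.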
[GitHub] [hadoop] fengnanli opened a new pull request #2047: Add support for router delegation token without watch
fengnanli opened a new pull request #2047: URL: https://github.com/apache/hadoop/pull/2047

Summary: This patch targets improving the router's performance for delegation-token-related operations. It achieves this by removing the router's watchers on tokens because, in our experience, the huge number of watches inside Zookeeper degrades Zookeeper's performance pretty hard. The current limit is about 1.2-1.5 million.

Specific changes:
1. Explicitly disable the watcher on tokens by not using PathChildrenCache or any Curator-provided cache at all.
2. Schedule a sync task between the router and Zookeeper at a configurable interval so that routers sync their token information through Zookeeper.
3. For a token change, always change the local cache first instead of depending on callbacks of the watch event as when using PathChildrenCache.

The above three points try to make the router token cache behave as close as possible to the case when PathChildrenCache is used. The point below handles one corner case.

4. Before the token remover (a background thread) removes a token from Zookeeper, the router will first make sure this token hasn't been renewed by other peers. This matters only when the sync somehow failed for this token, so that the router's local cache doesn't have the correct renewal date (expiry date).

Test Plan:
1. Add several unit tests covering all common use cases.
2. Deployed on two machines and performed all tests.
3. Pressure testing: create a production-scale number of tokens (100k) and monitor the sync latency.

## NOTICE
Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request so that it starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
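Points 2-4 of the PR description above can be sketched as follows. This is a hedged, self-contained illustration with hypothetical names (`WatchFreeTokenCache`, `peerRenew`, and so on are not from the patch); the actual change talks to ZooKeeper through Curator, while here two plain maps stand in for the token znodes and the router's local cache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of a watch-free token cache: no ZooKeeper watchers, a
 * periodic pull-based sync, local-cache-first updates, and a
 * re-check of the renewal date before expiring a token remotely.
 */
public class WatchFreeTokenCache {
    /** Stand-in for the token znodes in ZooKeeper: token id -> renew date. */
    private final Map<String, Long> zkStore = new ConcurrentHashMap<>();
    /** This router's local cache, updated first on every local change. */
    private final Map<String, Long> localCache = new ConcurrentHashMap<>();

    /** Point 3: change the local cache first, then write through to ZK. */
    public void storeToken(String id, long renewDate) {
        localCache.put(id, renewDate);
        zkStore.put(id, renewDate);
    }

    /** Simulates a peer router renewing the token directly in ZK. */
    public void peerRenew(String id, long newRenewDate) {
        zkStore.put(id, newRenewDate);
    }

    /** Point 2: a scheduled full sync replaces watch-event callbacks. */
    public void syncFromZk() {
        localCache.clear();
        localCache.putAll(zkStore);
    }

    /**
     * Point 4: before the background remover deletes a token it re-reads
     * ZK, in case a peer renewed it and the last sync missed the update.
     */
    public boolean removeIfExpired(String id, long now) {
        Long authoritative = zkStore.get(id);  // fresh read, not the cache
        if (authoritative != null && authoritative > now) {
            localCache.put(id, authoritative); // repair the stale cache
            return false;                      // still valid: keep it
        }
        zkStore.remove(id);
        localCache.remove(id);
        return true;
    }

    public Long renewalDate(String id) {
        return localCache.get(id);
    }

    public static void main(String[] args) {
        WatchFreeTokenCache cache = new WatchFreeTokenCache();
        cache.storeToken("token-1", 100L);
        cache.peerRenew("token-1", 500L);  // a peer renews; no watch fires
        boolean removed = cache.removeIfExpired("token-1", 200L);
        System.out.println("removed: " + removed
            + ", renewDate: " + cache.renewalDate("token-1"));
    }
}
```

The guard in `removeIfExpired` is what keeps a router from deleting a token that a peer renewed between two sync intervals, which is exactly the corner case point 4 addresses.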
[GitHub] [hadoop] hadoop-yetus commented on pull request #2041: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme
hadoop-yetus commented on pull request #2041: URL: https://github.com/apache/hadoop/pull/2041#issuecomment-637803389

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 23m 54s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 7s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 22s | trunk passed |
| +1 :green_heart: | compile | 17m 21s | trunk passed |
| +1 :green_heart: | checkstyle | 2m 45s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 55s | trunk passed |
| +1 :green_heart: | shadedclient | 21m 1s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 52s | trunk passed |
| +0 :ok: | spotbugs | 3m 7s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 13s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 5s | the patch passed |
| +1 :green_heart: | compile | 16m 44s | the patch passed |
| +1 :green_heart: | javac | 16m 44s | the patch passed |
| +1 :green_heart: | checkstyle | 2m 42s | the patch passed |
| +1 :green_heart: | mvnsite | 2m 55s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 23s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 58s | the patch passed |
| +1 :green_heart: | findbugs | 5m 24s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 44s | hadoop-common in the patch passed. |
| -1 :x: | unit | 98m 2s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 5s | The patch does not generate ASF License warnings. |
| | | 249m 33s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2041/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2041 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 84fe748d1620 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / b5efdea4fd3 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2041/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2041/2/testReport/ |
| Max. process+thread count | 4026 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2041/2/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] goiri commented on a change in pull request #2035: HDFS-15374. Add documentation for command `fedbalance`.
goiri commented on a change in pull request #2035: URL: https://github.com/apache/hadoop/pull/2035#discussion_r434157276 ## File path: hadoop-tools/hadoop-federation-balance/src/site/markdown/FederationBalance.md ## @@ -0,0 +1,177 @@ + + +Federation Balance Guide += + +--- + + - [Overview](#Overview) + - [Usage](#Usage) + - [Basic Usage](#Basic_Usage) + - [RBF Mode And Normal Federation Mode](#RBF_Mode_And_Normal_Federation_Mode) + - [Command Options](#Command_Options) + - [Configuration Options](#Configuration_Options) + - [Architecture of Federation Balance](#Architecture_of_Federation_Balance) + - [Balance Procedure Scheduler](#Balance_Procedure_Scheduler) + - [DistCpFedBalance](#DistCpFedBalance) +--- + +Overview + + + Federation Balance is a tool balancing data across different federation + namespaces. It uses [DistCp](../hadoop-distcp/DistCp.html) to copy data from + the source path to the target path. First it creates a snapshot at the source + path and submits the initial distcp. Then it uses distcp diff to do the + incremental copy. Finally when the source and the target are the same, it + updates the mount table in Router and moves the source to trash. + + This document aims to describe the usage and design of the Federation Balance. + +Usage +- + +### Basic Usage + + The federation balance tool supports both normal federation cluster and + router-based federation cluster. Taking rbf for example. Supposing we have a + mount entry in Router: + +/foo/src --> hdfs://nn0:8020/foo/src + + The command below runs a federation balance job. The first parameter is the + mount entry. The second one is the target path which must include the target + cluster. 
+ +bash$ /bin/hadoop fedbalance submit /foo/src hdfs://nn1:8020/foo/dst + + It copies data from hdfs://nn0:8020/foo/src to hdfs://nn1:8020/foo/dst + incrementally and finally updates the mount entry to: + +/foo/src --> hdfs://nn1:8020/foo/dst + + If the hadoop shell process exits unexpectedly, we can use the command below + to continue the unfinished job: + +bash$ /bin/hadoop fedbalance continue + + This will scan the journal to find all the unfinished jobs, recover and + continue to execute them. + + If we want to balance in a normal federation cluster, use the command below. + +bash$ /bin/hadoop fedbalance -router false submit hdfs://nn0:8020/foo/src hdfs://nn1:8020/foo/dst + + The option `-router false` indicates this is not in router-based federation. + The source path must includes the source cluster. + +### RBF Mode And Normal Federation Mode + + The federation balance tool has 2 modes: + + * the router-based federation mode(rbf mode). + * the normal federation mode. + + By default the command runs in the rbf mode. You can specify the rbf mode + explicitly by using the option `-router true`. The option `-router false` + specifies the normal federation mode. + + In the rbf mode the first parameter is taken as the mount point. It disables + write by setting the mount point readonly. + + In the normal federation mode the first parameter is taken as the full path of + the source. The first parameter must include the source cluster. It disables + write by cancelling the execute permission of the source path. + + Details about disabling write see [DistCpFedBalance](#DistCpFedBalance). + +### Command Options + +Command `submit` has 5 options: + +| Option key | Description | +| -- | | +| -router | This option specifies the mode of the command. `True` indicates the router-based federation mode. `False` indicates the normal federation mode. | +| -forceCloseOpen | If `true`, the DIFF_DISTCP stage forces close all open files when there is no diff. 
Otherwise it waits until there is no open files. The default value is `false`. | +| -map | Max number of concurrent maps to use for copy. | +| -bandwidth | Specify bandwidth per map in MB. | +| -moveToTrash | If `true` move the source path to trash after the job is done. Otherwise delete the source path directly. | + +### Configuration Options + + +| Configuration key | Description | +| -- | | +| hadoop.hdfs.procedure.work.thread.num | The worker threads number of the BalanceProcedureScheduler. Default is `10`. | Review comment: Maybe add a column for default. ## File path: hadoop-tools/hadoop-federation-balance/src/site/markdown/FederationBalance.md ## @@ -0,0 +1,177 @@ + + +Federation Balance Guide += + +--- + + - [Overview](#Overview) + - [Usage](#Usage) + - [Basic Usage](#Basic_Usage) + - [RBF Mode And Normal Federation
[GitHub] [hadoop] goiri commented on a change in pull request #2035: HDFS-15374. Add documentation for command `fedbalance`.
goiri commented on a change in pull request #2035: URL: https://github.com/apache/hadoop/pull/2035#discussion_r434154368 ## File path: hadoop-tools/hadoop-federation-balance/src/site/markdown/FederationBalance.md ## @@ -0,0 +1,156 @@ + + +Federation Balance Guide += + +--- + + - [Overview](#Overview) Review comment: Take a look at: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html It just has: MACRO{toc|fromDepth=0|toDepth=3}
[jira] [Commented] (HADOOP-15167) [viewfs] ls will fail when user doesn't exist
[ https://issues.apache.org/jira/browse/HADOOP-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124258#comment-17124258 ] Hadoop QA commented on HADOOP-15167:

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 31s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 30s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 33s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 26s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16965/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-15167 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906353/HADOOP-15167-002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 863e2cbced51 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 7f486f02589 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16965/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124241#comment-17124241 ] Hudson commented on HADOOP-16828: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18319 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18319/]) HADOOP-16828. Zookeeper Delegation Token Manager fetch sequence number (xyao: rev 6288e15118fab65a9a1452898e639313c6996769) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java
[jira] [Updated] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-16828: Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~fengnanli] for the contribution and all for the reviews. I've committed the patch to trunk.
[jira] [Commented] (HADOOP-17029) ViewFS does not return correct user/group and ACL
[ https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124220#comment-17124220 ] Uma Maheswara Rao G commented on HADOOP-17029: -- Hey [~abhishekd], HADOOP-17060 needs changes along the same lines as this patch. I have posted my review in this PR; please take a look if you can. Once this PR is merged, I plan to work on HADOOP-17060. Please let me know if HADOOP-17029 will take you more time; in that case I will go ahead with HADOOP-17060. > ViewFS does not return correct user/group and ACL > - > > Key: HADOOP-17029 > URL: https://issues.apache.org/jira/browse/HADOOP-17029 > Project: Hadoop Common > Issue Type: Bug > Components: fs, viewfs >Reporter: Abhishek Das >Assignee: Abhishek Das >Priority: Major > > When doing ls on a mount point parent, the returned user/group ACL is > incorrect. It always showing the user and group being current user, with some > arbitrary ACL. Which could misleading any application depending on this API. > cc [~cliang] [~virajith]
[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124208#comment-17124208 ] Xiaoyu Yao commented on HADOOP-16828: - Patch v2 LGTM, +1. I will merge it shortly. > Zookeeper Delegation Token Manager fetch sequence number by batch > - > > Key: HADOOP-16828 > URL: https://issues.apache.org/jira/browse/HADOOP-16828 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, Screen Shot 2020-01-25 at 2.25.24 PM.png > > > Currently in ZKDelegationTokenSecretManager.java the sequence number is incremented by 1 each time there is a request to create a new token, which sends traffic to the ZooKeeper server. With multiple managers running, there is data contention; and since the incrementing logic uses tryAndSet, which is optimistic concurrency control without locking, this contention degrades performance when the secret managers are under heavy traffic. > The change here is to fetch the sequence number in batches instead of one at a time, which reduces the traffic sent to ZK and keeps many operations inside the secret manager's memory. > After putting this into production we saw a huge improvement in the RPC processing latency of getDelegationToken calls. Also, since ZK takes less traffic this way, other write calls, like renew and cancel delegation token, benefit from this change as well.
[jira] [Commented] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.
[ https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124202#comment-17124202 ] Mingliang Liu commented on HADOOP-17047: {quote} In summary, should I change the status of this issue to the patch available? {quote} Yes, it is now already "Patch Available". {quote} Do I need to create a sub-task for the first case? {quote} Yes, that would be helpful. Thanks. The new patch is not using {{@VisibleForTesting}} to replace {{/* This method is needed for tests. */}}. Also, a new patch file should get a newer version name, like {{HADOOP-17047.002.patch}}. > TODO comments exist in trunk while the related issues are already fixed. > > > Key: HADOOP-17047 > URL: https://issues.apache.org/jira/browse/HADOOP-17047 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Rungroj Maipradit >Assignee: Rungroj Maipradit >Priority: Trivial > Attachments: HADOOP-17047.001.patch, HADOOP-17047.001.patch > > > In a research project, we analyzed the source code of Hadoop looking for comments with on-hold SATDs (self-admitted technical debt) that could be fixed already. An on-hold SATD is a TODO/FIXME comment blocked by an issue. If the blocking issue is already resolved, the related TODO can be implemented (or sometimes it is already implemented, but the comment is left in the code, causing confusion). As we found a few instances of these in Hadoop, we decided to collect them in a ticket, so they are documented and can be addressed sooner or later. > A list of code comments that mention already closed issues: > * A code comment suggests making the setJobConf method deprecated along with the mapred package (HADOOP-1230). HADOOP-1230 was closed a long time ago, but the method is still not annotated as deprecated. > {code:java} > /** >* This code is to support backward compatibility and break the compile >* time dependency of core on mapred. >* This should be made deprecated along with the mapred package > HADOOP-1230. >* Should be removed when mapred package is removed. >*/ {code} > Comment location: > [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88] > * A comment mentions that the return type of the getDefaultFileSystem method should be changed to AFS when HADOOP-6223 is completed. > Indeed, this change was done in the related commit of HADOOP-6223: > ([https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395]) > Thus, the comment could be removed. > {code:java} > @InterfaceStability.Unstable /* return type will change to AFS once > HADOOP-6223 is completed */ > {code} > Comment location: > [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512]
[GitHub] [hadoop] hadoop-yetus commented on pull request #2046: HADOOP-16202 Enhance S3A openFile()
hadoop-yetus commented on pull request #2046: URL: https://github.com/apache/hadoop/pull/2046#issuecomment-637726275 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 47s | trunk passed | | +1 :green_heart: | compile | 17m 33s | trunk passed | | +1 :green_heart: | checkstyle | 2m 46s | trunk passed | | +1 :green_heart: | mvnsite | 2m 4s | trunk passed | | +1 :green_heart: | shadedclient | 20m 26s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 28s | trunk passed | | +0 :ok: | spotbugs | 1m 6s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 7s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 22s | the patch passed | | +1 :green_heart: | compile | 16m 53s | the patch passed | | +1 :green_heart: | javac | 16m 53s | the patch passed | | +1 :green_heart: | checkstyle | 2m 49s | the patch passed | | +1 :green_heart: | mvnsite | 2m 3s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 7s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 28s | the patch passed | | +1 :green_heart: | findbugs | 3m 30s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 11s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 31s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. | | | | 122m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2046 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 2b0a1f09fe16 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / aa6d13455b9 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/testReport/ | | Max. process+thread count | 1619 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] umamaheswararao merged pull request #2041: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme
umamaheswararao merged pull request #2041: URL: https://github.com/apache/hadoop/pull/2041
[GitHub] [hadoop] umamaheswararao commented on pull request #2041: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme
umamaheswararao commented on pull request #2041: URL: https://github.com/apache/hadoop/pull/2041#issuecomment-637715880 Thank you @rakeshadr for the reviews!
[GitHub] [hadoop] rakeshadr commented on pull request #2041: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme
rakeshadr commented on pull request #2041: URL: https://github.com/apache/hadoop/pull/2041#issuecomment-637713458 Thanks @umamaheswararao for the contribution. Unit tests are really good. +1 LGTM
[GitHub] [hadoop] umamaheswararao commented on pull request #2041: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme
umamaheswararao commented on pull request #2041: URL: https://github.com/apache/hadoop/pull/2041#issuecomment-637712010 Test failures are unrelated to this change.
[jira] [Moved] (HADOOP-17060) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory
[ https://issues.apache.org/jira/browse/HADOOP-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G moved HDFS-15370 to HADOOP-17060: - Component/s: (was: hdfs) viewfs Key: HADOOP-17060 (was: HDFS-15370) Affects Version/s: (was: 3.1.0) (was: 3.0.0) 3.0.0 3.1.0 Project: Hadoop Common (was: Hadoop HDFS) > listStatus and getFileStatus behave inconsistent in the case of ViewFs > implementation for isDirectory > - > > Key: HADOOP-17060 > URL: https://issues.apache.org/jira/browse/HADOOP-17060 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.1.0, 3.0.0 >Reporter: Srinivasu Majeti >Assignee: Uma Maheswara Rao G >Priority: Major > Labels: viewfs > > listStatus implementation in ViewFs and getFileStatus does not return > consistent values for an element on isDirectory value. listStatus returns > isDirectory of all softlinks as false and getFileStatus returns isDirectory > as true. > {code:java} > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" > FileStatus of viewfs://c3121/testme21may isDirectory:false > FileStatus of viewfs://c3121/tmp isDirectory:false > FileStatus of viewfs://c3121/foo isDirectory:false > FileStatus of viewfs://c3121/tmp21may isDirectory:false > FileStatus of viewfs://c3121/testme isDirectory:false > FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false > FileStatus of / isDirectory:true > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 > FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false > FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true > FileStatus of /testme2 isDirectory:true <--- returns true > [hdfs@c3121-node2 ~]$ {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: 
common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
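The inconsistency reported above can be modelled with a small self-contained toy (hypothetical names — the actual ViewFs code paths differ): the bug pattern is listStatus synthesizing mount-link entries with isDirectory hard-coded to false, while getFileStatus resolves the link target; the consistent behaviour resolves the target in both paths.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy mount table: each mount link points at a backing-FS target
// that is (or is not) a directory.
class ToyViewFs {
    private final Map<String, Boolean> targetIsDir = new HashMap<>();

    void addLink(String name, boolean isDir) { targetIsDir.put(name, isDir); }

    // Direct status lookup: resolves the link target.
    boolean getFileStatusIsDir(String name) {
        return targetIsDir.getOrDefault(name, false);
    }

    // Consistent listStatus: resolve each link the same way
    // getFileStatus does. The buggy variant instead hard-coded
    // isDirectory:false for every link entry.
    List<String> listStatus() {
        List<String> out = new ArrayList<>();
        for (String name : targetIsDir.keySet()) {
            out.add(name + " isDirectory:" + getFileStatusIsDir(name));
        }
        return out;
    }
}
```

In this model, the `/testme2` example from the report would print `isDirectory:true` from both paths once listing shares the same resolution logic as the direct lookup.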
[jira] [Commented] (HADOOP-17016) Adding Common Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124137#comment-17124137 ] Hudson commented on HADOOP-17016: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18316 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18316/]) HADOOP-17016. Adding Common Counters in ABFS (#1991). (stevel: rev 7f486f0258943f1dbda7fe5c08be4391e284df28) * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsStatistics.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsCounters.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java > Adding Common Counters in ABFS > -- > > Key: HADOOP-17016 > URL: https://issues.apache.org/jira/browse/HADOOP-17016 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Fix For: 3.4.0 > > > Common Counters to be added to ABFS: > |OP_CREATE| > |OP_OPEN| > |OP_GET_FILE_STATUS| > |OP_APPEND| > |OP_CREATE_NON_RECURSIVE| > |OP_DELETE| > |OP_EXISTS| > |OP_GET_DELEGATION_TOKEN| > |OP_LIST_STATUS| > |OP_MKDIRS| > |OP_RENAME| > |DIRECTORIES_CREATED| > |DIRECTORIES_DELETED| > |FILES_CREATED| > |FILES_DELETED| > |ERROR_IGNORED| > propose: > * Have an enum class to define all the counters. > * Have an Instrumentation class for making a MetricRegistry and adding all > the counters. 
> * Incrementing the counters in AzureBlobFileSystem. > * Integration and Unit tests to validate the counters.
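The enum-plus-instrumentation proposal above can be sketched in plain Java (hypothetical FsStatistic/FsInstrumentation names — the committed classes are AbfsStatistic and AbfsInstrumentation, whose internals may differ):

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical counter enum, mirroring a few of the ABFS statistics.
enum FsStatistic {
    OP_CREATE("op_create"),
    OP_DELETE("op_delete"),
    FILES_CREATED("files_created");

    private final String statName;
    FsStatistic(String statName) { this.statName = statName; }
    String statName() { return statName; }
}

// Hypothetical instrumentation: one atomic counter per enum constant,
// so filesystem entry points can increment without extra locking.
class FsInstrumentation {
    private final EnumMap<FsStatistic, AtomicLong> counters =
        new EnumMap<>(FsStatistic.class);

    FsInstrumentation() {
        for (FsStatistic s : FsStatistic.values()) {
            counters.put(s, new AtomicLong());
        }
    }

    void increment(FsStatistic s) { counters.get(s).incrementAndGet(); }
    long value(FsStatistic s) { return counters.get(s).get(); }
}
```

Each filesystem operation (create, delete, ...) calls increment(...) on its counter, which is what lets the integration and unit tests assert expected counts.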
[jira] [Commented] (HADOOP-17059) ArrayIndexOfboundsException in ViewFileSystem#listStatus
[ https://issues.apache.org/jira/browse/HADOOP-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124133#comment-17124133 ] Ayush Saxena commented on HADOOP-17059: --- There seems to be a problem with the user name as well, as in HADOOP-15167 > ArrayIndexOfboundsException in ViewFileSystem#listStatus > > > Key: HADOOP-17059 > URL: https://issues.apache.org/jira/browse/HADOOP-17059 > Project: Hadoop Common > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > > In ViewFileSystem#listStatus, we get the group names of the ugi; if no group names exist, it will throw an ArrayIndexOutOfBoundsException (AIOBE): > {code:java} > else { > result[i++] = new FileStatus(0, true, 0, 0, > creationTime, creationTime, PERMISSION_555, > ugi.getShortUserName(), ugi.getGroupNames()[0], > new Path(inode.fullPath).makeQualified( > myUri, null)); > } {code} >
[jira] [Resolved] (HADOOP-17016) Adding Common Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17016. - Fix Version/s: 3.4.0 Resolution: Fixed +1, merged to trunk. If you want it in branch-3.3 (you should) and branch-3.2 (you might): cherry-pick the patch, run the integration tests, stick them up as PRs, and point me at them and I'll do the merge. I won't do any code reviews for the backport; I just want the tests to be rerun. > Adding Common Counters in ABFS > -- > > Key: HADOOP-17016 > URL: https://issues.apache.org/jira/browse/HADOOP-17016 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Fix For: 3.4.0 > > > Common Counters to be added to ABFS: > |OP_CREATE| > |OP_OPEN| > |OP_GET_FILE_STATUS| > |OP_APPEND| > |OP_CREATE_NON_RECURSIVE| > |OP_DELETE| > |OP_EXISTS| > |OP_GET_DELEGATION_TOKEN| > |OP_LIST_STATUS| > |OP_MKDIRS| > |OP_RENAME| > |DIRECTORIES_CREATED| > |DIRECTORIES_DELETED| > |FILES_CREATED| > |FILES_DELETED| > |ERROR_IGNORED| > propose: > * Have an enum class to define all the counters. > * Have an Instrumentation class for making a MetricRegistry and adding all the counters. > * Incrementing the counters in AzureBlobFileSystem. > * Integration and Unit tests to validate the counters.
[GitHub] [hadoop] steveloughran commented on pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
steveloughran commented on pull request #1991: URL: https://github.com/apache/hadoop/pull/1991#issuecomment-637698439 merged to trunk
[GitHub] [hadoop] steveloughran closed pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
steveloughran closed pull request #1991: URL: https://github.com/apache/hadoop/pull/1991
[GitHub] [hadoop] steveloughran commented on pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
steveloughran commented on pull request #1991: URL: https://github.com/apache/hadoop/pull/1991#issuecomment-637696461 Ok, all is good. I'd point you at ContractTestUtils for future rename/exists checks in tests, as it has better error reporting. +1
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
hadoop-yetus removed a comment on pull request #1991: URL: https://github.com/apache/hadoop/pull/1991#issuecomment-630766152 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 22m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 2s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 22s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 16m 13s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 51s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | +1 :green_heart: | checkstyle | 0m 15s | the patch passed | | +1 :green_heart: | mvnsite | 0m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 16m 25s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | +1 :green_heart: | findbugs | 0m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 9s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. 
| | | | 85m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1991 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4f84a46f4ce0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / d4e36409d40 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/4/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #1991: HADOOP-17016. Adding Common Counters in ABFS
steveloughran commented on a change in pull request #1991: URL: https://github.com/apache/hadoop/pull/1991#discussion_r434046740 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java ## @@ -202,16 +182,13 @@ public void testOpenAppendRenameExists() throws IOException { fs.create(createFilePath); fs.open(createFilePath); fs.append(createFilePath); -fs.rename(createFilePath, destCreateFilePath); +assertTrue(fs.rename(createFilePath, destCreateFilePath)); Review comment: nit: There's a ContractTestUtils rename which is a bit more informative on rename failure, but I Don't see this failing enough to worry about it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17059) ArrayIndexOfboundsException in ViewFileSystem#listStatus
[ https://issues.apache.org/jira/browse/HADOOP-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HADOOP-17059: --- Description: In ViewFileSystem#listStatus, we get the group names of the ugi; if no group names exist, it will throw an ArrayIndexOutOfBoundsException (AIOBE): {code:java} else { result[i++] = new FileStatus(0, true, 0, 0, creationTime, creationTime, PERMISSION_555, ugi.getShortUserName(), ugi.getGroupNames()[0], new Path(inode.fullPath).makeQualified( myUri, null)); } {code} > ArrayIndexOfboundsException in ViewFileSystem#listStatus > > > Key: HADOOP-17059 > URL: https://issues.apache.org/jira/browse/HADOOP-17059 > Project: Hadoop Common > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > > In ViewFileSystem#listStatus, we get the group names of the ugi; if no group names exist, it will throw an ArrayIndexOutOfBoundsException (AIOBE): > {code:java} > else { > result[i++] = new FileStatus(0, true, 0, 0, > creationTime, creationTime, PERMISSION_555, > ugi.getShortUserName(), ugi.getGroupNames()[0], > new Path(inode.fullPath).makeQualified( > myUri, null)); > } {code} >
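A minimal sketch of the guard implied by the report above (hypothetical helper, not the actual patch): fall back to the short user name when the UGI reports no groups, instead of indexing `groups[0]` unconditionally.

```java
// Hypothetical stand-in for picking the "primary group" used when
// ViewFileSystem synthesizes a FileStatus for an internal dir.
class PrimaryGroup {
    // Indexing groups[0] throws ArrayIndexOutOfBoundsException when the
    // user belongs to no groups; guard and fall back to the user name.
    static String of(String shortUserName, String[] groups) {
        return (groups != null && groups.length > 0) ? groups[0] : shortUserName;
    }
}
```

The fallback choice (user name as group) is one plausible policy; the real fix may pick a different default, but any fix must stop dereferencing index 0 of a possibly empty array.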
[jira] [Created] (HADOOP-17059) ArrayIndexOfboundsException in ViewFileSystem#listStatus
hemanthboyina created HADOOP-17059: -- Summary: ArrayIndexOfboundsException in ViewFileSystem#listStatus Key: HADOOP-17059 URL: https://issues.apache.org/jira/browse/HADOOP-17059 Project: Hadoop Common Issue Type: Bug Reporter: hemanthboyina Assignee: hemanthboyina
[GitHub] [hadoop] steveloughran opened a new pull request #2046: HADOOP-16202 Enhance S3A openFile()
steveloughran opened a new pull request #2046: URL: https://github.com/apache/hadoop/pull/2046 * s3a fs will use any FileStatus passed in to openFile()...needed for mount/wrapped FS * and supports an option "fs.s3a.open.option.length" to set the length -which will also trigger the HEAD being skipped * builder spec updated to say the status path is ignored * adds tracking of all option keynames passed to the builder. I had intended to support etag and version but it's too tricky...you'd also need to supply the length, or issue a HEAD with the given etag/version to extract that param too. The tracking of all option keys was for that. Not yet used -but left for others. Needs new tests -pass in a status with a different path (in contract tests) -new option skips the HEAD request (trivial test -open a missing file, seek to below the declared length, only expect failure on read())
[jira] [Updated] (HADOOP-16202) S3A openFile() operation to support explicit length parameter
[ https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16202: Description: The {{openFile()}} builder API lets us add new options when reading a file. Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows the length of the file to be declared. If set, *no check for the existence of the file is issued when opening the file*. Also: withFileStatus() to take any FileStatus implementation, rather than only S3AFileStatus, and not check that the path matches the path being opened. Needed to support viewFS-style wrapping and mounting. was: The {{openFile()}} builder API lets us add new options when reading a file. Proposed: allowing applications to explicitly demand one. If we explicitly add the values in the S3A filestatus & located filestatus (maybe even: checksum?), then you can list a directory and, knowing those values, explicitly ask for that version of the file. {code} org.apache.fs.s3a.open.versionid + versionID org.apache.fs.s3a.open.etag + etag {code} setting both will be an error. > S3A openFile() operation to support explicit length parameter > - > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > The {{openFile()}} builder API lets us add new options when reading a file > Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows > the length of the file to be declared. If set, *no check for the existence of > the file is issued when opening the file* > Also: withFileStatus() to take any FileStatus implementation, rather than > only S3AFileStatus -and not check that the path matches the path being > opened. Needed to support viewFS-style wrapping and mounting.
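The behaviour described here — declaring the length up front so open() skips the existence check — can be modelled with a toy object store (plain Java, hypothetical names; the real S3A code paths differ): with a declared length, open succeeds and seeks below that length are accepted, and only read() surfaces a missing object.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Toy object store: open() normally issues a HEAD-like existence check;
// a caller-declared length skips it, deferring errors to read().
class ToyStore {
    private final Map<String, byte[]> objects = new HashMap<>();

    void put(String key, byte[] data) { objects.put(key, data); }

    Handle open(String key) throws FileNotFoundException {
        byte[] data = objects.get(key);          // the "HEAD" request
        if (data == null) throw new FileNotFoundException(key);
        return new Handle(key, data.length);
    }

    Handle openWithLength(String key, long declaredLength) {
        return new Handle(key, declaredLength);  // no HEAD issued
    }

    class Handle {
        private final String key;
        private final long length;
        private long pos;

        Handle(String key, long length) { this.key = key; this.length = length; }

        void seek(long newPos) throws IOException {
            if (newPos < 0 || newPos > length) throw new IOException("bad seek");
            pos = newPos;            // validated against the declared length only
        }

        int read() throws IOException {
            byte[] data = objects.get(key);      // first real GET
            if (data == null) throw new FileNotFoundException(key);
            return pos < data.length ? data[(int) pos++] & 0xff : -1;
        }
    }
}
```

This mirrors the "trivial test" sketched in the PR description: open a missing file with a declared length, seek below it, and expect the failure only on read().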
[GitHub] [hadoop] hadoop-yetus commented on pull request #1999: HADOOP-14566. Add seek support for SFTP FileSystem.
hadoop-yetus commented on pull request #1999: URL: https://github.com/apache/hadoop/pull/1999#issuecomment-637645991 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 22s | trunk passed | | +1 :green_heart: | compile | 17m 15s | trunk passed | | +1 :green_heart: | checkstyle | 0m 53s | trunk passed | | +1 :green_heart: | mvnsite | 1m 28s | trunk passed | | +1 :green_heart: | shadedclient | 16m 48s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 5s | trunk passed | | +0 :ok: | spotbugs | 2m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 6s | trunk passed | | -0 :warning: | patch | 2m 28s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | the patch passed | | +1 :green_heart: | compile | 16m 21s | the patch passed | | +1 :green_heart: | javac | 16m 21s | the patch passed | | +1 :green_heart: | checkstyle | 0m 51s | hadoop-common-project/hadoop-common: The patch generated 0 new + 18 unchanged - 1 fixed = 18 total (was 19) | | +1 :green_heart: | mvnsite | 1m 27s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 50s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 4s | the patch passed | | +1 :green_heart: | findbugs | 2m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 31s | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. | | | | 107m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1999 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 71756c43fcb1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / aa6d13455b9 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/9/testReport/ | | Max. process+thread count | 1393 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/9/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16202) S3A openFile() operation to support explicit length parameter
[ https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16202: Summary: S3A openFile() operation to support explicit length parameter (was: S3A openFile() operation to support explicit versionID, etag, length parameters) > S3A openFile() operation to support explicit length parameter > - > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > The {{openFile()}} builder API lets us add new options when reading a file. > Proposed: allowing applications to explicitly demand one. If we explicitly add > the values in the S3A filestatus & located filestatus (maybe even: > checksum?), then you can list a directory and, knowing those values, > explicitly ask for that version of the file. > {code} > org.apache.fs.s3a.open.versionid + versionID > org.apache.fs.s3a.open.etag + etag > {code} > Setting both will be an error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
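The mutual-exclusion rule from the issue description above ("setting both will be an error") can be sketched as a toy option builder. This is illustrative Python only, not Hadoop's Java builder API: the option names are taken from the issue text, while the class and method names are invented for the sketch.

```python
# Toy sketch of mutually exclusive openFile() options. Illustrative only --
# not Hadoop's API. Option names come from the JIRA text; everything else
# (OpenFileBuilder, opt, build) is hypothetical.

class OpenFileBuilder:
    """Collects open() options and validates them at build time."""

    VERSION_OPT = "org.apache.fs.s3a.open.versionid"
    ETAG_OPT = "org.apache.fs.s3a.open.etag"

    def __init__(self):
        self.options = {}

    def opt(self, key, value):
        self.options[key] = value
        return self  # builder style: calls chain

    def build(self):
        # "Setting both will be an error": versionId and etag are exclusive.
        if self.VERSION_OPT in self.options and self.ETAG_OPT in self.options:
            raise ValueError("versionid and etag are mutually exclusive")
        return dict(self.options)

# One option alone is fine; adding the other would raise at build() time.
opened = OpenFileBuilder().opt(OpenFileBuilder.ETAG_OPT, "abc123").build()
print(opened)
```

Validating at build() rather than at opt() matches the usual builder pattern: options can be set in any order, and the conflict is reported once, at the end.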
[GitHub] [hadoop] hadoop-yetus commented on pull request #1441: HADOOP-16568. S3A FullCredentialsTokenBinding fails if local credentials are unset
hadoop-yetus commented on pull request #1441: URL: https://github.com/apache/hadoop/pull/1441#issuecomment-637578388 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 19s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 56s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | trunk passed | | +1 :green_heart: | shadedclient | 16m 20s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 1m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 52s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed | | +1 :green_heart: | findbugs | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 26s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. | | | | 65m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1441/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1441 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9fe32c820c2d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1441/3/testReport/ | | Max. process+thread count | 342 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1441/3/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #1041: HADOOP-15844. Tag S3GuardTool entry points as LimitedPrivate/Evolving
hadoop-yetus commented on pull request #1041: URL: https://github.com/apache/hadoop/pull/1041#issuecomment-637574752 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 58s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 24s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 17m 13s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 1m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 1s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | checkstyle | 0m 18s | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 21s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed | | +1 :green_heart: | findbugs | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 20s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 65m 1s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1041/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1041 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 36fb71a0adc7 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1041/12/testReport/ | | Max. process+thread count | 422 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1041/12/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.
hadoop-yetus commented on pull request #1963: URL: https://github.com/apache/hadoop/pull/1963#issuecomment-637572452 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 20m 57s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 22s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 15m 58s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed | | +1 :green_heart: | javac | 0m 25s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) | | +1 :green_heart: | mvnsite | 0m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 8s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed | | +1 :green_heart: | findbugs | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 26s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 82m 30s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1963 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5c76d9e4f62f 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/4/testReport/ | | Max. process+thread count | 346 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mpryahin commented on a change in pull request #1999: HADOOP-14566. Add seek support for SFTP FileSystem.
mpryahin commented on a change in pull request #1999: URL: https://github.com/apache/hadoop/pull/1999#discussion_r433891389 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContract.java ## @@ -69,6 +69,14 @@ public void init() throws IOException { } + /** + * Any teardown logic can go here Review comment: fixed, thank you. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #1999: HADOOP-14566. Add seek support for SFTP FileSystem.
steveloughran commented on a change in pull request #1999: URL: https://github.com/apache/hadoop/pull/1999#discussion_r433416794 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContract.java ## @@ -69,6 +69,14 @@ public void init() throws IOException { } + /** + * Any teardown logic can go here Review comment: add . so javadoc is happy This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16202) S3A openFile() operation to support explicit versionID, etag, length parameters
[ https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16202: Summary: S3A openFile() operation to support explicit versionID, etag, length parameters (was: S3A openFile() operation to support explicit versionID, etag, lengthparameters) > S3A openFile() operation to support explicit versionID, etag, length > parameters > --- > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > The {{openFile()}} builder API lets us add new options when reading a file. > Proposed: allowing applications to explicitly demand one. If we explicitly add > the values in the S3A filestatus & located filestatus (maybe even: > checksum?), then you can list a directory and, knowing those values, > explicitly ask for that version of the file. > {code} > org.apache.fs.s3a.open.versionid + versionID > org.apache.fs.s3a.open.etag + etag > {code} > Setting both will be an error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16202) S3A openFile() operation to support explicit versionID, etag, lengthparameters
[ https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16202: Summary: S3A openFile() operation to support explicit versionID, etag, lengthparameters (was: S3A openFile() operation to support explicit versionID, etag parameters) > S3A openFile() operation to support explicit versionID, etag, lengthparameters > -- > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > The {{openFile()}} builder API lets us add new options when reading a file. > Proposed: allowing applications to explicitly demand one. If we explicitly add > the values in the S3A filestatus & located filestatus (maybe even: > checksum?), then you can list a directory and, knowing those values, > explicitly ask for that version of the file. > {code} > org.apache.fs.s3a.open.versionid + versionID > org.apache.fs.s3a.open.etag + etag > {code} > Setting both will be an error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16202) S3A openFile() operation to support explicit versionID, etag parameters
[ https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16202: --- Assignee: Steve Loughran > S3A openFile() operation to support explicit versionID, etag parameters > --- > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > The {{openFile()}} builder API lets us add new options when reading a file. > Proposed: allowing applications to explicitly demand one. If we explicitly add > the values in the S3A filestatus & located filestatus (maybe even: > checksum?), then you can list a directory and, knowing those values, > explicitly ask for that version of the file. > {code} > org.apache.fs.s3a.open.versionid + versionID > org.apache.fs.s3a.open.etag + etag > {code} > Setting both will be an error. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16090) S3A Client to add explicit support for versioned stores
[ https://issues.apache.org/jira/browse/HADOOP-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16090: Resolution: Abandoned Status: Resolved (was: Patch Available) HADOOP-13230 will replace this by not deleting the markers at all. > S3A Client to add explicit support for versioned stores > --- > > Key: HADOOP-16090 > URL: https://issues.apache.org/jira/browse/HADOOP-16090 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.1 >Reporter: Dmitri Chmelev >Assignee: Steve Loughran >Priority: Minor > > The fix to avoid calls to getFileStatus() for each path component in > deleteUnnecessaryFakeDirectories() (HADOOP-13164) results in accumulation of > delete markers in versioned S3 buckets. The above patch replaced > getFileStatus() checks with a single batch delete request formed by > generating all ancestor keys of a given path. Since the delete > request does not check for the existence of fake directories, it will create a > delete marker for every path component that did not exist (or was previously > deleted). Note that issuing a DELETE request without specifying a version ID > will always create a new delete marker, even if one already exists ([AWS S3 > Developer > Guide|https://docs.aws.amazon.com/AmazonS3/latest/dev/RemDelMarker.html]). > Since deleteUnnecessaryFakeDirectories() is called as a callback on > successful writes and on renames, delete markers accumulate rather quickly, > and their rate of accumulation is inversely proportional to the depth of the > path. In other words, directories closer to the root will have more delete > markers than the leaves. > This behavior negatively impacts performance of the getFileStatus() operation > when it has to issue a listObjects() request (especially v1), as the delete > markers have to be examined while the request searches for the first current > non-deleted version of an object following a given prefix. 
> I did a quick comparison against 3.x and the issue is still present: > [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
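The accumulation described above can be modeled with a toy in-memory versioned bucket. This is illustrative Python under a deliberately simplified assumption (real S3 versioning semantics are richer): every unversioned DELETE appends a fresh delete marker, so blind cleanup of fake parent directories grows markers with each write, and shared ancestors near the root collect them fastest.

```python
# Toy in-memory model of a versioned bucket (simplified; not real S3).
# It illustrates the reported behavior: a DELETE without a version ID
# always records a new delete marker, even for keys that never existed
# or that already carry a marker.

class VersionedBucket:
    def __init__(self):
        # key -> list of ("object" | "delete-marker", payload) versions
        self.versions = {}

    def put(self, key, data):
        self.versions.setdefault(key, []).append(("object", data))

    def delete(self, key):
        # No version ID supplied: a marker is appended unconditionally.
        self.versions.setdefault(key, []).append(("delete-marker", None))

    def delete_markers(self, key):
        return sum(1 for kind, _ in self.versions.get(key, [])
                   if kind == "delete-marker")

bucket = VersionedBucket()
# Simulate the fake-directory cleanup callback firing after three writes:
# every ancestor "directory" key is deleted without an existence check.
for _ in range(3):
    for ancestor in ("data/", "data/year=2020/", "data/year=2020/month=05/"):
        bucket.delete(ancestor)
print(bucket.delete_markers("data/"))  # 3 markers after only 3 writes
```

With many writers sharing a common root prefix, the root-level keys receive one new marker per write from every writer, which is why listings under shallow prefixes degrade first.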
[GitHub] [hadoop] steveloughran closed pull request #1627: HADOOP-16644. Do a HEAD after a PUT to get the modtime.
steveloughran closed pull request #1627: URL: https://github.com/apache/hadoop/pull/1627 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.
steveloughran commented on pull request #1963: URL: https://github.com/apache/hadoop/pull/1963#issuecomment-637521825 Yetus didn't run the last patch. I will rebase and resubmit to force it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.
hadoop-yetus removed a comment on pull request #1963: URL: https://github.com/apache/hadoop/pull/1963#issuecomment-631069895 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 25m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 24m 5s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | trunk passed | | +1 :green_heart: | shadedclient | 17m 8s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed | | +0 :ok: | spotbugs | 1m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 6s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 36s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed | | +1 :green_heart: | javac | 0m 31s | the patch passed | | -0 :warning: | checkstyle | 0m 20s | hadoop-tools/hadoop-aws: The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) | | +1 :green_heart: | mvnsite | 0m 34s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 58s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | | +1 :green_heart: | findbugs | 1m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 17s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 92m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1963 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7cdec44bd173 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8f78aeb2500 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/2/testReport/ | | Max. process+thread count | 457 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/2/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims commented on pull request #2045: HADOOP-17056. shelldoc fails in hadoop-common.
iwasakims commented on pull request #2045: URL: https://github.com/apache/hadoop/pull/2045#issuecomment-637521388 Is this message in the console output relevant? ``` 17:33:05 WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8. 17:33:05 executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does not exist. ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123739#comment-17123739 ] Hadoop QA commented on HADOOP-16254: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} prototool {color} | {color:blue} 0m 0s{color} | {color:blue} prototool was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} hadoop-common in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 10s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 33s{color} | {color:red} root generated 23 new + 139 unchanged - 23 fixed = 162 total (was 162) {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 9 new + 342 unchanged - 1 fixed = 351 total (was 343) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 18s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}138m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16964/artifact/out/Dockerfile | | JIRA Issue | HADOOP-16254 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12968021/HADOOP-16254.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc prototool | | uname | Linux 725a56360cc9 4.15.0-91-generic
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123736#comment-17123736 ] Yuxuan Wang commented on HADOOP-16254: -- [~hexiaoqiao] {quote}Do you mean one connection from Router to NameNode could proxy different client requests? {quote} Yes. {quote}The next RPC should set another one because it is thread-local {quote} But the NameNode server only processes the connection header once per connection. > Add proxy address in IPC connection > --- > > Key: HADOOP-16254 > URL: https://issues.apache.org/jira/browse/HADOOP-16254 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HADOOP-16254.001.patch, HADOOP-16254.002.patch, > HADOOP-16254.004.patch > > > In order to support data locality of RBF, we need to add a new field for the > client hostname in the RPC headers of Router protocol calls. > clientHostname represents the hostname of the client and is forwarded by the Router to > the Namenode to support data locality. See more in the [RBF Data Locality > Design|https://issues.apache.org/jira/secure/attachment/12965092/RBF%20Data%20Locality%20Design.pdf] > in HDFS-13248 and the [maillist > vote|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201904.mbox/%3CCAF3Ajax7hGxvowg4K_HVTZeDqC5H=3bfb7mv5sz5mgvadhv...@mail.gmail.com%3E]. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123692#comment-17123692 ] Xiaoqiao He commented on HADOOP-16254: -- Thanks [~zhengchenyu] and [~John Smith] for your comments, and sorry for the late response. About `safeguard`, I think we should reinforce it using superuser/ugi or configuration to avoid a security vulnerability. I am not sure if it could resolve the issue [~daryn] mentioned above. {quote}Proxy address should be in rpc header not connection header, since one connection can forward multiple clients' ip.{quote} Sorry, I don't get it. Do you mean one connection from Router to NameNode could proxy different client requests? If so, we could set it to null after the handler has processed the RPC request at the Router. The next RPC should set another one because it is thread-local, and one handler cannot process different RPC requests at the same time, IMO. Welcome any suggestions and discussion.
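A minimal sketch of the thread-local scheme discussed in the comment above: the handler sets the proxied client address before forwarding an RPC and clears it afterwards, so a value never leaks into the next request on the same thread. All names here are illustrative only, not the actual HADOOP-16254 patch code.

```java
// Illustrative sketch (not Hadoop source): a per-thread holder for the
// address of the client being proxied by the current handler thread.
public class ProxyAddress {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // Router handler sets the real client's address before forwarding the RPC.
    public static void set(String clientAddr) { CURRENT.set(clientAddr); }

    public static String get() { return CURRENT.get(); }

    // Cleared after the handler finishes, so the next RPC processed by this
    // thread must set its own value -- the point made in the comment above.
    public static void clear() { CURRENT.remove(); }

    public static void main(String[] args) {
        ProxyAddress.set("10.0.0.5");          // handler begins proxying a request
        System.out.println(ProxyAddress.get()); // prints 10.0.0.5
        ProxyAddress.clear();                   // handler done; thread-local reset
        System.out.println(ProxyAddress.get()); // prints null
    }
}
```

Because the value lives in a ThreadLocal, two handler threads proxying different clients over the same Router-to-NameNode connection never see each other's address, which is the property being debated above.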
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123635#comment-17123635 ] Yuxuan Wang commented on HADOOP-16254: -- Long time no see. I find that HADOOP-16254.004.patch gets something wrong: the proxy address should be in the RPC header, not the connection header, since one connection can forward multiple clients' IPs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123630#comment-17123630 ] Hadoop QA commented on HADOOP-17056: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 7s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 11m 10s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} hadolint {color} | {color:green} 0m 5s{color} | {color:green} There were no new hadolint issues. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 6m 21s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 6s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 39s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17056 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004581/HADOOP-17056-test-03.patch | | Optional Tests | dupname asflicense shellcheck shelldocs hadolint mvnsite unit | | uname | Linux 76c888767720 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/branch-mvnsite-root.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/patch-mvnsite-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/testReport/ | | Max. process+thread count | 414 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/console | | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. > shelldoc fails in hadoop-common > --- > > Key: HADOOP-17056 > URL:
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123625#comment-17123625 ] Hadoop QA commented on HADOOP-17056: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 1s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 6m 53s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} hadolint {color} | {color:green} 0m 5s{color} | {color:green} There were no new hadolint issues. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 5s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 16s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}125m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16963/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17056 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004581/HADOOP-17056-test-03.patch | | Optional Tests | dupname asflicense shellcheck shelldocs hadolint mvnsite unit | | uname | Linux 898d758c9292 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16963/artifact/out/branch-mvnsite-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16963/testReport/ | | Max. process+thread count | 343 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16963/console | | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. > shelldoc fails in hadoop-common > --- > > Key: HADOOP-17056 > URL: https://issues.apache.org/jira/browse/HADOOP-17056 > Project: Hadoop Common > Issue Type: Bug > Components: build >
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123603#comment-17123603 ] zhengchenyu commented on HADOOP-16254: -- Regarding the security problem, I think this method looks like proxy user. A proxy user can impersonate some other specific user; that is not secure either, but we allow it under some conditions. Maybe we need a ProxyHost class that decides which hosts may impersonate which specific hosts. For HDFS-13248, we could offer configuration telling the NameNode that the Router's IP may impersonate all DataNode and client IPs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
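The "ProxyHost" idea in the comment above can be sketched as a whitelist check: a forwarded client address is believed only when the direct socket peer is a configured trusted proxy (e.g. a Router). The class and method names below are hypothetical, not an existing Hadoop API.

```java
import java.util.Set;

// Illustrative sketch (not Hadoop source): host-based impersonation check,
// analogous to the existing proxy-user mechanism but keyed on addresses.
public class ProxyHostPolicy {
    private final Set<String> allowedProxyHosts;

    public ProxyHostPolicy(Set<String> allowedProxyHosts) {
        this.allowedProxyHosts = allowedProxyHosts;
    }

    /** Accept the forwarded client address only if the direct peer is trusted. */
    public String effectiveClientAddr(String peerAddr, String forwardedAddr) {
        if (forwardedAddr != null && allowedProxyHosts.contains(peerAddr)) {
            return forwardedAddr;   // trusted proxy: believe the forwarded address
        }
        return peerAddr;            // otherwise fall back to the socket peer
    }

    public static void main(String[] args) {
        ProxyHostPolicy policy = new ProxyHostPolicy(Set.of("192.168.1.10"));
        // Trusted router forwards a client address: it is accepted.
        System.out.println(policy.effectiveClientAddr("192.168.1.10", "10.0.0.5"));
        // Untrusted host forwards an address: the socket peer is used instead.
        System.out.println(policy.effectiveClientAddr("192.168.1.99", "10.0.0.5"));
    }
}
```

This mirrors how proxy-user impersonation is constrained by configuration: the power to "be" someone else is granted only to explicitly listed principals, here hosts instead of users.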
[jira] [Assigned] (HADOOP-17058) Support for Appendblob in abfs driver
[ https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishani reassigned HADOOP-17058: --- Assignee: Ishani > Support for Appendblob in abfs driver > - > > Key: HADOOP-17058 > URL: https://issues.apache.org/jira/browse/HADOOP-17058 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.0 >Reporter: Ishani >Assignee: Ishani >Priority: Major > > add changes to support appendblob in the hadoop-azure abfs driver. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17058) Support for Appendblob in abfs driver
Ishani created HADOOP-17058: --- Summary: Support for Appendblob in abfs driver Key: HADOOP-17058 URL: https://issues.apache.org/jira/browse/HADOOP-17058 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 3.3.0 Reporter: Ishani add changes to support appendblob in the hadoop-azure abfs driver. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2045: HADOOP-17056. shelldoc fails in hadoop-common.
hadoop-yetus commented on pull request #2045: URL: https://github.com/apache/hadoop/pull/2045#issuecomment-637403880 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 22s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 16s | Maven dependency ordering for branch | | +1 :green_heart: | shadedclient | 16m 21s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch | | +1 :green_heart: | hadolint | 0m 5s | There were no new hadolint issues. | | +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 50s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 37m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2045 | | Optional Tests | dupname asflicense shellcheck shelldocs hadolint | | uname | Linux 23cbe5ee7213 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9fe4c37c25b | | Max. process+thread count | 314 (vs. 
ulimit of 5500) | | modules | C: U: | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console | | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123511#comment-17123511 ] Hadoop QA commented on HADOOP-17056: (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/16963/console in case of problems. > shelldoc fails in hadoop-common > --- > > Key: HADOOP-17056 > URL: https://issues.apache.org/jira/browse/HADOOP-17056 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Priority: Major > Attachments: 2040.02.patch, 2040.03.patch, 2040.patch, > HADOOP-17056-test-01.patch, HADOOP-17056-test-02.patch, > HADOOP-17056-test-03.patch > > > {noformat} > [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common --- > > ERROR: yetus-dl: gpg unable to import > > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS > > [INFO] > > > > [INFO] BUILD FAILURE > > [INFO] > > > > [INFO] Total time: 9.377 s > > [INFO] Finished at: 2020-05-28T17:37:41Z > > [INFO] > > > > [ERROR] Failed to execute goal > > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project > > hadoop-common: Command execution failed. Process exited with an error: 1 > > (Exit value: 1) -> [Help 1] > > [ERROR] > > [ERROR] To see the full stack trace of the errors, re-run Maven with the > > -e switch. > > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> > [ERROR] > > [ERROR] For more information about the errors and possible solutions, > > please read the following articles: > > [ERROR] [Help 1] > > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {noformat} > * > https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt > * > https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt > * > https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123504#comment-17123504 ] Akira Ajisaka commented on HADOOP-17056: test-02 patch: Create the gpg homedir under the project root. However, the path length is still too long for the qbt jobs, so I'd like to disable gpg verification in the docker build image. PR: https://github.com/apache/hadoop/pull/2045 test-03 patch: PR #2045 plus a modified hadoop-functions.sh to kick off shelldocs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123504#comment-17123504 ] Akira Ajisaka edited comment on HADOOP-17056 at 6/2/20, 8:34 AM: - test-02 patch: Create the gpg homedir under the project root. However, the path length is still too long for the qbt jobs, so I'd like to disable gpg verification if the environment is the docker build image. PR: https://github.com/apache/hadoop/pull/2045 test-03 patch: PR #2045 plus a modified hadoop-functions.sh to kick off shelldocs. was (Author: ajisakaa): test-02 patch: Create the gpg homedir under the project root. However, the path length is still too long for the qbt jobs, so I'd like to disable gpg verification in the docker build image. PR: https://github.com/apache/hadoop/pull/2045 test-03 patch: PR #2045 plus a modified hadoop-functions.sh to kick off shelldocs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2045: HADOOP-17056. shelldoc fails in hadoop-common.
hadoop-yetus commented on pull request #2045: URL: https://github.com/apache/hadoop/pull/2045#issuecomment-637381762 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console in case of problems. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17056: --- Attachment: HADOOP-17056-test-03.patch -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka opened a new pull request #2045: HADOOP-17056. shelldoc fails in hadoop-common.
aajisaka opened a new pull request #2045: URL: https://github.com/apache/hadoop/pull/2045 JIRA: https://issues.apache.org/jira/browse/HADOOP-17056 Skip GPG verification in the docker build image.
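The fix summarized above ("skip GPG verification in the docker build image") can be sketched as a best-effort verifier. This is a hedged illustration only: the function name `verify_tarball` and the `skip_gpg` flag are hypothetical and not part of the actual Dockerfile change or the Yetus scripts.

```python
import subprocess

def verify_tarball(tarball: str, signature: str, skip_gpg: bool = False) -> bool:
    """Best-effort GPG verification of a downloaded release tarball.

    When skip_gpg is set (e.g. in an environment where importing
    KEYS_YETUS fails, as in this build), trust the download instead of
    failing the whole build on a key-import problem.
    """
    if skip_gpg:
        # Verification deliberately skipped; accept the artifact.
        return True
    result = subprocess.run(
        ["gpg", "--verify", signature, tarball],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

The trade-off is the usual one: skipping verification unblocks CI but removes the integrity check, which is acceptable here only because the download happens inside a controlled build image.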
[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17056: --- Attachment: HADOOP-17056-test-02.patch
[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17056: --- Attachment: HADOOP-17056-test-01.patch
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123445#comment-17123445 ] Akira Ajisaka commented on HADOOP-17056: Thanks [~iwasakims] for your comment. Applied the 03 patch to run gpg-agent explicitly and got a more detailed error log:
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
gpg-agent[3808]: directory '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/private-keys-v1.d' created
gpg-agent[3808]: listening on socket '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent'
gpg-agent[3808]: listening on socket '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent.extra'
gpg-agent[3808]: socket name '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent.browser' is too long
gpg: keybox '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/pubring.kbx' created
{noformat}
https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/artifact/out/patch-mvnsite-root.txt
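For readers hitting the same "socket name ... is too long" error: gpg-agent rejects the browser socket because a UNIX domain socket path must fit in `sockaddr_un.sun_path`, which is 108 bytes on Linux. A minimal sketch of the arithmetic follows; the constant and helper function are illustrative (not part of Hadoop, Yetus, or GnuPG), and it assumes gpg-agent requires the path plus a terminating NUL to fit.

```python
import os

# On Linux, sockaddr_un.sun_path is a char[108]; gpg-agent effectively
# needs the socket path plus a terminating NUL to fit in that buffer.
SUN_PATH_MAX = 108  # illustrative constant; see <sys/un.h> on Linux

def socket_path_fits(path: str) -> bool:
    """Return True if `path` (plus a NUL byte) fits in sun_path."""
    return len(path.encode()) + 1 <= SUN_PATH_MAX

# The GNUPGHOME used by this Jenkins build (88 bytes long):
gpg_home = ("/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build"
            "/sourcedir/patchprocess/.gpg")

# The plain agent socket and the ".extra" variant fit, but the longer
# ".browser" variant does not -- matching the log above, where only
# S.gpg-agent.browser is rejected as too long.
for name in ("S.gpg-agent", "S.gpg-agent.extra", "S.gpg-agent.browser"):
    print(name, socket_path_fits(os.path.join(gpg_home, name)))
# prints:
#   S.gpg-agent True
#   S.gpg-agent.extra True
#   S.gpg-agent.browser False
```

This is why the failure only shows up on build agents with deep workspace paths: the same socket names work fine under a short $HOME.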
[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123433#comment-17123433 ] Hadoop QA commented on HADOOP-17056:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 37s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 7m 4s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 6m 40s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 4s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 27s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 56s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 44s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17056 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004556/2040.03.patch |
| Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit |
| uname | Linux 6b81de9a4d8e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 9fe4c37c25b |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/artifact/out/branch-mvnsite-root.txt |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/artifact/out/patch-mvnsite-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/testReport/ |
| Max. process+thread count | 310 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
This message was automatically generated.