[jira] [Commented] (HADOOP-17908) Add missing RELEASENOTES and CHANGES to upstream
[ https://issues.apache.org/jira/browse/HADOOP-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17414740#comment-17414740 ]

Masatake Iwasaki commented on HADOOP-17908:
-------------------------------------------

The procedure is described in the [HowToRelease wiki|https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease#HowToRelease-Publishing]:

{quote}
5. Update upstream branches to make them aware of this new release:
a. Copy and commit the CHANGES.md and RELEASENOTES.md:
{quote}

For example, for 2.10.1 I did this on branch-2.10 but missed trunk.

> Add missing RELEASENOTES and CHANGES to upstream
> ------------------------------------------------
>
>                 Key: HADOOP-17908
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17908
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Masatake Iwasaki
>            Assignee: Masatake Iwasaki
>            Priority: Minor
>
> RELEASENOTES and CHANGES of 2.10.1, 3.1.4 and 3.3.0 are missing in trunk.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17908) Add missing RELEASENOTES and CHANGES to upstream
Masatake Iwasaki created HADOOP-17908:
-----------------------------------------

             Summary: Add missing RELEASENOTES and CHANGES to upstream
                 Key: HADOOP-17908
                 URL: https://issues.apache.org/jira/browse/HADOOP-17908
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Masatake Iwasaki
            Assignee: Masatake Iwasaki

RELEASENOTES and CHANGES of 2.10.1, 3.1.4 and 3.3.0 are missing in trunk.
[GitHub] [hadoop] jojochuang merged pull request #3359: HDFS-16198. Short circuit read leaks Slot objects when InvalidToken exception is thrown
jojochuang merged pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] xinglin commented on a change in pull request #3417: HDFS-16220.[FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC.
xinglin commented on a change in pull request #3417:
URL: https://github.com/apache/hadoop/pull/3417#discussion_r707916234

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
@@ -36,22 +38,22 @@
  * and INode.
  */
 public class INodeMap {
-  static final int NAMESPACE_KEY_DEPTH = 2;
-  static final int NUM_RANGES_STATIC = 256; // power of 2
+  private static int namespaceKeyDepth;
+  private static long numRangesStatic;

Review comment: Also, if we want to support a namespaceKeyDepth other than 2, we probably need to modify the range keys we insert when we create new partitions. Instead of inserting range keys such as [0, 16385], [1, 16385], [2, 16385], I think we might need to insert range keys such as [0,0,16385], [1,0,16385], [2,0,16385] ... for a depth of 3, and [0,0,0,16385], [1,0,0,16385], [2,0,0,16385] ... for a depth of 4.
```
for (int p = 0; p < numRangesStatic; p++) {
  INodeDirectory key = new INodeDirectory(INodeId.ROOT_INODE_ID,
      "range key".getBytes(StandardCharsets.UTF_8), perm, 0);
  key.setParent(new INodeDirectory((long)p, null, perm, 0));
```
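The range-key shape the review comment describes can be sketched in isolation. This is a hypothetical helper, not INodeMap's actual API: it only illustrates that for a configurable depth d, each static partition's key becomes a vector of length d with the partition index first, zero padding in the middle, and the root inode id (16385 in the examples) last.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the idea in the review comment: generalize the
// fixed-depth-2 range keys [p, rootId] to [p, 0, ..., 0, rootId] for an
// arbitrary namespace key depth. Names and shapes are assumptions for
// illustration, not the real INodeMap implementation.
public class RangeKeySketch {
  static List<long[]> buildRangeKeys(int depth, int numRanges, long rootId) {
    List<long[]> keys = new ArrayList<>();
    for (int p = 0; p < numRanges; p++) {
      long[] key = new long[depth];   // one slot per level of the key depth
      key[0] = p;                     // leading partition index
      // slots 1 .. depth-2 stay 0 when depth > 2
      key[depth - 1] = rootId;        // trailing root inode id (e.g. 16385)
      keys.add(key);
    }
    return keys;
  }
}
```

For depth 3 this yields [0,0,16385], [1,0,16385], [2,0,16385], ..., matching the pattern proposed above.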
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #3417: HDFS-16220.[FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC.
jianghuazhu commented on a change in pull request #3417:
URL: https://github.com/apache/hadoop/pull/3417#discussion_r707911659

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
@@ -36,22 +38,22 @@
  * and INode.
  */
 public class INodeMap {
-  static final int NAMESPACE_KEY_DEPTH = 2;
-  static final int NUM_RANGES_STATIC = 256; // power of 2
+  private static int namespaceKeyDepth;
+  private static long numRangesStatic;

Review comment: Thanks @xinglin for the comment. I will update it later.
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #3417: HDFS-16220.[FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC.
jianghuazhu commented on a change in pull request #3417:
URL: https://github.com/apache/hadoop/pull/3417#discussion_r707911318

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
@@ -36,22 +38,22 @@
  * and INode.
  */
 public class INodeMap {
-  static final int NAMESPACE_KEY_DEPTH = 2;
-  static final int NUM_RANGES_STATIC = 256; // power of 2
+  private static int namespaceKeyDepth;
+  private static long numRangesStatic;

Review comment: Thanks @cxorm for the comment. I will update it later.
[GitHub] [hadoop] Hexiaoqiao commented on pull request #3200: HDFS-15160. branch-3.2. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock.
Hexiaoqiao commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-918793612

@brahmareddybattula Good point. Will revert and check in per commit. Thanks. As for the version, I forgot we have already created branch-3.2.3.
[GitHub] [hadoop] xinglin commented on a change in pull request #3417: HDFS-16220.[FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC.
xinglin commented on a change in pull request #3417:
URL: https://github.com/apache/hadoop/pull/3417#discussion_r707900522

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
@@ -36,22 +38,22 @@
  * and INode.
  */
 public class INodeMap {
-  static final int NAMESPACE_KEY_DEPTH = 2;
-  static final int NUM_RANGES_STATIC = 256; // power of 2
+  private static int namespaceKeyDepth;
+  private static long numRangesStatic;

Review comment: This is definitely one way to do it, but is there a way we can make numSpaceKeyDepth/numRangesStatic a non-static variable?

numspaceKeyDepth -> numSpaceKeyDepth
numRangesStatic -> numRanges
[jira] [Work logged] (HADOOP-17907) FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
[ https://issues.apache.org/jira/browse/HADOOP-17907?focusedWorklogId=650335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650335 ]

ASF GitHub Bot logged work on HADOOP-17907:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Sep/21 04:08
            Start Date: 14/Sep/21 04:08
    Worklog Time Spent: 10m

Work Description: ayushtkn commented on a change in pull request #3431:
URL: https://github.com/apache/hadoop/pull/3431#discussion_r707892932

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -182,7 +183,7 @@ public static boolean fullyDelete(final File dir, boolean tryGrantPermissions) {
       return true;
     }
     // handle nonempty directory deletion
-    if (!fullyDeleteContents(dir, tryGrantPermissions)) {
+    if (!FileUtils.isSymlink(dir) && !fullyDeleteContents(dir, tryGrantPermissions)) {

Review comment: Could have tried ``dir.isDirectory()`` rather than ``!FileUtils.isSymlink(dir)``? For files, the ``fullyDeleteContents(dir, tryGrantPermissions)`` call is irrelevant: it will just go and do a ``listFiles`` and return true, I think?

Issue Time Tracking
-------------------

    Worklog Id: (was: 650335)
    Time Spent: 20m  (was: 10m)

> FileUtil#fullyDelete deletes contents of sym-linked directory when symlink
> cannot be deleted because of local fs fault
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-17907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17907
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>            Reporter: Weihao Zheng
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the
> contents of the sym-linked directory when we pass a symlink parameter.
> Currently we try to delete the resource first by calling deleteImpl, and if
> deleteImpl is failed, we regard it as non-empty directory and remove all its
> contents and then itself. This logic behaves wrong when local file system
> cannot delete symlink to a directory because of faulty disk, local system's
> error, etc. When we cannot delete it in the first time, hadoop will try to
> remove all the contents of the directory it pointed to and leave an empty dir.
> So, we should add an isSymlink checking before we call fullyDeleteContents to
> prevent such behavior.
[GitHub] [hadoop] ayushtkn commented on a change in pull request #3431: HADOOP-17907. FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs f
ayushtkn commented on a change in pull request #3431:
URL: https://github.com/apache/hadoop/pull/3431#discussion_r707892932

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -182,7 +183,7 @@ public static boolean fullyDelete(final File dir, boolean tryGrantPermissions) {
       return true;
     }
     // handle nonempty directory deletion
-    if (!fullyDeleteContents(dir, tryGrantPermissions)) {
+    if (!FileUtils.isSymlink(dir) && !fullyDeleteContents(dir, tryGrantPermissions)) {

Review comment: Could have tried ``dir.isDirectory()`` rather than ``!FileUtils.isSymlink(dir)``? For files, the ``fullyDeleteContents(dir, tryGrantPermissions)`` call is irrelevant: it will just go and do a ``listFiles`` and return true, I think?
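The two guards discussed in this review are not equivalent for a symlink that points at a directory, which is exactly the case the patch targets. A small standalone probe shows the difference; it uses `java.nio.file` rather than commons-io's `FileUtils.isSymlink` (an assumption for self-containment), and requires a platform where symlink creation is permitted:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Probe how the two candidate checks classify a symlink pointing at a
// directory: File#isDirectory follows the link (reports true), while
// Files.isSymbolicLink inspects the link itself (also true). A guard based
// only on isDirectory() would therefore still recurse into a symlinked
// directory, whereas the isSymlink-style guard would not.
public class SymlinkProbe {
  public static void main(String[] args) throws IOException {
    Path target = Files.createTempDirectory("target");
    Path link = target.resolveSibling("link-" + System.nanoTime());
    Files.createSymbolicLink(link, target);

    File f = link.toFile();
    System.out.println("isDirectory:    " + f.isDirectory());            // follows the link
    System.out.println("isSymbolicLink: " + Files.isSymbolicLink(link)); // inspects the link

    Files.delete(link);    // removes only the link, not the target
    Files.delete(target);
  }
}
```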
[jira] [Work logged] (HADOOP-17907) FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
[ https://issues.apache.org/jira/browse/HADOOP-17907?focusedWorklogId=650330&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650330 ]

ASF GitHub Bot logged work on HADOOP-17907:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Sep/21 03:15
            Start Date: 14/Sep/21 03:15
    Worklog Time Spent: 10m

Work Description: FrankinRUC opened a new pull request #3431:
URL: https://github.com/apache/hadoop/pull/3431

### Description of PR

As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the contents of the sym-linked directory when we pass a symlink parameter. Currently we try to delete the resource first by calling deleteImpl, and if deleteImpl is failed, we regard it as non-empty directory and remove all its contents and then itself. This logic behaves wrong when local file system cannot delete symlink to a directory because of faulty disk, local system's error, etc. When we cannot delete it in the first time, hadoop will try to remove all the contents of the directory it pointed to and leave an empty dir. So, we should add an isSymlink checking before we call fullyDeleteContents to prevent such behavior.

### How was this patch tested?

### For code changes:

- [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

Issue Time Tracking
-------------------

    Worklog Id: (was: 650330)
    Remaining Estimate: 0h
    Time Spent: 10m

> FileUtil#fullyDelete deletes contents of sym-linked directory when symlink
> cannot be deleted because of local fs fault
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-17907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17907
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>            Reporter: Weihao Zheng
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the
> contents of the sym-linked directory when we pass a symlink parameter.
> Currently we try to delete the resource first by calling deleteImpl, and if
> deleteImpl is failed, we regard it as non-empty directory and remove all its
> contents and then itself. This logic behaves wrong when local file system
> cannot delete symlink to a directory because of faulty disk, local system's
> error, etc. When we cannot delete it in the first time, hadoop will try to
> remove all the contents of the directory it pointed to and leave an empty dir.
> So, we should add an isSymlink checking before we call fullyDeleteContents to
> prevent such behavior.
[jira] [Updated] (HADOOP-17907) FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
[ https://issues.apache.org/jira/browse/HADOOP-17907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-17907:
------------------------------------
    Labels: pull-request-available  (was: )

> FileUtil#fullyDelete deletes contents of sym-linked directory when symlink
> cannot be deleted because of local fs fault
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-17907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17907
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>            Reporter: Weihao Zheng
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the
> contents of the sym-linked directory when we pass a symlink parameter.
> Currently we try to delete the resource first by calling deleteImpl, and if
> deleteImpl is failed, we regard it as non-empty directory and remove all its
> contents and then itself. This logic behaves wrong when local file system
> cannot delete symlink to a directory because of faulty disk, local system's
> error, etc. When we cannot delete it in the first time, hadoop will try to
> remove all the contents of the directory it pointed to and leave an empty dir.
> So, we should add an isSymlink checking before we call fullyDeleteContents to
> prevent such behavior.
[GitHub] [hadoop] FrankinRUC opened a new pull request #3431: HADOOP-17907. FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
FrankinRUC opened a new pull request #3431:
URL: https://github.com/apache/hadoop/pull/3431

### Description of PR

As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the contents of the sym-linked directory when we pass a symlink parameter. Currently we try to delete the resource first by calling deleteImpl, and if deleteImpl is failed, we regard it as non-empty directory and remove all its contents and then itself. This logic behaves wrong when local file system cannot delete symlink to a directory because of faulty disk, local system's error, etc. When we cannot delete it in the first time, hadoop will try to remove all the contents of the directory it pointed to and leave an empty dir. So, we should add an isSymlink checking before we call fullyDeleteContents to prevent such behavior.

### How was this patch tested?

### For code changes:

- [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Created] (HADOOP-17907) FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
Weihao Zheng created HADOOP-17907:
-------------------------------------

             Summary: FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault
                 Key: HADOOP-17907
                 URL: https://issues.apache.org/jira/browse/HADOOP-17907
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs
            Reporter: Weihao Zheng

As discussed in HADOOP-6536, FileUtil#fullyDelete should not delete the contents of the sym-linked directory when we pass a symlink parameter. Currently we try to delete the resource first by calling deleteImpl, and if deleteImpl is failed, we regard it as non-empty directory and remove all its contents and then itself. This logic behaves wrong when local file system cannot delete symlink to a directory because of faulty disk, local system's error, etc. When we cannot delete it in the first time, hadoop will try to remove all the contents of the directory it pointed to and leave an empty dir. So, we should add an isSymlink checking before we call fullyDeleteContents to prevent such behavior.
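The delete flow the issue describes can be sketched in miniature. This is an illustrative stand-in for Hadoop's `FileUtil#fullyDelete`/`fullyDeleteContents`, not the real implementation (it uses `java.nio.file.Files.isSymbolicLink` instead of commons-io, and omits the `tryGrantPermissions` handling): the point is only that the recursion into a "non-empty directory" must be skipped when the path is a symlink, so a failed link deletion cannot wipe the link target's contents.

```java
import java.io.File;
import java.nio.file.Files;

// Sketch of the proposed guard: if the first delete() fails, only treat the
// path as a non-empty directory (and recurse) when it is NOT a symlink.
// Otherwise a local-fs fault while deleting the link would cause the
// recursion to empty out the directory the link points to.
public class SafeDelete {
  static boolean fullyDelete(File dir) {
    if (dir.delete()) {
      return true;               // plain file, empty dir, or symlink: done
    }
    // Deletion failed: guard against symlinks before recursing.
    if (!Files.isSymbolicLink(dir.toPath()) && !fullyDeleteContents(dir)) {
      return false;
    }
    return dir.delete();         // retry after contents are gone
  }

  static boolean fullyDeleteContents(File dir) {
    boolean ok = true;
    File[] children = dir.listFiles();
    if (children != null) {
      for (File c : children) {
        ok = fullyDelete(c) && ok;   // recurse into real children only
      }
    }
    return ok;
  }
}
```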
[GitHub] [hadoop] hadoop-yetus commented on pull request #3429: HDFS-16227. De-flake TestMover#testMoverWithStripedFile
hadoop-yetus commented on pull request #3429:
URL: https://github.com/apache/hadoop/pull/3429#issuecomment-91879

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 32s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 55s | | trunk passed |
| +1 :green_heart: | compile | 1m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 59s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 36s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 27s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 55s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 fixed = 33 total (was 34) |
| +1 :green_heart: | mvnsite | 1m 24s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 47s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 59s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 385m 6s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. |
| | | 482m 58s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3429 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 4588aec4a1f7 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8c520fca41985956cc9b30483ed78cbe7a38a0a9 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/4/testReport/ |
| Max. process+thread count | 1954 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3429: HDFS-16227. De-flake TestMover#testMoverWithStripedFile
hadoop-yetus commented on pull request #3429:
URL: https://github.com/apache/hadoop/pull/3429#issuecomment-918733126

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 35s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 27s | | trunk passed |
| +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 37s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 1m 29s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 29s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 fixed = 33 total (was 34) |
| +1 :green_heart: | mvnsite | 1m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 46s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 34s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 384m 18s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. |
| | | 482m 34s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3429 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 1eafcc4d8f45 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8c520fca41985956cc9b30483ed78cbe7a38a0a9 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/3/testReport/ |
| Max. process+thread count | 2019 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu commented on a change in pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#discussion_r707843194

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
@@ -230,6 +230,18 @@ public void testCheckSafeMode8() throws Exception {
     assertEquals(BMSafeModeStatus.OFF, getSafeModeStatus());
   }

+  @Test(timeout = 2)
+  public void testCheckSafeMode9() throws Exception {
+    Configuration conf = new HdfsConfiguration();
+    conf.setLong(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY, 3000);
+    GenericTestUtils.LogCapturer auditLog =

Review comment: Thanks @ayushtkn for the reminder. This is my oversight and I will update it later.
[GitHub] [hadoop] tomscut commented on a change in pull request #3366: HDFS-16203. Discover datanodes with unbalanced block pool usage by th…
tomscut commented on a change in pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#discussion_r707841118

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReport.java ##
@@ -48,6 +49,8 @@ public StorageReport(DatanodeStorage storage, boolean failed, long capacity,
     this.nonDfsUsed = nonDfsUsed;
     this.remaining = remaining;
     this.blockPoolUsed = bpUsed;
+    this.blockPoolUsagePercent = capacity == 0 ? 0.0f :

Review comment: Thanks @tasanuma for your review. This can prevent some anomalies. I will update it soon.
[jira] [Work logged] (HADOOP-17880) build hadoop 2.10.0 with docker machine failed
[ https://issues.apache.org/jira/browse/HADOOP-17880?focusedWorklogId=650312=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650312 ] ASF GitHub Bot logged work on HADOOP-17880: --- Author: ASF GitHub Bot Created on: 14/Sep/21 01:25 Start Date: 14/Sep/21 01:25 Worklog Time Spent: 10m

Work Description: ZhendongBai commented on a change in pull request #3349: URL: https://github.com/apache/hadoop/pull/3349#discussion_r705917770

## File path: dev-support/docker/Dockerfile ##
@@ -18,234 +17,80 @@
 # Dockerfile for installing the necessary dependencies for building Hadoop.
 # See BUILDING.txt.
-FROM ubuntu:xenial
+FROM centos:7

Review comment: @GauthamBanasandra ok, thanks a lot. Besides the jdk7-not-found problem, python pylint is not installed successfully, and the python dependencies have problems. So I decided to give up fixing the Ubuntu issues and to create a separate Dockerfile for CentOS later.

Issue Time Tracking --- Worklog Id: (was: 650312) Time Spent: 2h 50m (was: 2h 40m)

> build hadoop 2.10.0 with docker machine failed
> --
>
> Key: HADOOP-17880
> URL: https://issues.apache.org/jira/browse/HADOOP-17880
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.10.0
> Environment: mac os x86_64
> Reporter: baizhendong
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> 1. Currently, we build Hadoop 2.10.0 with docker machine and must install VirtualBox, while Hadoop 3.x builds with Docker only.
> 2. Besides this, the Docker image dependencies are out of date, and some of them are unavailable, for example, jdk7.
> 3. Building Hadoop 2.10.0 with the Hadoop 3.x build script without modification does not work, because the protocol buffer version is not 2.5.0 and the native build fails.

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17880) build hadoop 2.10.0 with docker machine failed
[ https://issues.apache.org/jira/browse/HADOOP-17880?focusedWorklogId=650311=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650311 ] ASF GitHub Bot logged work on HADOOP-17880: --- Author: ASF GitHub Bot Created on: 14/Sep/21 01:23 Start Date: 14/Sep/21 01:23 Worklog Time Spent: 10m

Work Description: ZhendongBai commented on a change in pull request #3349: URL: https://github.com/apache/hadoop/pull/3349#discussion_r707835411

## File path: dev-support/docker/Dockerfile_centos7 ##
@@ -0,0 +1,96 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment: @GauthamBanasandra I renamed Dockerfile_centos7 to Dockerfile_centos_7 to keep it consistent with the filename in trunk. The logs from `mvn clean package -Dhttps.protocols=TLSv1.2 -DskipTests -Pnative,dist -Drequire.fuse -Drequire.openssl -Drequire.snappy -Drequire.valgrind -Drequire.zstd -Drequire.test.libhadoop -Pyarn-ui -Dtar -Dmaven.javadoc.skip=true > build.log 2>&1` are here: [build.log](https://github.com/apache/hadoop/files/7158247/build.log). Because some javadocs are illegal and the javadoc check failed, I added `-Dmaven.javadoc.skip=true` to the build command. Please review again, thanks.

Issue Time Tracking --- Worklog Id: (was: 650311) Time Spent: 2h 40m (was: 2.5h)

> build hadoop 2.10.0 with docker machine failed
> --
>
> Key: HADOOP-17880
> URL: https://issues.apache.org/jira/browse/HADOOP-17880
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.10.0
> Environment: mac os x86_64
> Reporter: baizhendong
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
> 1. Currently, we build Hadoop 2.10.0 with docker machine and must install VirtualBox, while Hadoop 3.x builds with Docker only.
> 2. Besides this, the Docker image dependencies are out of date, and some of them are unavailable, for example, jdk7.
> 3. Building Hadoop 2.10.0 with the Hadoop 3.x build script without modification does not work, because the protocol buffer version is not 2.5.0 and the native build fails.
[GitHub] [hadoop] tasanuma commented on a change in pull request #3366: HDFS-16203. Discover datanodes with unbalanced block pool usage by th…
tasanuma commented on a change in pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#discussion_r707829233

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReport.java ##
@@ -48,6 +49,8 @@ public StorageReport(DatanodeStorage storage, boolean failed, long capacity,
     this.nonDfsUsed = nonDfsUsed;
     this.remaining = remaining;
     this.blockPoolUsed = bpUsed;
+    this.blockPoolUsagePercent = capacity == 0 ? 0.0f :

Review comment: If I remember right, `capacity` can be 0.
```suggestion
    this.blockPoolUsagePercent = capacity <= 0 ? 0.0f :
```
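The guard in the suggestion above amounts to a one-line helper. Here is a minimal sketch of the idea; the standalone class and the `usagePercent` method name are illustrative only, not the actual `StorageReport` API:

```java
public class UsagePercent {
    // Mirrors the suggested guard: treat zero or negative capacity as 0% used,
    // so a storage that reports no (or unknown) capacity cannot produce
    // Infinity, NaN, or a negative percentage from the division.
    static float usagePercent(long blockPoolUsed, long capacity) {
        return capacity <= 0 ? 0.0f : blockPoolUsed * 100.0f / capacity;
    }

    public static void main(String[] args) {
        System.out.println(usagePercent(50, 200));  // 25.0
        System.out.println(usagePercent(10, 0));    // 0.0 instead of Infinity
        System.out.println(usagePercent(10, -1));   // 0.0 instead of a negative value
    }
}
```

With `capacity == 0` as the only guard, a negative capacity would slip through and yield a negative percentage, which is why `<=` is the safer comparison.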
[GitHub] [hadoop] hadoop-yetus commented on pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
hadoop-yetus commented on pull request #3430: URL: https://github.com/apache/hadoop/pull/3430#issuecomment-918689588

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 22s | | trunk passed |
| +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 58s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 58s | | trunk passed |
| +1 :green_heart: | shadedclient | 17m 38s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 51s | | the patch passed |
| +1 :green_heart: | compile | 0m 57s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 57s | | the patch passed |
| +1 :green_heart: | compile | 0m 45s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 45s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/2/artifact/out/blanks-eol.txt) | The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 38s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 74 unchanged - 4 fixed = 81 total (was 78) |
| +1 :green_heart: | mvnsite | 0m 50s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 35s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 1m 59s | [/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/2/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 17m 28s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 98m 47s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | 182m 3s | | |

| Reason | Tests |
|---:|:--|
| SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Increment of volatile field org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueueUsageTracker.numContainers in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueueUsageTracker.decreaseNumContainers() At AbstractCSQueueUsageTracker.java:in
[GitHub] [hadoop] hadoop-yetus commented on pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
hadoop-yetus commented on pull request #3430: URL: https://github.com/apache/hadoop/pull/3430#issuecomment-918689417

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 49s | | trunk passed |
| +1 :green_heart: | compile | 1m 10s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 0s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 52s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 7s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 6s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 51s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 57s | | the patch passed |
| +1 :green_heart: | compile | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 0s | | the patch passed |
| +1 :green_heart: | compile | 0m 51s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 51s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/3/artifact/out/blanks-eol.txt) | The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 40s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 74 unchanged - 4 fixed = 81 total (was 78) |
| +1 :green_heart: | mvnsite | 0m 54s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 39s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 2m 10s | [/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/3/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 15m 39s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 98m 25s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 179m 13s | | |

| Reason | Tests |
|---:|:--|
| SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Increment of volatile field org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueueUsageTracker.numContainers in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueueUsageTracker.decreaseNumContainers() At AbstractCSQueueUsageTracker.java:in
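The SpotBugs finding above about incrementing a volatile field points at a general Java pitfall: `volatile` guarantees visibility, not atomicity, so `numContainers++` is a read-modify-write that can lose updates under contention. A minimal illustration of the difference against `AtomicInteger` (the class and field names here are generic, not the actual resourcemanager code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileIncrementDemo {
    static volatile int volatileCount = 0;        // ++ is not atomic on a volatile field
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;                  // two threads can read the same value: lost updates
                atomicCount.incrementAndGet();    // compare-and-swap loop: always exact
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("volatile: " + volatileCount + " (often less than 200000)");
        System.out.println("atomic:   " + atomicCount.get());  // exactly 200000
    }
}
```

Replacing the volatile counter with an `AtomicInteger` (or guarding the increment with a lock) is the usual way to silence this particular SpotBugs warning.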
[jira] [Work logged] (HADOOP-17895) RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-17895?focusedWorklogId=650279=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650279 ] ASF GitHub Bot logged work on HADOOP-17895: --- Author: ASF GitHub Bot Created on: 14/Sep/21 00:10 Start Date: 14/Sep/21 00:10 Worklog Time Spent: 10m

Work Description: cnauroth commented on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918677709

@majdyz , thanks for refining the test case. Thanks also to @steveloughran and @ayushtkn for their testing. I can see the repro now. Interestingly, even though it repros on branch-3.3, it does not repro on branch-3.2. I suspect that's not because branch-3.2 has correct Unicode handling, but rather that both the `mkdir` and `chmod` steps of this logic are doing the wrong thing, and they are at least in agreement on doing the same wrong thing. It looks like the end result on branch-3.2 is a successful call, but the new directory doesn't have the correct name.

Here is the reason it's happening. The call performs the `mkdir` using Java classes and then performs the `chmod` through native code. I traced through the Java and native layers. I confirmed that the string passes correctly through all Java frames, and then inside the native layer it gets garbled. This is because the string data is fetched through the JNI `GetStringUTFChars` function, which is documented to return the JVM's internal "modified UTF-8" representation. The `chmod` call would work fine with true UTF-8, but there are edge cases where modified UTF-8 differs from the standard and does not work correctly. The JavaDocs of [`DataInput`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/DataInput.html) discuss the differences between standard UTF-8 and modified UTF-8. This is the key point relevant to this bug:

> Only the 1-byte, 2-byte, and 3-byte formats are used.

The character in your test case is an emoji that has a 4-byte standard UTF-8 representation, but modified UTF-8 represents it as a surrogate pair, each surrogate encoded in 3 bytes. That encoding won't be compatible with the C APIs that we're calling. This problem is likely not unique to the `chmod` call; we use `GetStringUTFChars` in multiple places. Interestingly, it's possible that this bug doesn't repro at all on Windows, where the JNI code path uses `GetStringChars` instead to fetch the UTF-16 representation and convert to Windows wide-character string types. I no longer have easy access to a Windows test environment to confirm, though.

Now what might we do about this? Some ideas:

1. Explore a possible migration from native `chmod` to the new Java NIO File APIs, e.g. [`Files#setPosixFilePermissions`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/file/Files.html#setPosixFilePermissions(java.nio.file.Path,java.util.Set)). These native code paths were introduced before the JDK introduced APIs for permission changes.
2. Convert the code to use `GetStringChars` to fetch the UTF-16 bytes and pass them back to the JDK encoding conversion methods. That would give us a way to get to true UTF-8. This introduces a performance risk from extra buffer copying and extra transitions over the Java-JNI boundary. It's also probably a more confusing call flow.
3. Use `GetStringChars` to get UTF-16, but do the conversion to UTF-8 directly within the JNI layer using something like [`iconv`](https://linux.die.net/man/3/iconv). That would avoid some of the weirdness and performance penalties of transitioning back over the Java boundary.
4. Change the code to convert to a UTF-8 `byte[]` at the Java layer and pass that as-is to JNI. (Stay away from `jstring` entirely.) This would re-raise the debate of whether it's considered a backward-incompatible change if the interface between the Java layer and the JNI layer changes.

It's probably natural for everyone to ask "isn't there some simpler way to get true UTF-8 out of a `jstring`?" I'm pretty sure there is no convenient function for that, but I'd love it if someone proved me wrong. All of the solutions I can think of are fairly large in scope. Then there is testing effort too, of course.

Issue Time Tracking --- Worklog Id: (was: 650279) Time Spent: 2h (was: 1h 50m)

> RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3
>
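The standard-vs-modified UTF-8 difference described above is easy to see from pure Java, because `DataOutput.writeUTF` emits the same modified UTF-8 that `GetStringUTFChars` returns. A small sketch (the class and method names are illustrative only):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class ModifiedUtf8Demo {

    // Length of the modified UTF-8 encoding of s, as produced by
    // DataOutput.writeUTF (minus its 2-byte length prefix).
    static int modifiedUtf8PayloadLength(String s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new DataOutputStream(bos).writeUTF(s);
            return bos.size() - 2;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // U+1F600 (grinning face) lies outside the BMP, so Java stores it
        // as a surrogate pair of two chars.
        String emoji = new String(Character.toChars(0x1F600));

        // Standard UTF-8: one 4-byte sequence.
        System.out.println(emoji.getBytes(StandardCharsets.UTF_8).length);  // 4

        // Modified UTF-8: each surrogate encoded separately as 3 bytes.
        System.out.println(modifiedUtf8PayloadLength(emoji));               // 6
    }
}
```

The 6-byte surrogate encoding is exactly what a C-level `chmod` (which expects standard UTF-8 from the filesystem) cannot interpret, which matches the garbling observed in the native layer.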
[GitHub] [hadoop] tomscut commented on pull request #3428: HDFS-16225. Fix typo for FederationTestUtils
tomscut commented on pull request #3428: URL: https://github.com/apache/hadoop/pull/3428#issuecomment-918674957 Thanks @virajjasani @ayushtkn @goiri for your review.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
hadoop-yetus commented on pull request #3430: URL: https://github.com/apache/hadoop/pull/3430#issuecomment-918663066

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 58s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 20s | | trunk passed |
| +1 :green_heart: | compile | 21m 15s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 49s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 28s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 39s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 30s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. |
| -1 :x: | compile | 8m 6s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | javac | 8m 6s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | compile | 7m 13s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. |
| -1 :x: | javac | 7m 13s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/blanks-eol.txt) | The patch has 5 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 3m 25s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 10 new + 593 unchanged - 7 fixed = 603 total (was 600) |
| -1 :x: | mvnsite | 0m 34s | [/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3430/1/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 0s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 0m 34s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3429: HDFS-16227. De-flake TestMover#testMoverWithStripedFile
hadoop-yetus commented on pull request #3429: URL: https://github.com/apache/hadoop/pull/3429#issuecomment-918654685 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 5s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 26s | | trunk passed | | +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 12s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 19s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 51s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 fixed = 33 total (was 34) | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 10s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 230m 7s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. | | | | 314m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3429 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ca28776e66af 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8c520fca41985956cc9b30483ed78cbe7a38a0a9 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/2/testReport/ | | Max. process+thread count | 3258 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3427: HDFS-10648. Expose Balancer metrics through Metrics2
hadoop-yetus commented on pull request #3427: URL: https://github.com/apache/hadoop/pull/3427#issuecomment-918643249 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 46s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 14s | | trunk passed | | +1 :green_heart: | compile | 1m 30s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 27s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 40s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 41s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 1m 2s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14) | | +1 :green_heart: | mvnsite | 1m 26s | | the patch passed | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 47s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 238m 15s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3427/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. | | | | 332m 0s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.balancer.TestBalancerService | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3427/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3427 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 3ca53722f2b0 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d030119bf26f57b09e77b722cc4f2b377e44f60f | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3427/2/testReport/ | | Max. process+thread count | 3643 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3427/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT
[GitHub] [hadoop] xkrogen commented on a change in pull request #3317: HDFS-16181. [SBN Read] Fix metric of RpcRequestCacheMissAmount can't display when tailEditLog form JN
xkrogen commented on a change in pull request #3317: URL: https://github.com/apache/hadoop/pull/3317#discussion_r707766429 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java ## @@ -52,11 +52,7 @@ MutableCounterLong bytesServedViaRpc; @Metric - MutableStat rpcRequestCacheMissAmount = new MutableStat( - "RpcRequestCacheMissAmount", "Number of RPC requests unable to be " + - "served due to lack of availability in cache, and how many " + - "transactions away the request was from being in the cache.", - "Misses", "Txns"); + MutableStat rpcRequestCacheMissAmount; Review comment: If we explicitly instantiate the metric via `registry.newStat()`, then we can remove the `@Metric` annotation. It's only necessary for metrics2 to automatically create the metric for us. (It's been a while since I've looked at Hadoop or metrics2, so let me know if I'm misremembering here.)
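xkrogen's point — that an `@Metric`-style annotation exists only so the metrics framework can auto-instantiate the field, making it redundant once the metric is created explicitly through the registry — can be illustrated with a small self-contained sketch. Note this is not Hadoop's actual metrics2 code: `Registry`, `MutableStat`, and the `@Metric` annotation below are simplified stand-ins written for this example.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for a mutable metric such as metrics2's MutableStat.
class MutableStat {
    final String name;
    MutableStat(String name) { this.name = name; }
}

// Stand-in for the @Metric annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Metric {}

// Stand-in registry: metrics can be created explicitly via newStat(),
// or auto-created by scanning a source object for annotated null fields.
class Registry {
    final Map<String, MutableStat> stats = new LinkedHashMap<>();

    MutableStat newStat(String name) {
        MutableStat s = new MutableStat(name);
        stats.put(name, s);
        return s;
    }

    // Annotation-driven path: only fields still left null are auto-created,
    // so an explicitly instantiated field no longer needs the annotation.
    void scan(Object source) throws IllegalAccessException {
        for (Field f : source.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Metric.class)
                    && f.getType() == MutableStat.class) {
                f.setAccessible(true);
                if (f.get(source) == null) {
                    f.set(source, newStat(f.getName()));
                }
            }
        }
    }
}

class JournalMetricsSketch {
    @Metric MutableStat rpcRequestCacheMissAmount;  // auto-created by scan()

    MutableStat bytesServed;  // created explicitly; no annotation needed

    JournalMetricsSketch(Registry r) {
        bytesServed = r.newStat("bytesServed");
    }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        Registry r = new Registry();
        JournalMetricsSketch m = new JournalMetricsSketch(r);
        r.scan(m);
        // Both registration paths end up in the registry.
        System.out.println(r.stats.keySet());
    }
}
```

Either path lands the metric in the registry; the annotation-driven `scan()` only fills in fields that are still `null`, which is the reason the review suggests dropping the annotation once `registry.newStat()` is called explicitly.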
[GitHub] [hadoop] cnauroth commented on pull request #3340: HDFS-16187. SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
cnauroth commented on pull request #3340: URL: https://github.com/apache/hadoop/pull/3340#issuecomment-918574409 @bshashikant , thank you for the contribution and for incorporating the code review feedback.
[GitHub] [hadoop] szilard-nemeth opened a new pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
szilard-nemeth opened a new pull request #3430: URL: https://github.com/apache/hadoop/pull/3430 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] brahmareddybattula commented on pull request #3200: HDFS-15160. branch-3.2. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlo
brahmareddybattula commented on pull request #3200: URL: https://github.com/apache/hadoop/pull/3200#issuecomment-918521589 @Hexiaoqiao , there are two issues I want to bring up with this merge. Can you please check? 1) It looks like the commits were squashed and merged. That isn't a good approach here, since these were 5 different commits for 5 JIRAs; in future, if we want to revert one commit (among these 5), we would have to revert all of them. 2) It's not merged to branch-3.2.3, but the JIRA is marked with the hadoop-3.2.3 version?
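The revert concern above can be demonstrated in a throwaway repository: commits that land separately can be reverted one at a time, while a single squashed commit can only be reverted wholesale. The repository, file names, and commit messages below are made up purely for illustration.

```shell
#!/bin/sh
# Demonstration: with separate commits, one fix can be reverted while the
# other stays intact. A squashed commit would force reverting both at once.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Two independent fixes land as two commits (hypothetical JIRA ids).
echo fix-A > a.txt; git add a.txt; git commit -qm "JIRA-1. fix A"
echo fix-B > b.txt; git add b.txt; git commit -qm "JIRA-2. fix B"

# Revert only the first fix; the second remains untouched.
git revert -n HEAD~1
git commit -qm "Revert JIRA-1"

test ! -e a.txt && test -e b.txt && echo "reverted JIRA-1 only"
```

Had the two fixes been squashed into one commit, `git revert` on that commit would remove both `a.txt` and `b.txt` — which is exactly the objection raised about squashing 5 JIRAs into one merge.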
[GitHub] [hadoop] brahmareddybattula commented on a change in pull request #3200: HDFS-15160. branch-3.2. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use dat
brahmareddybattula commented on a change in pull request #3200: URL: https://github.com/apache/hadoop/pull/3200#discussion_r707624080 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -201,16 +201,16 @@ public Block getStoredBlock(String bpid, long blkid) * The deepCopyReplica call doesn't use the datasetock since it will lead the * potential deadlock with the {@link FsVolumeList#addBlockPool} call. */ + @SuppressWarnings("unchecked") @Override public Set deepCopyReplica(String bpid) throws IOException { -Set replicas = null; +Set replicas; Review comment: Can you please raise a JIRA to track this?
[GitHub] [hadoop] hadoop-yetus commented on pull request #3366: HDFS-16203. Discover datanodes with unbalanced block pool usage by th…
hadoop-yetus commented on pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#issuecomment-918502823 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 12s | | trunk passed | | +1 :green_heart: | compile | 4m 52s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 14s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 6s | | trunk passed | | +1 :green_heart: | javadoc | 2m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 4s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 6m 48s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 34s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 37s | | the patch passed | | +1 :green_heart: | compile | 4m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 4m 46s | | the patch passed | | +1 :green_heart: | compile | 4m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 6s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/7/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 120 unchanged - 9 fixed = 121 total (was 129) | | +1 :green_heart: | mvnsite | 2m 43s | | the patch passed | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 42s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 6m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 20s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 240m 4s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 22m 26s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 384m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3366 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint | | uname | Linux 698a23a1a331 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0b102603c8fdd9a33bb6a4bb77beb8d9591b8fa7 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus commented on pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#issuecomment-918490144 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 27 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 59s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 0s | | trunk passed | | +1 :green_heart: | compile | 23m 12s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 20m 1s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 50s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 33s | | trunk passed | | +1 :green_heart: | javadoc | 2m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 7s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 32s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 17m 7s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 17m 29s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 11s | | the patch passed | | +1 :green_heart: | compile | 22m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 22m 16s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/36/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1913 unchanged - 0 fixed = 1915 total (was 1913) | | +1 :green_heart: | compile | 19m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 19m 24s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/36/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1789 unchanged - 0 fixed = 1791 total (was 1789) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 51s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/36/artifact/out/results-checkstyle-root.txt) | root: The patch generated 21 new + 8 unchanged - 0 fixed = 29 total (was 8) | | +1 :green_heart: | mvnsite | 3m 37s | | the patch passed | | +1 :green_heart: | xml | 0m 10s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 8s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 29s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 17m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 27s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 17m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 6m 4s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 2m 20s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. | | | | 230m 34s | | | | Subsystem |
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650177=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650177 ] ASF GitHub Bot logged work on HADOOP-17890: --- Author: ASF GitHub Bot Created on: 13/Sep/21 18:11 Start Date: 13/Sep/21 18:11 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918449188 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 42s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 51s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 79m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3381 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 42a6b218d54d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 41d43efe5a1c185076f57d43b7ee87c5dcc7d6d8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/testReport/ | | Max. process+thread count | 571 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/console | | versions | git=2.25.1
[GitHub] [hadoop] hadoop-yetus commented on pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918449188 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 42s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 51s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 79m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3381 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 42a6b218d54d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 41d43efe5a1c185076f57d43b7ee87c5dcc7d6d8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/testReport/ | | Max. process+thread count | 571 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this
[GitHub] [hadoop] hadoop-yetus commented on pull request #3429: HDFS-16227. De-flake TestMover#testMoverWithStripedFile
hadoop-yetus commented on pull request #3429: URL: https://github.com/apache/hadoop/pull/3429#issuecomment-918424437 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 7s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 33 unchanged - 1 fixed = 34 total (was 34) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 235m 18s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 319m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3429 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux efcab6abf628 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ebc48005d28567fa9e7938ee854eb2e0409eb350 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/1/testReport/ | | Max. process+thread count | 3245 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Work logged] (HADOOP-17895) RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-17895?focusedWorklogId=650159=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650159 ] ASF GitHub Bot logged work on HADOOP-17895: --- Author: ASF GitHub Bot Created on: 13/Sep/21 17:37 Start Date: 13/Sep/21 17:37 Worklog Time Spent: 10m Work Description: majdyz edited a comment on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918209856 @cnauroth @ayushtkn Thank you for trying to reproduce the issue on your end. I have now updated the test so that it also fails in the provided Docker image started with `./start-build-env.sh`. I was previously running on my local macOS machine and didn't use a randomized folder, which made the test pass on the second run. The CI is now failing as well. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650159) Time Spent: 1h 50m (was: 1h 40m) > RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3 > > > Key: HADOOP-17895 > URL: https://issues.apache.org/jira/browse/HADOOP-17895 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Affects Versions: 3.3.1 >Reporter: Zamil Majdy >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > *Bug description:* > The `fs.mkdirs` command for `RawLocalFileSystem` doesn't work in Hadoop 3 with > NativeIO enabled. > The failure happens during the native `chmod` call on the file > (the `mkdir` call itself works). 
> Stacktrace: > {{ENOENT: No such file or directory ENOENT: No such file or directory at > org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method) at > org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:382) at > org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:974) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:660) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:700) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)}} > > *To reproduce:* > * Add `fs.mkdirs` in RawLocalFileSystem with NativeIO enabled. > * Sample: [https://github.com/apache/hadoop/pull/3391] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
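To make the reported sequence concrete, here is a minimal, self-contained sketch of the mkdir-then-chmod order of operations from the stacktrace above. It uses plain `java.io.File` as a stand-in for `RawLocalFileSystem` and the NativeIO `chmod`, so it illustrates the call sequence only; it cannot exercise the native code path where the actual failure occurs.

```java
import java.io.File;

public class UnicodeMkdirSketch {
    public static void main(String[] args) {
        // Stand-in for RawLocalFileSystem.mkdirs: create a directory whose
        // name contains non-ASCII characters, then change its permissions.
        // In the reported bug the mkdir succeeds but the native chmod fails
        // with ENOENT.
        File dir = new File(System.getProperty("java.io.tmpdir"),
                "\u4e2d\u6587-dir-" + System.nanoTime());
        boolean created = dir.mkdirs();           // step 1: mkdir (works in the report)
        boolean chmodOk = dir.setReadable(true);  // step 2: permission change (fails natively in the report)
        System.out.println(created + " " + chmodOk);
        dir.delete();
    }
}
```

On a typical UTF-8 system both steps print `true`; per the report, reproducing the failure requires NativeIO's `chmod` path, which this JDK-only sketch does not reach.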
[GitHub] [hadoop] majdyz edited a comment on pull request #3391: HADOOP-17895. Add the test to reproduce the failure of `RawLocalFileSystem.mkdir` with unicode filename
majdyz edited a comment on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918209856 @cnauroth @ayushtkn Thank you for trying to reproduce the issue on your end. I have now updated the test so that it also fails in the provided Docker image started with `./start-build-env.sh`. I was previously running on my local macOS machine and didn't use a randomized folder, which made the test pass on the second run. The CI is now failing as well. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17900) Move ClusterStorageCapacityExceededException to Public from LimitedPrivate
[ https://issues.apache.org/jira/browse/HADOOP-17900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena resolved HADOOP-17900. --- Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > Move ClusterStorageCapacityExceededException to Public from LimitedPrivate > -- > > Key: HADOOP-17900 > URL: https://issues.apache.org/jira/browse/HADOOP-17900 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > As of now the exception is marked limited private: > {code:java} > @InterfaceAudience.LimitedPrivate({ "HDFS", "MapReduce", "Tez" }) > {code} > This doesn't allow other projects to use it. Rather than adding projects > individually, make it Public. > This exception can act as a fail-fast marker for different > operations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
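The before/after of the annotation change can be sketched as follows. The annotation types below are local stand-ins for `org.apache.hadoop.classification.InterfaceAudience`, declared here only so the example compiles on its own; the real change simply swaps the annotation on the existing exception class.

```java
// Local stand-ins for org.apache.hadoop.classification.InterfaceAudience
// annotations, declared here only so the sketch compiles on its own.
@interface LimitedPrivate { String[] value(); }
@interface Public { }

public class AudienceSketch {
    // Before HADOOP-17900: only the listed projects were expected to use it.
    @LimitedPrivate({ "HDFS", "MapReduce", "Tez" })
    static class OldAudienceException extends RuntimeException { }

    // After HADOOP-17900: public, so any downstream project can catch it
    // as a fail-fast marker.
    @Public
    static class NewAudienceException extends RuntimeException { }

    public static void main(String[] args) {
        try {
            throw new NewAudienceException();
        } catch (NewAudienceException e) {
            System.out.println("caught fail-fast marker");
        }
    }
}
```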
[jira] [Work logged] (HADOOP-17900) Move ClusterStorageCapacityExceededException to Public from LimitedPrivate
[ https://issues.apache.org/jira/browse/HADOOP-17900?focusedWorklogId=650148=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650148 ] ASF GitHub Bot logged work on HADOOP-17900: --- Author: ASF GitHub Bot Created on: 13/Sep/21 17:21 Start Date: 13/Sep/21 17:21 Worklog Time Spent: 10m Work Description: ayushtkn commented on pull request #3404: URL: https://github.com/apache/hadoop/pull/3404#issuecomment-918407279 Thanx @ferhui & @jojochuang for the review!!! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650148) Time Spent: 1.5h (was: 1h 20m) > Move ClusterStorageCapacityExceededException to Public from LimitedPrivate > -- > > Key: HADOOP-17900 > URL: https://issues.apache.org/jira/browse/HADOOP-17900 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > As of now the exception is marked limited private: > {code:java} > @InterfaceAudience.LimitedPrivate({ "HDFS", "MapReduce", "Tez" }) > {code} > This doesn't allow other projects to use it. Rather than adding projects > individually, make it Public. > This exception can act as a fail-fast marker for different > operations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17900) Move ClusterStorageCapacityExceededException to Public from LimitedPrivate
[ https://issues.apache.org/jira/browse/HADOOP-17900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17414416#comment-17414416 ] Ayush Saxena commented on HADOOP-17900: --- Committed to trunk. > Move ClusterStorageCapacityExceededException to Public from LimitedPrivate > -- > > Key: HADOOP-17900 > URL: https://issues.apache.org/jira/browse/HADOOP-17900 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > As of now the exception is marked limited private: > {code:java} > @InterfaceAudience.LimitedPrivate({ "HDFS", "MapReduce", "Tez" }) > {code} > This doesn't allow other projects to use it. Rather than adding projects > individually, make it Public. > This exception can act as a fail-fast marker for different > operations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on pull request #3404: HADOOP-17900. Move ClusterStorageCapacityExceededException to Public from LimitedPrivate.
ayushtkn commented on pull request #3404: URL: https://github.com/apache/hadoop/pull/3404#issuecomment-918407279 Thanx @ferhui & @jojochuang for the review!!! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17900) Move ClusterStorageCapacityExceededException to Public from LimitedPrivate
[ https://issues.apache.org/jira/browse/HADOOP-17900?focusedWorklogId=650147=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650147 ] ASF GitHub Bot logged work on HADOOP-17900: --- Author: ASF GitHub Bot Created on: 13/Sep/21 17:20 Start Date: 13/Sep/21 17:20 Worklog Time Spent: 10m Work Description: ayushtkn merged pull request #3404: URL: https://github.com/apache/hadoop/pull/3404 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650147) Time Spent: 1h 20m (was: 1h 10m) > Move ClusterStorageCapacityExceededException to Public from LimitedPrivate > -- > > Key: HADOOP-17900 > URL: https://issues.apache.org/jira/browse/HADOOP-17900 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > As of now the exception is marked limited private: > {code:java} > @InterfaceAudience.LimitedPrivate({ "HDFS", "MapReduce", "Tez" }) > {code} > This doesn't allow other projects to use it. Rather than adding projects > individually, make it Public. > This exception can act as a fail-fast marker for different > operations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn merged pull request #3404: HADOOP-17900. Move ClusterStorageCapacityExceededException to Public from LimitedPrivate.
ayushtkn merged pull request #3404: URL: https://github.com/apache/hadoop/pull/3404 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on a change in pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
ayushtkn commented on a change in pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#discussion_r707532786 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java ## @@ -230,6 +230,18 @@ public void testCheckSafeMode8() throws Exception { assertEquals(BMSafeModeStatus.OFF, getSafeModeStatus()); } + @Test(timeout = 2) + public void testCheckSafeMode9() throws Exception { +Configuration conf = new HdfsConfiguration(); +conf.setLong(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY, 3000); +GenericTestUtils.LogCapturer auditLog = Review comment: Can you change the variable name to just `log`? It isn't an audit log. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] brumi1024 commented on a change in pull request #3358: YARN-10930. Introduce universal capacity resource vector
brumi1024 commented on a change in pull request #3358: URL: https://github.com/apache/hadoop/pull/3358#discussion_r707501296 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityVector.java ## @@ -0,0 +1,141 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.apache.hadoop.yarn.api.records.ResourceInformation; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.Map; +import java.util.Set; + +/** + * Contains capacity values with calculation types associated for each + * resource. 
+ */ +public class QueueCapacityVector implements +Iterable { + private final ResourceVector resource; + private final Map resourceTypes + = new HashMap<>(); + private final Set + definedResourceTypes = new HashSet<>(); + + public QueueCapacityVector(ResourceVector resource) { +this.resource = resource; + } + + public static QueueCapacityVector empty() { Review comment: Empty is a bit misleading here (at least for me), it can mean that the values are emptied for the object in question. newInstance for example is a bit more straightforward: https://www.informit.com/articles/article.aspx?p=1216151 Also can you please add javadoc to these factory methods? ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestQueueCapacityConfigParser.java ## @@ -0,0 +1,163 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf; + +import org.apache.hadoop.util.Lists; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueResourceVectorEntry; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueVectorResourceType; +import org.apache.hadoop.yarn.util.resource.ResourceUtils; +import org.junit.Assert; +import org.junit.Test; + +import java.util.List; + +public class TestQueueCapacityConfigParser { + + private static final String QUEUE = "root.test"; + private static final String ABSOLUTE_RESOURCE = "[memory-mb=12Gi, vcores=6, yarn.io/gpu=10]"; Review comment: Can you please add a testcase where the gpu resource is added to the resource_types, but it isn't configured in absolute_resources? Just for future development, to ensure that this will be supported in the future. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceVector.java ## @@ -0,0 +1,65 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + *
[jira] [Work logged] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup
[ https://issues.apache.org/jira/browse/HADOOP-15129?focusedWorklogId=650111=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650111 ] ASF GitHub Bot logged work on HADOOP-15129: --- Author: ASF GitHub Bot Created on: 13/Sep/21 16:23 Start Date: 13/Sep/21 16:23 Worklog Time Spent: 10m Work Description: cnauroth commented on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918360599 > hello, do you plan to include this fix in upcoming 3.3.2 version? > thank you! Hello @vitalii-buchyn-exa . Yes, this will be included in the 3.3.2 release. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650111) Time Spent: 1h (was: 50m) > Datanode caches namenode DNS lookup failure and cannot startup > -- > > Key: HADOOP-15129 > URL: https://issues.apache.org/jira/browse/HADOOP-15129 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.8.2 > Environment: Google Compute Engine. > I'm using Java 8, Debian 8, Hadoop 2.8.2. >Reporter: Karthik Palaniappan >Assignee: Chris Nauroth >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch > > Time Spent: 1h > Remaining Estimate: 0h > > On startup, the Datanode creates an InetSocketAddress to register with each > namenode. Though there are retries on connection failure throughout the > stack, the same InetSocketAddress is reused. > InetSocketAddress is an interesting class, because it resolves DNS names to > IP addresses on construction, and it is never refreshed. 
Hadoop re-creates an > InetSocketAddress in some cases just in case the remote IP has changed for a > particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472. > Anyway, on startup, you can see the Datanode log: "Namenode...remains > unresolved" -- referring to the fact that DNS lookup failed. > {code:java} > 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Refresh request received for nameservices: null > 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode > for null remains unresolved for ID null. Check your hdfs-site.xml file to > ensure namenodes are configured properly. > 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Starting BPOfferServices for nameservices: > 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Block pool (Datanode Uuid unassigned) service to > cluster-32f5-m:8020 starting to offer service > {code} > The Datanode then proceeds to use this unresolved address, as it may work if > the DN is configured to use a proxy. 
Since I'm not using a proxy, it forever > prints out this message: > {code:java} > 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > {code} > Unfortunately, the log doesn't contain the exception that triggered it, but > the culprit is actually in IPC Client: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444. > This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 > to give a clear error message when somebody mispells an address. > However, the fix in HADOOP-7472 doesn't apply here, because that code happens > in Client#getConnection after the Connection is constructed. > My proposed fix (will attach a patch) is to move this exception out of the > constructor and into a place that will trigger HADOOP-7472's logic to > re-resolve addresses. If the DNS failure was temporary, this will allow the > connection to succeed. If not, the connection will fail after ipc client > retries (default 10
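The resolve-at-construction behavior at the heart of this issue can be demonstrated with the JDK alone (`.invalid` is a reserved top-level domain that never resolves, per RFC 2606):

```java
import java.net.InetSocketAddress;

public class ResolveOnceSketch {
    public static void main(String[] args) {
        // InetSocketAddress performs its DNS lookup in the constructor; a
        // failed lookup leaves the instance permanently unresolved.
        InetSocketAddress addr = new InetSocketAddress("no-such-host.invalid", 8020);
        System.out.println(addr.isUnresolved());

        // The HADOOP-7472 logic re-creates the address so that a later,
        // successful lookup can take effect; the proposed fix routes this
        // failure path through that same re-resolution.
        InetSocketAddress fresh = new InetSocketAddress("localhost", 8020);
        System.out.println(fresh.isUnresolved());
    }
}
```

The first line prints `true` (the lookup failed at construction and is cached in the object); the second prints `false`, because a freshly constructed address performs a fresh lookup.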
[GitHub] [hadoop] cnauroth commented on pull request #3348: HADOOP-15129. Datanode caches namenode DNS lookup failure and cannot …
cnauroth commented on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918360599 > hello, do you plan to include this fix in upcoming 3.3.2 version? > thank you! Hello @vitalii-buchyn-exa . Yes, this will be included in the 3.3.2 release. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] brumi1024 commented on a change in pull request #3419: YARN-10911. AbstractCSQueue: Create a separate class for usernames and weights that are travelling in a Map
brumi1024 commented on a change in pull request #3419: URL: https://github.com/apache/hadoop/pull/3419#discussion_r707490969 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UserWeights.java ## @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.regex.Matcher; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.USER_SETTINGS; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.USER_WEIGHT; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.USER_WEIGHT_PATTERN; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getQueuePrefix; + +public class UserWeights { + public static final float DEFAULT_WEIGHT = 1.0F; + /** + * Key: Username, + * Value: Weight as float + */ + protected Map<String, Float> data = new HashMap<>(); + + private UserWeights() {} + + public static UserWeights createEmpty() { +return new UserWeights(); + } + + public static UserWeights createByConfig( + CapacitySchedulerConfiguration conf, + ConfigurationProperties configurationProperties, + String queuePath) { +String queuePathPlusPrefix = getQueuePrefix(queuePath) + USER_SETTINGS; +Map<String, String> props = configurationProperties.getPropertiesWithPrefix(queuePathPlusPrefix); + +Map<String, String> keyValueMap = new HashMap<>(); +for (Map.Entry<String, String> item: props.entrySet()) { + Matcher m = USER_WEIGHT_PATTERN.matcher(item.getKey()); + if (m.find()) { +keyValueMap.put(item.getKey(), conf.substituteVars(item.getValue())); Review comment: Wouldn't it be possible to create the userWeights map here directly? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
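The reviewer's suggestion — building the final username-to-weight map directly while matching, instead of collecting an intermediate String-to-String map first — could look roughly like this. This is only a sketch: the pattern, the assumption that its first capture group is the username, and the class name are placeholders, not the real CapacityScheduler code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UserWeightsSketch {

    // Stand-in for CapacitySchedulerConfiguration.USER_WEIGHT_PATTERN; the
    // real pattern (and which group holds the username) is an assumption here.
    static final Pattern USER_WEIGHT_PATTERN = Pattern.compile("^([^.]+)\\.weight$");

    // Build the username -> weight map in one pass over the raw properties.
    static Map<String, Float> parseWeights(Map<String, String> props) {
        Map<String, Float> weights = new HashMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            Matcher m = USER_WEIGHT_PATTERN.matcher(e.getKey());
            if (m.find()) {
                weights.put(m.group(1), Float.parseFloat(e.getValue()));
            }
        }
        return weights;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("alice.weight", "2.0");
        props.put("bob.weight", "0.5");
        props.put("minimum-user-limit-percent", "25");  // no match, ignored
        Map<String, Float> w = parseWeights(props);
        System.out.println(w.size());        // 2
        System.out.println(w.get("alice"));  // 2.0
    }
}
```

One caveat with the direct approach: any per-queue substitution step (the `conf.substituteVars` call in the diff) would have to happen before `Float.parseFloat`, inside the same loop.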
[GitHub] [hadoop] brumi1024 commented on a change in pull request #3420: YARN-10913. AbstractCSQueue: Group preemption methods and fields into a separate class
brumi1024 commented on a change in pull request #3420: URL: https://github.com/apache/hadoop/pull/3420#discussion_r707478457 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueuePreemption.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +public class AbstractCSQueuePreemption { + private final boolean preemptionDisabled; + // Indicates if the in-queue preemption setting is ever disabled within the + // hierarchy of this queue. 
+ private final boolean intraQueuePreemptionDisabledInHierarchy; + + public AbstractCSQueuePreemption( + CSQueue queue, + CapacitySchedulerContext csContext, + CapacitySchedulerConfiguration configuration) { +this.preemptionDisabled = isQueueHierarchyPreemptionDisabled(queue, csContext, configuration); +this.intraQueuePreemptionDisabledInHierarchy = +isIntraQueueHierarchyPreemptionDisabled(queue, csContext, configuration); + } + + /** + * The specified queue is cross-queue preemptable if system-wide cross-queue + * preemption is turned on unless any queue in the qPath hierarchy + * has explicitly turned cross-queue preemption off. + * NOTE: Cross-queue preemptability is inherited from a queue's parent. + * + * @param q queue to check preemption state + * @param csContext + * @param configuration capacity scheduler config + * @return true if queue has cross-queue preemption disabled, false otherwise + */ + private boolean isQueueHierarchyPreemptionDisabled(CSQueue q, + CapacitySchedulerContext csContext, CapacitySchedulerConfiguration configuration) { +boolean systemWidePreemption = +csContext.getConfiguration() Review comment: Same here as below. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueuePreemption.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +public class AbstractCSQueuePreemption { + private final boolean preemptionDisabled; + // Indicates if the in-queue preemption setting is ever disabled within the + // hierarchy of this queue. + private final boolean intraQueuePreemptionDisabledInHierarchy; + + public AbstractCSQueuePreemption( + CSQueue queue, + CapacitySchedulerContext csContext, + CapacitySchedulerConfiguration configuration) { +this.preemptionDisabled = isQueueHierarchyPreemptionDisabled(queue, csContext, configuration); +this.intraQueuePreemptionDisabledInHierarchy = +isIntraQueueHierarchyPreemptionDisabled(queue, csContext, configuration); + } + + /** + * The specified queue is cross-queue preemptable if system-wide cross-queue + * preemption is turned on unless any queue in the qPath hierarchy + * has explicitly turned cross-queue preemption off. + * NOTE: Cross-queue preemptability is inherited from a queue's parent. + * + * @param q queue to check preemption state + * @param
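The inheritance rule the javadoc above describes — preemption is off everywhere when the system-wide switch is off, and otherwise a queue inherits its parent's setting unless it carries an explicit override — can be sketched with a minimal model. This is one reading of the javadoc, not the actual CapacityScheduler implementation, which reads these flags from configuration rather than from fields.

```java
public class PreemptionHierarchySketch {

    // Hypothetical minimal queue model: a null Boolean means "no explicit
    // setting at this level, inherit from the parent".
    static class Queue {
        final Queue parent;
        final Boolean disablePreemption;
        Queue(Queue parent, Boolean disablePreemption) {
            this.parent = parent;
            this.disablePreemption = disablePreemption;
        }
    }

    // Disabled everywhere if the system-wide switch is off; otherwise the
    // nearest explicit per-queue setting on the path to the root wins.
    static boolean isHierarchyPreemptionDisabled(Queue q, boolean systemWidePreemption) {
        if (!systemWidePreemption) {
            return true;
        }
        for (Queue cur = q; cur != null; cur = cur.parent) {
            if (cur.disablePreemption != null) {
                return cur.disablePreemption;
            }
        }
        return false;  // nothing set anywhere: preemption stays enabled
    }

    public static void main(String[] args) {
        Queue root = new Queue(null, null);
        Queue a = new Queue(root, true);   // explicitly disables preemption
        Queue leaf = new Queue(a, null);   // inherits the disable from 'a'
        System.out.println(isHierarchyPreemptionDisabled(leaf, true));   // true
        System.out.println(isHierarchyPreemptionDisabled(root, true));   // false
        System.out.println(isHierarchyPreemptionDisabled(root, false));  // true
    }
}
```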
[jira] [Work logged] (HADOOP-17895) RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-17895?focusedWorklogId=650102=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650102 ] ASF GitHub Bot logged work on HADOOP-17895: --- Author: ASF GitHub Bot Created on: 13/Sep/21 16:06 Start Date: 13/Sep/21 16:06 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918344994 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-3.3.1 Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 55s | | branch-3.3.1 passed | | +1 :green_heart: | compile | 18m 10s | | branch-3.3.1 passed | | +1 :green_heart: | checkstyle | 0m 48s | | branch-3.3.1 passed | | +1 :green_heart: | mvnsite | 1m 31s | | branch-3.3.1 passed | | +1 :green_heart: | javadoc | 1m 32s | | branch-3.3.1 passed | | +1 :green_heart: | spotbugs | 2m 23s | | branch-3.3.1 passed | | +1 :green_heart: | shadedclient | 20m 38s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 57s | | the patch passed | | +1 :green_heart: | compile | 17m 58s | | the patch passed | | +1 :green_heart: | javac | 17m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 47s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 32s | | the patch passed | | +1 :green_heart: | javadoc | 1m 30s | | the patch passed | | +1 :green_heart: | spotbugs | 2m 44s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 17m 7s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3391/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. | | | | 141m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.contract.rawlocal.TestRawlocalContractMkdir | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3391/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3391 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux de5d180a444f 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3.1 / e0161043c1cdb86ea1f654c3e56c593b85cf6593 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3391/2/testReport/ | | Max. process+thread count | 1236 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3391/2/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650102) Time Spent: 1h 40m (was: 1.5h) > RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3 >
[jira] [Work logged] (HADOOP-17871) S3A CSE: minor tuning
[ https://issues.apache.org/jira/browse/HADOOP-17871?focusedWorklogId=650091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650091 ] ASF GitHub Bot logged work on HADOOP-17871: --- Author: ASF GitHub Bot Created on: 13/Sep/21 15:48 Start Date: 13/Sep/21 15:48 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3412: URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918327945 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 19 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 1s | | trunk passed | | +1 :green_heart: | compile | 29m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 24m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 57s | | trunk passed | | +1 :green_heart: | javadoc | 1m 54s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 42s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 41s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 28m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 28m 24s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 14 new + 1917 unchanged - 0 fixed = 1931 total (was 1917) | | +1 :green_heart: | compile | 24m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 24m 21s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 14 new + 1790 unchanged - 0 fixed = 1804 total (was 1790) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 21s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 137 unchanged - 38 fixed = 150 total (was 175) | | +1 :green_heart: | mvnsite | 2m 53s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 45s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 57s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. | | | | 248m 5s | | | | Subsystem |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3412: HADOOP-17871. S3A CSE: minor tuning
hadoop-yetus commented on pull request #3412: URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918327945 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 19 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 1s | | trunk passed | | +1 :green_heart: | compile | 29m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 24m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 57s | | trunk passed | | +1 :green_heart: | javadoc | 1m 54s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 42s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 41s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 28m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 28m 24s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 14 new + 1917 unchanged - 0 fixed = 1931 total (was 1917) | | +1 :green_heart: | compile | 24m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 24m 21s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 14 new + 1790 unchanged - 0 fixed = 1804 total (was 1790) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 21s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 137 unchanged - 38 fixed = 150 total (was 175) | | +1 :green_heart: | mvnsite | 2m 53s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 45s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 57s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. | | | | 248m 5s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3412 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint | | uname | Linux 89f76a61c5a6
[GitHub] [hadoop] steveloughran commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
steveloughran commented on pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#issuecomment-918305235 rebase & setting up for GCS testing. the latest PR uses openFile(path).withFileStatus(st), so it can go directly from a list() call to opening the files; this saves one HEAD per manifest load. For a job with 1000 tasks, each generating a single file, that would reduce IO from 1 LIST, 1K HEAD, 1K GET, 1K Rename to the LIST, GET and rename, (+cleanup): shaving off a lot of load -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
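The request-count arithmetic in the comment above works out as follows. A sketch of the counting only — real object-store request costs vary by store and configuration:

```java
public class ManifestIoCount {
    public static void main(String[] args) {
        int tasks = 1000;  // one single-file manifest per task, as in the example

        // Before: opening each manifest costs a HEAD (getFileStatus) + a GET.
        long before = 1L /* LIST */ + tasks /* HEAD */ + tasks /* GET */ + tasks /* rename */;

        // After: openFile(path).withFileStatus(st) reuses the FileStatus
        // already returned by the LIST, so the per-manifest HEAD disappears.
        long after = 1L /* LIST */ + tasks /* GET */ + tasks /* rename */;

        System.out.println(before);          // 3001
        System.out.println(after);           // 2001
        System.out.println(before - after);  // 1000 requests saved
    }
}
```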
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus removed a comment on pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#issuecomment-916548752 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 10s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 25 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 0s | | trunk passed | | +1 :green_heart: | compile | 22m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 39s | | trunk passed | | +1 :green_heart: | javadoc | 2m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 31s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 17m 13s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 17m 34s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 14s | | the patch passed | | +1 :green_heart: | compile | 22m 8s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 22m 8s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/35/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1913 unchanged - 0 fixed = 1915 total (was 1913) | | +1 :green_heart: | compile | 19m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 19m 33s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/35/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1790 unchanged - 0 fixed = 1792 total (was 1790) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 53s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/35/artifact/out/results-checkstyle-root.txt) | root: The patch generated 30 new + 8 unchanged - 0 fixed = 38 total (was 8) | | +1 :green_heart: | mvnsite | 3m 34s | | the patch passed | | +1 :green_heart: | xml | 0m 10s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 2m 52s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 27s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 17m 33s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 27s | | hadoop-project in the patch passed. | | -1 :x: | unit | 16m 55s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/35/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 7m 34s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 2m 7s | | hadoop-azure in the
[jira] [Work logged] (HADOOP-17892) Add Hadoop code formatter in dev-support
[ https://issues.apache.org/jira/browse/HADOOP-17892?focusedWorklogId=650061=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650061 ] ASF GitHub Bot logged work on HADOOP-17892: --- Author: ASF GitHub Bot Created on: 13/Sep/21 14:52 Start Date: 13/Sep/21 14:52 Worklog Time Spent: 10m Work Description: virajjasani commented on a change in pull request #3387: URL: https://github.com/apache/hadoop/pull/3387#discussion_r707413736 ## File path: dev-support/code-formatter/hadoop_idea_formatter.xml ## @@ -0,0 +1,76 @@ + + + + + + + + + + + + + + + + + + + + + Review comment: > I guess this is the correct order [#2073 (review)](https://github.com/apache/hadoop/pull/2073#pullrequestreview-434899055) > > in addition just thirdparty imports goes to the other block only. > > Do wait for Steve to confirm once before updating I see, Thanks. Unless we follow module specific formatting (few rules are applicable to S3A, ABFS, WASB and the like only, hence wondering), I think this should be good enough to keep in formatter as generic rule. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650061) Time Spent: 5h (was: 4h 50m) > Add Hadoop code formatter in dev-support > > > Key: HADOOP-17892 > URL: https://issues.apache.org/jira/browse/HADOOP-17892 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 5h > Remaining Estimate: 0h > > We should add Hadoop code formatter xml to dev-support specifically for new > developers to refer to. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a change in pull request #3387: HADOOP-17892. Add Hadoop code formatter in dev-support
virajjasani commented on a change in pull request #3387: URL: https://github.com/apache/hadoop/pull/3387#discussion_r707413736 ## File path: dev-support/code-formatter/hadoop_idea_formatter.xml ## @@ -0,0 +1,76 @@ + + + + + + + + + + + + + + + + + + + + + Review comment: > I guess this is the correct order [#2073 (review)](https://github.com/apache/hadoop/pull/2073#pullrequestreview-434899055) > > in addition just thirdparty imports goes to the other block only. > > Do wait for Steve to confirm once before updating I see, Thanks. Unless we follow module specific formatting (few rules are applicable to S3A, ABFS, WASB and the like only, hence wondering), I think this should be good enough to keep in formatter as generic rule. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup
[ https://issues.apache.org/jira/browse/HADOOP-15129?focusedWorklogId=650053=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650053 ] ASF GitHub Bot logged work on HADOOP-15129: --- Author: ASF GitHub Bot Created on: 13/Sep/21 14:36 Start Date: 13/Sep/21 14:36 Worklog Time Spent: 10m Work Description: vitalii-buchyn-exa edited a comment on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918257768 hello, do you plan to include this fix in upcoming 3.3.2 version? thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650053) Time Spent: 50m (was: 40m) > Datanode caches namenode DNS lookup failure and cannot startup > -- > > Key: HADOOP-15129 > URL: https://issues.apache.org/jira/browse/HADOOP-15129 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.8.2 > Environment: Google Compute Engine. > I'm using Java 8, Debian 8, Hadoop 2.8.2. >Reporter: Karthik Palaniappan >Assignee: Chris Nauroth >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch > > Time Spent: 50m > Remaining Estimate: 0h > > On startup, the Datanode creates an InetSocketAddress to register with each > namenode. Though there are retries on connection failure throughout the > stack, the same InetSocketAddress is reused. > InetSocketAddress is an interesting class, because it resolves DNS names to > IP addresses on construction, and it is never refreshed. 
Hadoop re-creates an > InetSocketAddress in some cases just in case the remote IP has changed for a > particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472. > Anyway, on startup, you can see the Datanode log: "Namenode...remains > unresolved" -- referring to the fact that DNS lookup failed. > {code:java} > 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Refresh request received for nameservices: null > 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode > for null remains unresolved for ID null. Check your hdfs-site.xml file to > ensure namenodes are configured properly. > 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Starting BPOfferServices for nameservices: > 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Block pool (Datanode Uuid unassigned) service to > cluster-32f5-m:8020 starting to offer service > {code} > The Datanode then proceeds to use this unresolved address, as it may work if > the DN is configured to use a proxy.
Since I'm not using a proxy, it forever > prints out this message: > {code:java} > 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > {code} > Unfortunately, the log doesn't contain the exception that triggered it, but > the culprit is actually in IPC Client: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444. > This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 > to give a clear error message when somebody misspells an address. > However, the fix in HADOOP-7472 doesn't apply here, because that code happens > in Client#getConnection after the Connection is constructed. > My proposed fix (will attach a patch) is to move this exception out of the > constructor and into a place that will trigger HADOOP-7472's logic to > re-resolve addresses. If the DNS failure was temporary, this will allow the > connection to succeed. If not, the connection will fail after ipc client > retries (default 10 seconds worth of retries). > I want to fix this in ipc client rather
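[Editor's note] The caching behaviour the issue describes is easy to demonstrate outside Hadoop. A minimal sketch follows; the hostname is illustrative (it borrows the `cluster-32f5-m` name from the logs, with the RFC 2606 reserved `.invalid` suffix appended so the lookup is guaranteed to fail), and it is not Hadoop's actual code path:

```java
import java.net.InetSocketAddress;

public class UnresolvedAddressDemo {

    // InetSocketAddress performs its DNS lookup in the constructor; the
    // outcome (resolved IP, or failure) is frozen into the object and is
    // never refreshed afterwards.
    static InetSocketAddress namenodeAddress(String host, int port) {
        return new InetSocketAddress(host, port);
    }

    public static void main(String[] args) {
        // ".invalid" is reserved, so this lookup always fails.
        InetSocketAddress addr = namenodeAddress("cluster-32f5-m.invalid", 8020);

        // The failed lookup is baked in: retrying a connection with this
        // same object can never succeed, even after DNS comes back. This is
        // why HADOOP-7472's remedy is to construct a *fresh*
        // InetSocketAddress, which repeats the lookup.
        System.out.println("unresolved = " + addr.isUnresolved());

        InetSocketAddress retry = namenodeAddress("cluster-32f5-m.invalid", 8020);
        System.out.println("retry also unresolved = " + retry.isUnresolved());
    }
}
```

This is the behaviour the proposed patch works around: only a newly constructed address object gets a new lookup, so the exception must be raised where re-construction (and thus re-resolution) can happen.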
[jira] [Work logged] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup
[ https://issues.apache.org/jira/browse/HADOOP-15129?focusedWorklogId=650052=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650052 ] ASF GitHub Bot logged work on HADOOP-15129: --- Author: ASF GitHub Bot Created on: 13/Sep/21 14:35 Start Date: 13/Sep/21 14:35 Worklog Time Spent: 10m Work Description: vitalii-buchyn-exa commented on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918257768 hello, do you plan to include this fix on upcoming 3.2.2 version? thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650052) Time Spent: 40m (was: 0.5h) > Datanode caches namenode DNS lookup failure and cannot startup > -- > > Key: HADOOP-15129 > URL: https://issues.apache.org/jira/browse/HADOOP-15129 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.8.2 > Environment: Google Compute Engine. > I'm using Java 8, Debian 8, Hadoop 2.8.2. >Reporter: Karthik Palaniappan >Assignee: Chris Nauroth >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch > > Time Spent: 40m > Remaining Estimate: 0h > > On startup, the Datanode creates an InetSocketAddress to register with each > namenode. Though there are retries on connection failure throughout the > stack, the same InetSocketAddress is reused. > InetSocketAddress is an interesting class, because it resolves DNS names to > IP addresses on construction, and it is never refreshed. 
Hadoop re-creates an > InetSocketAddress in some cases just in case the remote IP has changed for a > particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472. > Anyway, on startup, you can see the Datanode log: "Namenode...remains > unresolved" -- referring to the fact that DNS lookup failed. > {code:java} > 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Refresh request received for nameservices: null > 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode > for null remains unresolved for ID null. Check your hdfs-site.xml file to > ensure namenodes are configured properly. > 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Starting BPOfferServices for nameservices: > 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Block pool (Datanode Uuid unassigned) service to > cluster-32f5-m:8020 starting to offer service > {code} > The Datanode then proceeds to use this unresolved address, as it may work if > the DN is configured to use a proxy.
Since I'm not using a proxy, it forever > prints out this message: > {code:java} > 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > {code} > Unfortunately, the log doesn't contain the exception that triggered it, but > the culprit is actually in IPC Client: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444. > This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 > to give a clear error message when somebody misspells an address. > However, the fix in HADOOP-7472 doesn't apply here, because that code happens > in Client#getConnection after the Connection is constructed. > My proposed fix (will attach a patch) is to move this exception out of the > constructor and into a place that will trigger HADOOP-7472's logic to > re-resolve addresses. If the DNS failure was temporary, this will allow the > connection to succeed. If not, the connection will fail after ipc client > retries (default 10 seconds worth of retries). > I want to fix this in ipc client rather than just
[GitHub] [hadoop] vitalii-buchyn-exa edited a comment on pull request #3348: HADOOP-15129. Datanode caches namenode DNS lookup failure and cannot …
vitalii-buchyn-exa edited a comment on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918257768 hello, do you plan to include this fix in upcoming 3.3.2 version? thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] vitalii-buchyn-exa commented on pull request #3348: HADOOP-15129. Datanode caches namenode DNS lookup failure and cannot …
vitalii-buchyn-exa commented on pull request #3348: URL: https://github.com/apache/hadoop/pull/3348#issuecomment-918257768 hello, do you plan to include this fix on upcoming 3.2.2 version? thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17892) Add Hadoop code formatter in dev-support
[ https://issues.apache.org/jira/browse/HADOOP-17892?focusedWorklogId=650039=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650039 ] ASF GitHub Bot logged work on HADOOP-17892: --- Author: ASF GitHub Bot Created on: 13/Sep/21 14:05 Start Date: 13/Sep/21 14:05 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #3387: URL: https://github.com/apache/hadoop/pull/3387#discussion_r707368432 ## File path: dev-support/code-formatter/hadoop_idea_formatter.xml ## @@ -0,0 +1,76 @@ + + + + + + + + + + + + + + + + + + + + + Review comment: I guess this is the correct order https://github.com/apache/hadoop/pull/2073#pullrequestreview-434899055 in addition just thirdparty imports goes to the other block only. Do wait for Steve to confirm once before updating -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650039) Time Spent: 4h 50m (was: 4h 40m) > Add Hadoop code formatter in dev-support > > > Key: HADOOP-17892 > URL: https://issues.apache.org/jira/browse/HADOOP-17892 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 4h 50m > Remaining Estimate: 0h > > We should add Hadoop code formatter xml to dev-support specifically for new > developers to refer to. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on a change in pull request #3387: HADOOP-17892. Add Hadoop code formatter in dev-support
ayushtkn commented on a change in pull request #3387: URL: https://github.com/apache/hadoop/pull/3387#discussion_r707368432 ## File path: dev-support/code-formatter/hadoop_idea_formatter.xml ## @@ -0,0 +1,76 @@ + + + + + + + + + + + + + + + + + + + + + Review comment: I guess this is the correct order https://github.com/apache/hadoop/pull/2073#pullrequestreview-434899055 in addition just thirdparty imports goes to the other block only. Do wait for Steve to confirm once before updating -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650028=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650028 ] ASF GitHub Bot logged work on HADOOP-17890: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:52 Start Date: 13/Sep/21 13:52 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918213470

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 34s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 45s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 21s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 34s | | the patch passed |
| -1 :x: | javadoc | 0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| -1 :x: | javadoc | 0m 23s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| +1 :green_heart: | spotbugs | 1m 10s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 10s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | 76m 19s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ccd54cc5e5bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64
[GitHub] [hadoop] hadoop-yetus commented on pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918213470

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 34s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 45s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 21s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 34s | | the patch passed |
| -1 :x: | javadoc | 0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| -1 :x: | javadoc | 0m 23s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| +1 :green_heart: | spotbugs | 1m 10s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 10s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | 76m 19s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ccd54cc5e5bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 68f88a3bf9ed7abce69d0c1fb029a571740c2a64 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
[jira] [Work logged] (HADOOP-17895) RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-17895?focusedWorklogId=650026=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650026 ] ASF GitHub Bot logged work on HADOOP-17895: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:48 Start Date: 13/Sep/21 13:48 Worklog Time Spent: 10m Work Description: majdyz commented on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918209856 @cnauroth @ayushtkn Thank you for trying to reproduce the issue on your end, I have now updated the current test to also fail in the provided docker image started with `./start-build-env.sh`. I was using my local MacOS machine and didn't provide the randomized folder which makes the test runs smoothly in the second run. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650026) Time Spent: 1.5h (was: 1h 20m) > RawLocalFileSystem mkdirs with unicode filename doesn't work in Hadoop 3 > > > Key: HADOOP-17895 > URL: https://issues.apache.org/jira/browse/HADOOP-17895 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Affects Versions: 3.3.1 >Reporter: Zamil Majdy >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > *Bug description:* > `fs.mkdirs` command for `RawLocalFileSystem` doesn't work in Hadoop 3 with > NativeIO enabled. > The failure was happening when doing the native `chmod` command to the file > (the `mkdir` command itself is working). 
> Stacktrace: > {{ENOENT: No such file or directory ENOENT: No such file or directory at > org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method) at > org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:382) at > org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:974) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:660) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:700) > at > org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)}} > > *To reproduce:* > * Add `fs.mkdirs` in RawLocalFileSystem with NativeIO enabled. > * Sample: [https://github.com/apache/hadoop/pull/3391] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
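[Editor's note] The reproduction path can be sketched with plain JDK calls, without Hadoop or NativeIO. The directory name below is illustrative, and the sketch assumes a UTF-8 locale; the point is that the plain `mkdirs` step succeeds on its own, which is what narrows the reported Hadoop 3 failure down to the native `chmod` applied afterwards:

```java
import java.io.File;
import java.nio.file.Files;

public class UnicodeMkdirsDemo {

    // Create a directory whose name contains non-ASCII characters -- the
    // shape of path that the report above says trips NativeIO chmod inside
    // RawLocalFileSystem.mkdirs on Hadoop 3.
    static File makeUnicodeDir() throws Exception {
        File base = Files.createTempDirectory("hadoop17895").toFile();
        File dir = new File(base, "ünïcode-dir");
        if (!dir.mkdirs()) {
            throw new IllegalStateException("mkdirs failed for " + dir);
        }
        // At this point the directory exists; in the reported bug it is the
        // subsequent NativeIO.POSIX.chmod on this path that throws ENOENT.
        return dir;
    }

    public static void main(String[] args) throws Exception {
        File dir = makeUnicodeDir();
        System.out.println("created: " + dir.getName() + ", exists = " + dir.exists());
    }
}
```

Running this in the project's `./start-build-env.sh` Docker image (as the linked PR's test does) is what exposes the encoding-sensitive behaviour that a developer's local machine may mask.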
[GitHub] [hadoop] majdyz commented on pull request #3391: HADOOP-17895. Add the test to reproduce the failure of `RawLocalFileSystem.mkdir` with unicode filename
majdyz commented on pull request #3391: URL: https://github.com/apache/hadoop/pull/3391#issuecomment-918209856 @cnauroth @ayushtkn Thank you for trying to reproduce the issue on your end, I have now updated the current test to also fail in the provided docker image started with `./start-build-env.sh`. I was using my local MacOS machine and didn't provide the randomized folder which makes the test runs smoothly in the second run. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu edited a comment on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu edited a comment on pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#issuecomment-918193786 Thanks @ayushtkn for the comment. I checked again and found that there are some exceptions in jenkins, for example: hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes I looked into the code and found that some exceptions occurred mainly when initializing MiniQJMHACluster->initializeSharedEdits. I directly used the code of the trunk branch to test TestBalancerWithHANameNodes, and found that the same exception occurred. So it doesn't seem to have much to do with the code I submitted. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650021=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650021 ] ASF GitHub Bot logged work on HADOOP-17890: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:36 Start Date: 13/Sep/21 13:36 Worklog Time Spent: 10m Work Description: snvijaya commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918198704 > LGTM; some minor changes. Main one is using/adding a statistic to StreamStatisticNames Hi @steveloughran , Thanks for taking the time to review this PR. Post analyzing the metric gathering spot and the metric grouping in StreamStatistics and StoreStatistics, I feel the new statistics is probably right to be defined within AbfsStatistics. Have added my explanation for this above. Kindly request your inputs on this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 650021) Time Spent: 1h 10m (was: 1h) > ABFS: Refactor HTTP request handling code > - > > Key: HADOOP-17890 > URL: https://issues.apache.org/jira/browse/HADOOP-17890 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > Aims at Http request handling code refactoring. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] snvijaya commented on pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
snvijaya commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918198704 > LGTM; some minor changes. Main one is using/adding a statistic to StreamStatisticNames Hi @steveloughran , Thanks for taking the time to review this PR. Post analyzing the metric gathering spot and the metric grouping in StreamStatistics and StoreStatistics, I feel the new statistics is probably right to be defined within AbfsStatistics. Have added my explanation for this above. Kindly request your inputs on this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu commented on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-918193786

Thanks @aux for the comment. I checked again and found some test failures in Jenkins, for example: hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes. I looked into the code and found that the exceptions occurred mainly when initializing MiniQJMHACluster->initializeSharedEdits. I ran TestBalancerWithHANameNodes directly against the trunk branch and hit the same exception, so it doesn't seem to have much to do with the code I submitted.
[jira] [Work logged] (HADOOP-17871) S3A CSE: minor tuning
[ https://issues.apache.org/jira/browse/HADOOP-17871?focusedWorklogId=650016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650016 ]

ASF GitHub Bot logged work on HADOOP-17871:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:26
Start Date: 13/Sep/21 13:26
Worklog Time Spent: 10m

Work Description: mehakmeet commented on pull request #3412:
URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918189333

In the last Yetus run, checkstyle said the correct indentation is level 6, not 8; in this one, it says it should be 8, 10, or 12 and not 6.

Issue Time Tracking
---
Worklog Id: (was: 650016)
Time Spent: 2h 20m (was: 2h 10m)

> S3A CSE: minor tuning
> ---------------------
>
> Key: HADOOP-17871
> URL: https://issues.apache.org/jira/browse/HADOOP-17871
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Mehakmeet Singh
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
> Some minor tuning to the CSE encryption support before backporting to 3.3.x and so shipping this year:
> * LogExactlyOnce a "please ignore the warning" message to a new log ("org.apache.hadoop.fs.s3a.encryption") which can be set to ERROR if you get bored of the message.
> * Extend testing_s3a.md and the SDK upgrade runbook: always test CSE.
> * Change the property name of the encryption key (maybe: fs.s3a.encryption) and add a mapping in S3AFileSystem.addDeprecatedKeys ... docs will need updating too.
[GitHub] [hadoop] mehakmeet commented on pull request #3412: HADOOP-17871. S3A CSE: minor tuning
mehakmeet commented on pull request #3412:
URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918189333

In the last Yetus run, checkstyle says the correct indentation is level 6 not 8 and in this one, it says it should be 8, 10, or 12 and not 6.
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650010 ]

ASF GitHub Bot logged work on HADOOP-17890:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:20
Start Date: 13/Sep/21 13:20
Worklog Time Spent: 10m

Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707326581

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
## @@ -74,6 +76,7 @@
   // metrics
   private int bytesSent;
   private long bytesReceived;
+  private long bytesDiscarded;

Review comment: Have added the javadocs. Will keep a PR checklist point on this. Thanks.

Issue Time Tracking
---
Worklog Id: (was: 650010)
Time Spent: 1h (was: 50m)

> ABFS: Refactor HTTP request handling code
> -----------------------------------------
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.
[GitHub] [hadoop] snvijaya commented on a change in pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707326581

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
## @@ -74,6 +76,7 @@
   // metrics
   private int bytesSent;
   private long bytesReceived;
+  private long bytesDiscarded;

Review comment: Have added the javadocs. Will keep a PR checklist point on this. Thanks.
[jira] [Work logged] (HADOOP-17871) S3A CSE: minor tuning
[ https://issues.apache.org/jira/browse/HADOOP-17871?focusedWorklogId=650009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650009 ]

ASF GitHub Bot logged work on HADOOP-17871:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:17
Start Date: 13/Sep/21 13:17
Worklog Time Spent: 10m

Work Description: mehakmeet commented on pull request #3412:
URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918181098

@steveloughran can we ignore the javac errors about old keys being used in removeBucketOverrides, since they are deprecated now? Some of the javac errors are not from this patch either. The checkstyle errors are in that disabled test; I guess the indentation is still wrong somehow.

Issue Time Tracking
---
Worklog Id: (was: 650009)
Time Spent: 2h 10m (was: 2h)

> S3A CSE: minor tuning
> ---------------------
>
> Key: HADOOP-17871
> URL: https://issues.apache.org/jira/browse/HADOOP-17871
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Mehakmeet Singh
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> Some minor tuning to the CSE encryption support before backporting to 3.3.x and so shipping this year:
> * LogExactlyOnce a "please ignore the warning" message to a new log ("org.apache.hadoop.fs.s3a.encryption") which can be set to ERROR if you get bored of the message.
> * Extend testing_s3a.md and the SDK upgrade runbook: always test CSE.
> * Change the property name of the encryption key (maybe: fs.s3a.encryption) and add a mapping in S3AFileSystem.addDeprecatedKeys ... docs will need updating too.
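The last bullet of the issue above — renaming a configuration key while keeping the old name working — is handled in Hadoop via deprecation mappings (S3AFileSystem.addDeprecatedKeys builds on Configuration.addDeprecations). A minimal, self-contained sketch of the idea, not the actual Hadoop implementation and with illustrative key names:

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeyMap {
    // old key -> new key, mirroring the idea behind Hadoop's DeprecationDelta entries
    private final Map<String, String> deprecations = new HashMap<>();
    private final Map<String, String> props = new HashMap<>();

    public void addDeprecation(String oldKey, String newKey) {
        deprecations.put(oldKey, newKey);
    }

    public void set(String key, String value) {
        // writes through a deprecated name land on the new name
        props.put(deprecations.getOrDefault(key, key), value);
    }

    public String get(String key) {
        // reads through either name resolve to the same stored value
        return props.get(deprecations.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        DeprecatedKeyMap conf = new DeprecatedKeyMap();
        // hypothetical old/new names for illustration only
        conf.addDeprecation("fs.s3a.server-side-encryption.key", "fs.s3a.encryption.key");
        conf.set("fs.s3a.server-side-encryption.key", "my-kms-key");
        System.out.println(conf.get("fs.s3a.encryption.key"));  // prints "my-kms-key"
    }
}
```

The point of the indirection is that existing site configs keep working after the rename, while all internal code reads only the new key.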
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650008&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650008 ]

ASF GitHub Bot logged work on HADOOP-17890:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:17
Start Date: 13/Sep/21 13:17
Worklog Time Spent: 10m

Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707324115

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java (on the null-buffer check in the new readDataFromStream method)

Review comment: Ideally never. Other than List and Read, the server should not be sending any content that the client isn't ready for. In the case of List, a buffer is not provided by the caller as the size is not known, and the response is parsed and returned before reaching this method. The buffer passed in here cannot be null in the read flow either, as the null check happens before the HttpRequest can be raised. However, this is an existing protective check in the code, hence retaining it.

Issue Time Tracking
---
Worklog Id: (was: 650008)
Time Spent: 50m (was: 40m)

> ABFS: Refactor HTTP request handling code
> -----------------------------------------
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.
[GitHub] [hadoop] mehakmeet commented on pull request #3412: HADOOP-17871. S3A CSE: minor tuning
mehakmeet commented on pull request #3412:
URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918181098

@steveloughran can we ignore the javac errors about old keys being used in removeBucketOverrides, since they are deprecated now? Some of the javac errors are not from this patch either. The checkstyle errors are in that disabled test; I guess the indentation is still wrong somehow.
[GitHub] [hadoop] snvijaya commented on a change in pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707324115

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
## @@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int len
       startTime = System.nanoTime();
     }
-    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-      processStorageErrorResponse();
+    long totalBytesRead = 0;
+
+    try {
+      totalBytesRead = parseResponse(buffer, offset, length);
+    } finally {
       if (this.isTraceEnabled) {
         this.recvResponseTimeMs += elapsedTimeMs(startTime);
       }
-      this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-    } else {
-      // consume the input stream to release resources
-      int totalBytesRead = 0;
+      this.bytesReceived = totalBytesRead;
+    }
+  }
+
+  public long parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      return this.connection.getHeaderFieldLong(
+          HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+    } else {
       try (InputStream stream = this.connection.getInputStream()) {
         if (isNullInputStream(stream)) {
-          return;
+          return 0;
         }
-        boolean endOfStream = false;
-        // this is a list operation and need to retrieve the data
-        // need a better solution
-        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
           parseListFilesResponse(stream);
         } else {
-          if (buffer != null) {
-            while (totalBytesRead < length) {
-              int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-              if (bytesRead == -1) {
-                endOfStream = true;
-                break;
-              }
-              totalBytesRead += bytesRead;
-            }
-          }
-          if (!endOfStream && stream.read() != -1) {
-            // read and discard
-            int bytesRead = 0;
-            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
-            while ((bytesRead = stream.read(b)) >= 0) {
-              totalBytesRead += bytesRead;
-            }
-          }
+          return readDataFromStream(stream, buffer, offset, length);
         }
-      } catch (IOException ex) {
-        LOG.warn("IO/Network error: {} {}: {}",
-            method, getMaskedUrl(), ex.getMessage());
-        LOG.debug("IO Error: ", ex);
-        throw ex;
-      } finally {
-        if (this.isTraceEnabled) {
-          this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      }
+    }
+
+    return 0;
+  }
+
+  public long readDataFromStream(final InputStream stream,
+      final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    // consume the input stream to release resources
+    int totalBytesRead = 0;
+    boolean endOfStream = false;
+
+    if (buffer != null) {

Review comment: Ideally never. Other than List and Read, the server should not be sending any content that the client isn't ready for. In the case of List, a buffer is not provided by the caller as the size is not known, and the response is parsed and returned before reaching this method. The buffer passed in here cannot be null in the read flow either, as the null check happens before the HttpRequest can be raised. However, this is an existing protective check in the code, hence retaining it.
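The review thread above is about consuming (draining) any bytes left on the HTTP connection's input stream and counting them as discarded. A minimal, self-contained sketch of that read-then-drain pattern — the method and constant names echo the PR, but this is an illustration, not the actual ABFS implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainExample {
    // small scratch buffer for draining, mirroring the PR's CLEAN_UP_BUFFER_SIZE idea
    private static final int CLEAN_UP_BUFFER_SIZE = 64;

    /**
     * Fills {@code buffer} with up to {@code length} bytes from {@code stream},
     * then drains and discards anything the server sent beyond that.
     * Returns {bytesRead, bytesDiscarded}.
     */
    static long[] readAndDrain(InputStream stream, byte[] buffer, int offset, int length)
            throws IOException {
        int filled = 0;
        boolean endOfStream = false;
        while (filled < length) {
            int n = stream.read(buffer, offset + filled, length - filled);
            if (n == -1) {
                endOfStream = true;
                break;
            }
            filled += n;
        }
        long discarded = 0;
        if (!endOfStream) {
            // drain whatever is left so the underlying connection can be reused
            byte[] scratch = new byte[CLEAN_UP_BUFFER_SIZE];
            int n;
            while ((n = stream.read(scratch)) >= 0) {
                discarded += n;  // bytes the caller never asked for
            }
        }
        return new long[] { filled, discarded };
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[100];  // pretend the server sent 100 bytes
        byte[] buffer = new byte[64];    // the caller only asked for 64
        long[] r = readAndDrain(new ByteArrayInputStream(payload), buffer, 0, buffer.length);
        System.out.println(r[0] + " bytes read, " + r[1] + " bytes discarded");
    }
}
```

Draining matters because leaving unread bytes on an HttpURLConnection response stream can prevent the underlying socket from being returned to the keep-alive pool; the discarded count is exactly what the new bytes_discarded_at_socket_read statistic in this PR is meant to surface.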
[GitHub] [hadoop] tomicooler commented on pull request #3342: YARN-10897. Introduce QueuePath class
tomicooler commented on pull request #3342:
URL: https://github.com/apache/hadoop/pull/3342#issuecomment-918179680

Thanks for the review fixes. +1 from my side. The hasEmptyPart could be simplified with the newly added iterator, and there are some checkstyle warnings (lines longer than 100 characters).
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650005 ]

ASF GitHub Bot logged work on HADOOP-17890:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:11
Start Date: 13/Sep/21 13:11
Worklog Time Spent: 10m

Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319307

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java (on the new parseResponse method)
Review comment: Have added the javadocs.

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java (on the new readDataFromStream method)
Review comment: Have added the javadocs.

Issue Time Tracking
---
Worklog Id: (was: 650005)
Time Spent: 40m (was: 0.5h)

> ABFS: Refactor HTTP request handling code
> -----------------------------------------
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650004&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650004 ]

ASF GitHub Bot logged work on HADOOP-17890:
---
Author: ASF GitHub Bot
Created on: 13/Sep/21 13:11
Start Date: 13/Sep/21 13:11
Worklog Time Spent: 10m

Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319027

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
## @@ -75,6 +75,8 @@
     "Total bytes uploaded."),
   BYTES_RECEIVED("bytes_received",
     "Total bytes received."),
+  BYTES_DISCARDED_AT_SOCKET_READ("bytes_discarded_at_socket_read",

Review comment: The bytesDiscarded is incremented when the server happens to return any bytes that the client wasn't expecting to receive. As of today, there are only two APIs for which the server will return a response body: List and Read. In the case of List, the inputStream is provided to the ObjectMapper for JSON conversion. This leaves just the read API, where the data intended to be read should match the space in the buffer to store the data received. Ideally, there are no scenarios in driver-server communication where this is expected. I couldn't find any clue that led to the code that drains the socket either, but saw a few forums mention the side effects of the client disconnecting while the server might still be transmitting: a TCP reset gets triggered and signals a connection error, which in turn triggers some error handling and network-layer buffers being reset. In the case of the read flow, the AbfsHttpOperation layer has no access to the AbfsInputStream instance and hence can't access the stream statistics it holds. While logically read is the only API that can hit this case, this code is in the general HTTP response handling path, hence I kept the new statistic outside of StreamStatistics to track this.

I looked at StoreStatisticNames, and it didn't look right to add a new statistic in there, hence adding this along with the other network statistics such as BYTES_SENT and BYTES_RECEIVED defined in the AbfsStatistic enum. Please let me know if this looks ok.

Issue Time Tracking
---
Worklog Id: (was: 650004)
Time Spent: 0.5h (was: 20m)

> ABFS: Refactor HTTP request handling code
> -----------------------------------------
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.
[GitHub] [hadoop] snvijaya commented on a change in pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319307

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
## @@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int len
       startTime = System.nanoTime();
     }
-    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-      processStorageErrorResponse();
+    long totalBytesRead = 0;
+
+    try {
+      totalBytesRead = parseResponse(buffer, offset, length);
+    } finally {
       if (this.isTraceEnabled) {
         this.recvResponseTimeMs += elapsedTimeMs(startTime);
       }
-      this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-    } else {
-      // consume the input stream to release resources
-      int totalBytesRead = 0;
+      this.bytesReceived = totalBytesRead;
+    }
+  }
+
+  public long parseResponse(final byte[] buffer,

Review comment: Have added the javadocs.

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
## @@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int len
       startTime = System.nanoTime();
     }
-    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-      processStorageErrorResponse();
+    long totalBytesRead = 0;
+
+    try {
+      totalBytesRead = parseResponse(buffer, offset, length);
+    } finally {
       if (this.isTraceEnabled) {
         this.recvResponseTimeMs += elapsedTimeMs(startTime);
       }
-      this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-    } else {
-      // consume the input stream to release resources
-      int totalBytesRead = 0;
+      this.bytesReceived = totalBytesRead;
+    }
+  }
+
+  public long parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      return this.connection.getHeaderFieldLong(
+          HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+    } else {
      try (InputStream stream = this.connection.getInputStream()) {
        if (isNullInputStream(stream)) {
-          return;
+          return 0;
        }
-        boolean endOfStream = false;
-        // this is a list operation and need to retrieve the data
-        // need a better solution
-        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
          parseListFilesResponse(stream);
        } else {
-          if (buffer != null) {
-            while (totalBytesRead < length) {
-              int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-              if (bytesRead == -1) {
-                endOfStream = true;
-                break;
-              }
-              totalBytesRead += bytesRead;
-            }
-          }
-          if (!endOfStream && stream.read() != -1) {
-            // read and discard
-            int bytesRead = 0;
-            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
-            while ((bytesRead = stream.read(b)) >= 0) {
-              totalBytesRead += bytesRead;
-            }
-          }
+          return readDataFromStream(stream, buffer, offset, length);
        }
-      } catch (IOException ex) {
-        LOG.warn("IO/Network error: {} {}: {}",
-            method, getMaskedUrl(), ex.getMessage());
-        LOG.debug("IO Error: ", ex);
-        throw ex;
-      } finally {
-        if (this.isTraceEnabled) {
-          this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      }
+    }
+
+    return 0;
+  }
+
+  public long readDataFromStream(final InputStream stream,

Review comment: Have added the javadocs.
[GitHub] [hadoop] snvijaya commented on a change in pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
snvijaya commented on a change in pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319027 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java ## @@ -75,6 +75,8 @@ "Total bytes uploaded."), BYTES_RECEIVED("bytes_received", "Total bytes received."), + BYTES_DISCARDED_AT_SOCKET_READ("bytes_discarded_at_socket_read", Review comment: The bytesDiscarded is incremented when server happens to return any bytes that the client wasnt expecting to receive. As of today, there are only 2 APIs that the server will return response body, which is List and Read. In case of List, inputStream is provided to the ObjectMapper for json conversion. This leaves just the read API where data intended to be read should match with the space in buffer to store data received. Ideally there are no scenarios in driver-server communication that this is expected. I couldnt find any clue that lead to the code that drains the socket either, but saw few forums mention about the side effects of client disconnecting while server might still be transmitting. TCP Reset gets triggered and signals an error in connection which in turn triggers some error handling and network layer buffers being reset. In the case of read flow, AbfsHttpOperation layer has no access to AbfsInputStream instance and hence cant access the stream statistics it holds to. While logically read is the only possible API that can hit this case, this code is in a general Http response handling code, hence I retained the new statistic outside of StreamStatistics to track this. I looked at StoreStatisticNames, and it didnt look right to add a new statistic in there, hence adding this along with the other network statistics such as BYTES_SEND and BYTES_RECEIVED defined in AbfsStatistic enum. Please let me know if this looks ok. -- This is an automated message from the Apache Git Service. 
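The pattern discussed in the review above can be sketched in isolation: after the expected number of bytes has been read from the response stream, any surplus bytes are drained and tallied in a counter, in the spirit of a `bytes_discarded_at_socket_read` statistic. This is a minimal illustration using plain `java.io` streams; the class and method names are hypothetical and not the actual ABFS driver code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: read the expected bytes, then drain and count
// anything the server returned beyond what the client asked for.
public class DrainCounter {
  private final AtomicLong bytesDiscarded = new AtomicLong();

  // Fills buf with up to `expected` bytes, then drains the remainder
  // of the stream, recording the surplus in bytesDiscarded.
  public long readAndDrain(InputStream in, byte[] buf, int expected)
      throws IOException {
    int off = 0;
    while (off < expected) {
      int n = in.read(buf, off, expected - off);
      if (n < 0) {
        break; // server sent less than expected; nothing to discard
      }
      off += n;
    }
    byte[] scratch = new byte[4096];
    int n;
    while ((n = in.read(scratch)) > 0) {
      bytesDiscarded.addAndGet(n); // surplus bytes the client never wanted
    }
    return bytesDiscarded.get();
  }
}
```

For example, if the server returns 10 bytes but the client only expected 6, the counter ends up at 4 after the drain loop.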
[jira] [Work logged] (HADOOP-17871) S3A CSE: minor tuning
[ https://issues.apache.org/jira/browse/HADOOP-17871?focusedWorklogId=650001=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650001 ] ASF GitHub Bot logged work on HADOOP-17871: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:05 Start Date: 13/Sep/21 13:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3412: URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918169320 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 19 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 24s | | trunk passed | | +1 :green_heart: | compile | 21m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 44s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 36s | | trunk passed | | +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 30s | | the patch passed | | +1 :green_heart: | compile | 20m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 20m 36s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 14 new + 1917 unchanged - 0 fixed = 1931 total (was 1917) | | +1 :green_heart: | compile | 18m 29s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 18m 29s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 14 new + 1790 unchanged - 0 fixed = 1804 total (was 1790) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 32s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 137 unchanged - 38 fixed = 150 total (was 175) | | +1 :green_heart: | mvnsite | 2m 35s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 13s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 10s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 31s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. | | | | 198m 46s | | | | Subsystem |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3428: HDFS-16225. Fix typo for FederationTestUtils
hadoop-yetus commented on pull request #3428: URL: https://github.com/apache/hadoop/pull/3428#issuecomment-918170097 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 58s | | trunk passed | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 44s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 34m 7s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3428/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. | | | | 107m 55s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3428/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3428 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux a862a92b219a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 56d7aa6c25057c2f8e5f8feb2b6f2a145d221796 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3428/1/testReport/ | | Max. process+thread count | 2712 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3428/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To
[GitHub] [hadoop] hadoop-yetus commented on pull request #3412: HADOOP-17871. S3A CSE: minor tuning
hadoop-yetus commented on pull request #3412: URL: https://github.com/apache/hadoop/pull/3412#issuecomment-918169320 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 19 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 24s | | trunk passed | | +1 :green_heart: | compile | 21m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 44s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 36s | | trunk passed | | +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 30s | | the patch passed | | +1 :green_heart: | compile | 20m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 20m 36s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 14 new + 1917 unchanged - 0 fixed = 1931 total (was 1917) | | +1 :green_heart: | compile | 18m 29s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 18m 29s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 14 new + 1790 unchanged - 0 fixed = 1804 total (was 1790) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 32s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 137 unchanged - 38 fixed = 150 total (was 175) | | +1 :green_heart: | mvnsite | 2m 35s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 13s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 10s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 31s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. | | | | 198m 46s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3412/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3412 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint | | uname | Linux e331a69896b5
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3312: HADOOP-17851 Support user specified content encoding for S3A
hadoop-yetus removed a comment on pull request #3312: URL: https://github.com/apache/hadoop/pull/3312#issuecomment-902969672 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 3s | | trunk passed | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 28s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 30s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. 
| | -1 :x: | compile | 0m 38s | [/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-aws in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 38s | [/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-aws in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 31s | [/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-aws in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 31s | [/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-aws in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) | | -1 :x: | mvnsite | 0m 31s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 0m 31s | [/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shadedclient | 16m 23s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 0m 34s |
[jira] [Work logged] (HADOOP-17851) Support user specified content encoding for S3A
[ https://issues.apache.org/jira/browse/HADOOP-17851?focusedWorklogId=65=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-65 ] ASF GitHub Bot logged work on HADOOP-17851: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:04 Start Date: 13/Sep/21 13:04 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3312: URL: https://github.com/apache/hadoop/pull/3312#issuecomment-902969672 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 3s | | trunk passed | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 30s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | -1 :x: | compile | 0m 38s | [/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-aws in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 38s | [/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-aws in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 31s | [/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-aws in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 31s | [/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-aws in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) | | -1 :x: | mvnsite | 0m 31s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 0m 31s |
[jira] [Work logged] (HADOOP-17851) Support user specified content encoding for S3A
[ https://issues.apache.org/jira/browse/HADOOP-17851?focusedWorklogId=64=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-64 ] ASF GitHub Bot logged work on HADOOP-17851: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:02 Start Date: 13/Sep/21 13:02 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3312: URL: https://github.com/apache/hadoop/pull/3312#issuecomment-917225765 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 50s | | trunk passed | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 46s | | trunk passed | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 12s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) | | +1 :green_heart: | mvnsite | 0m 36s | | the patch passed | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 17s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 74m 35s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3312 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 512413a1d55c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3d45dc78b3ff309289603a7bde6aae91b73e450a | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/testReport/ | | Max. process+thread count | 676 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | |
[jira] [Work logged] (HADOOP-17892) Add Hadoop code formatter in dev-support
[ https://issues.apache.org/jira/browse/HADOOP-17892?focusedWorklogId=649998=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-649998 ] ASF GitHub Bot logged work on HADOOP-17892: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:02 Start Date: 13/Sep/21 13:02 Worklog Time Spent: 10m Work Description: virajjasani commented on a change in pull request #3387: URL: https://github.com/apache/hadoop/pull/3387#discussion_r707312230 ## File path: dev-support/code-formatter/hadoop_idea_formatter.xml ## @@ -0,0 +1,76 @@ + + + + + + + + + + + + + + + + + + + + + Review comment: I tried to confirm against some of the oldest classes, e.g. NameNode and a few others within the namenode package. But now I see that some other similarly old classes have a different order; for instance, DataNode itself has roughly the reverse order. @steveloughran do we have the correct order defined somewhere? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 649998) Time Spent: 4h 40m (was: 4.5h) > Add Hadoop code formatter in dev-support > > > Key: HADOOP-17892 > URL: https://issues.apache.org/jira/browse/HADOOP-17892 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > We should add the Hadoop code formatter XML to dev-support, specifically for new > developers to refer to. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17851) Support user specified content encoding for S3A
[ https://issues.apache.org/jira/browse/HADOOP-17851?focusedWorklogId=649997=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-649997 ] ASF GitHub Bot logged work on HADOOP-17851: --- Author: ASF GitHub Bot Created on: 13/Sep/21 13:02 Start Date: 13/Sep/21 13:02 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #3312: URL: https://github.com/apache/hadoop/pull/3312#discussion_r707311904 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java ## @@ -410,6 +410,10 @@ private Constants() { public static final String CANNED_ACL = "fs.s3a.acl.default"; public static final String DEFAULT_CANNED_ACL = ""; + // gzip, deflate, compress, br, etc. + public static final String CONTENT_ENCODING = "fs.s3a.content.encoding"; + public static final String DEFAULT_CONTENT_ENCODING = null; Review comment: +1 for making that default visible -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 649997) Time Spent: 3h 40m (was: 3.5h) > Support user specified content encoding for S3A > --- > > Key: HADOOP-17851 > URL: https://issues.apache.org/jira/browse/HADOOP-17851 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.1 >Reporter: Holden Karau >Assignee: Holden Karau >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > User-specified object content-encoding (part of the object metadata) is > important for allowing compressed files to be processed in the AWS ecosystem. > We should allow the user to specify the content encoding of the files being > written. 
> Metadata cannot be changed after a file is written without a rewrite. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
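The thread above is about letting S3A users label written objects with a content encoding (the `fs.s3a.content.encoding` option under review). The point of the label can be sketched without any AWS dependency: bytes compressed with gzip must carry `Content-Encoding: gzip` in the object metadata so a reader knows to decompress them. The class name, the metadata map, and the key string here are illustrative assumptions, not the S3A implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: a user-specified content encoding flows into the
// uploaded object's metadata, and the reader honours it on download.
public class ContentEncodingSketch {
  static final String CONTENT_ENCODING_KEY = "Content-Encoding"; // hypothetical metadata key

  static Map<String, Object> upload(byte[] raw, String encoding) throws IOException {
    Map<String, Object> object = new HashMap<>();
    byte[] body = raw;
    if ("gzip".equals(encoding)) { // encode only when the user asked for it
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
        gz.write(raw);
      }
      body = bos.toByteArray();
      object.put(CONTENT_ENCODING_KEY, encoding); // label the bytes with their encoding
    }
    object.put("body", body);
    return object;
  }

  static byte[] download(Map<String, Object> object) throws IOException {
    byte[] body = (byte[]) object.get("body");
    if ("gzip".equals(object.get(CONTENT_ENCODING_KEY))) {
      try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(body))) {
        return gz.readAllBytes(); // decompress per the stored encoding
      }
    }
    return body; // no encoding recorded: bytes pass through untouched
  }
}
```

A null or unset encoding leaves the bytes and metadata untouched, which matches the default-of-null discussion in the review.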
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3312: HADOOP-17851 Support user specified content encoding for S3A
hadoop-yetus removed a comment on pull request #3312:
URL: https://github.com/apache/hadoop/pull/3312#issuecomment-917225765

   :confetti_ball: **+1 overall**

   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:-------:|:-------:|
   | +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
   |||| _ Prechecks _ |
   | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
   | +0 :ok: | codespell | 0m 0s | | codespell was not available. |
   | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
   | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: | mvninstall | 31m 50s | | trunk passed |
   | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
   | +1 :green_heart: | mvnsite | 0m 46s | | trunk passed |
   | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | spotbugs | 1m 12s | | trunk passed |
   | +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed |
   | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javac | 0m 38s | | the patch passed |
   | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | javac | 0m 33s | | the patch passed |
   | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
   | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
   | +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
   | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | spotbugs | 1m 11s | | the patch passed |
   | +1 :green_heart: | shadedclient | 14m 27s | | patch has no errors when building and testing our client artifacts. |
   |||| _ Other Tests _ |
   | +1 :green_heart: | unit | 2m 17s | | hadoop-aws in the patch passed. |
   | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
   | | | 74m 35s | | |

   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 512413a1d55c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3d45dc78b3ff309289603a7bde6aae91b73e450a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/testReport/ |
   | Max. process+thread count | 676 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3312/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

   This message was automatically generated.