[GitHub] [hadoop] virajjasani commented on pull request #3659: HDFS-16323. DatanodeHttpServer doesn't require handler state map while retrieving filter handlers
virajjasani commented on pull request #3659: URL: https://github.com/apache/hadoop/pull/3659#issuecomment-969965471 @anuengineer @aajisaka @tasanuma Could you please take a look? Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on pull request #3666: Backport HDFS-16315 for branch-3.3
tasanuma commented on pull request #3666: URL: https://github.com/apache/hadoop/pull/3666#issuecomment-969957949 @tomscut Thanks for creating the PR. +1, pending Jenkins.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3665: pull-request-creator 1
hadoop-yetus commented on pull request #3665:
URL: https://github.com/apache/hadoop/pull/3665#issuecomment-969933268

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | shadedclient | 26m 24s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | shadedclient | 20m 18s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | asflicense | 0m 36s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3665/1/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. |
| | | 49m 52s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3665/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3665 |
| Optional Tests | dupname asflicense codespell shellcheck shelldocs |
| uname | Linux d064b129d20c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d93bf1f3cf3a588c96a5cb85b17d1c7b0158ed51 |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3665/1/console |
| versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] tomscut commented on pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tomscut commented on pull request #3643: URL: https://github.com/apache/hadoop/pull/3643#issuecomment-969915079 > @tomscut I'd like to cherry-pick it to lower branches, but there are small conflicts. Could you create another PR for branch-3.3? The variable of `name` in `TestFsDatasetImpl` doesn't exist in branch-3.3. Hi @tasanuma , I submitted a PR [#3666]( https://github.com/apache/hadoop/pull/3666) for branch-3.3. Please help review it after the build. Thank you very much.
[GitHub] [hadoop] tomscut opened a new pull request #3666: Backport HDFS-16315 for branch-3.3
tomscut opened a new pull request #3666: URL: https://github.com/apache/hadoop/pull/3666 Backport [HDFS-16315](https://issues.apache.org/jira/browse/HDFS-16315) for branch-3.3.
[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API
[ https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=681836=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681836 ]

ASF GitHub Bot logged work on HADOOP-11867:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 16/Nov/21 06:30
Start Date: 16/Nov/21 06:30
Worklog Time Spent: 10m

Work Description: mukund-thakur commented on a change in pull request #3499:
URL: https://github.com/apache/hadoop/pull/3499#discussion_r749944182

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java
## @@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.FileRangeImpl;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * A file range that represents a set of underlying file ranges.
+ * This is used when we combine the user's FileRange objects
+ * together into a single read for efficiency.
+ */
+public class CombinedFileRange extends FileRangeImpl {

Review comment: Yes, this makes sense, but there are many tests that match on the toString() value. I think it is low priority for now, but I will do it before the final merge.
Issue Time Tracking
-------------------

Worklog Id: (was: 681836)
Time Spent: 8h 20m (was: 8h 10m)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-11867
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11867
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs, fs/azure, fs/s3, hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Gopal Vijayaraghavan
>            Assignee: Owen O'Malley
>            Priority: Major
>              Labels: performance, pull-request-available
>          Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> The most significant way to read efficiently from a filesystem is to let the FileSystem implementation handle the seek behaviour underneath the API, so it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read locations as part of a single call, while letting the system schedule/plan the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows for potentially optimizing away the seek-gaps within the FSDataInputStream implementation.
> For seek+read systems with even more latency than locally-attached disks, something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would take care of the seeks internally while reading chunk.remaining() bytes into each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub this in as a sequence of seeks + read() into ByteBuffers, without forcing each FS implementation to override this in any way.
-- This message was sent by Atlassian Jira (v8.20.1#820001)
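The `readFully(long[] offsets, ByteBuffer[] chunks)` fallback described in the issue above can be sketched in plain Java against a local file. This is an illustrative stand-in, not the actual FSDataInputStream API from the patch; the class and method names here are hypothetical:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;

public class VectoredReadSketch {

  /**
   * Base-implementation fallback: service each (offset, buffer) pair as a
   * plain seek + readFully, filling buffer.remaining() bytes per chunk.
   * A smarter FileSystem could instead plan or merge these reads.
   */
  static void readFully(RandomAccessFile file, long[] offsets, ByteBuffer[] chunks)
      throws IOException {
    for (int i = 0; i < offsets.length; i++) {
      byte[] tmp = new byte[chunks[i].remaining()];
      file.seek(offsets[i]);
      file.readFully(tmp);
      chunks[i].put(tmp);
      chunks[i].flip(); // make the chunk readable by the caller
    }
  }

  public static void main(String[] args) throws IOException {
    // Write 256 known bytes, then read two scattered 4-byte ranges in one call.
    Path p = Files.createTempFile("vectored", ".bin");
    try {
      byte[] data = new byte[256];
      for (int i = 0; i < data.length; i++) {
        data[i] = (byte) i;
      }
      Files.write(p, data);

      try (RandomAccessFile raf = new RandomAccessFile(p.toFile(), "r")) {
        long[] offsets = {10, 200};
        ByteBuffer[] chunks = {ByteBuffer.allocate(4), ByteBuffer.allocate(4)};
        readFully(raf, offsets, chunks);
        if (chunks[0].get(0) != 10 || chunks[1].get(0) != (byte) 200) {
          throw new AssertionError("unexpected bytes read");
        }
        System.out.println("read " + offsets.length + " ranges");
      }
    } finally {
      Files.deleteIfExists(p);
    }
  }
}
```

The point of the API is that a remote filesystem can override this with one planned multi-range request, while this seek-loop remains a correct default.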
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #3499: HADOOP-11867. Add a high performance vectored read API to file system.
mukund-thakur commented on a change in pull request #3499:
URL: https://github.com/apache/hadoop/pull/3499#discussion_r749944182

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java
## @@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.FileRangeImpl;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * A file range that represents a set of underlying file ranges.
+ * This is used when we combine the user's FileRange objects
+ * together into a single read for efficiency.
+ */
+public class CombinedFileRange extends FileRangeImpl {

Review comment: Yes, this makes sense, but there are many tests that match on the toString() value. I think it is low priority for now, but I will do it before the final merge.
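The range-combining idea behind CombinedFileRange can be illustrated with a small standalone sketch (not the Hadoop class; the `Range` type and `merge` helper are invented for this example): sorted ranges whose gaps are small get coalesced into one larger read, trading a few wasted bytes for fewer I/O round trips.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeMerge {
  /** A half-open byte range [offset, offset + length). */
  static final class Range {
    final long offset;
    final long length;
    Range(long offset, long length) { this.offset = offset; this.length = length; }
  }

  /**
   * Coalesce sorted ranges whose inter-range gap is at most maxGap bytes,
   * so several small reads can be serviced by one larger combined read.
   */
  static List<Range> merge(List<Range> sorted, long maxGap) {
    List<Range> out = new ArrayList<>();
    Range cur = null;
    for (Range r : sorted) {
      if (cur == null) {
        cur = r;
        continue;
      }
      long curEnd = cur.offset + cur.length;
      if (r.offset - curEnd <= maxGap) {
        // Extend the combined range to cover r, including the gap between them.
        cur = new Range(cur.offset, (r.offset + r.length) - cur.offset);
      } else {
        out.add(cur);
        cur = r;
      }
    }
    if (cur != null) {
      out.add(cur);
    }
    return out;
  }

  public static void main(String[] args) {
    List<Range> in = new ArrayList<>();
    in.add(new Range(0, 100));     // [0, 100)
    in.add(new Range(120, 100));   // 20-byte gap: merged when maxGap >= 20
    in.add(new Range(10_000, 50)); // far away: kept separate
    List<Range> merged = merge(in, 64);
    System.out.println(merged.size());        // 2
    System.out.println(merged.get(0).length); // 220
  }
}
```

The review thread's toString() concern applies to exactly this kind of wrapper: the combined range needs a readable representation of both itself and its underlying ranges.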
[GitHub] [hadoop] ardier opened a new pull request #3665: pull-request-creator 1
ardier opened a new pull request #3665:
URL: https://github.com/apache/hadoop/pull/3665

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] haiyang1987 commented on pull request #3664: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 commented on pull request #3664: URL: https://github.com/apache/hadoop/pull/3664#issuecomment-969834330 @tomscut Thank you for your reply. Sorry, the previous PR was closed by accident, so I created this new one. I have fixed it as you suggested and updated the PR.
[GitHub] [hadoop] haiyang1987 edited a comment on pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 edited a comment on pull request #3651: URL: https://github.com/apache/hadoop/pull/3651#issuecomment-969825808 Sorry, the HDFS-16314 code branch was deleted by mistake; I resubmitted the change as PR https://github.com/apache/hadoop/pull/3664
[GitHub] [hadoop] haiyang1987 commented on pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 commented on pull request #3651: URL: https://github.com/apache/hadoop/pull/3651#issuecomment-969825808 Sorry, the HDFS-16314 code branch was deleted by mistake; I resubmitted the change as a new PR.
[GitHub] [hadoop] haiyang1987 closed pull request #3664: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 closed pull request #3664: URL: https://github.com/apache/hadoop/pull/3664
[GitHub] [hadoop] haiyang1987 opened a new pull request #3664: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 opened a new pull request #3664: URL: https://github.com/apache/hadoop/pull/3664 ### Description of PR Make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, so that the HDFS-16076 feature can be rolled back quickly if unexpected problems occur in a production environment. Details: HDFS-16314
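For context, once a NameNode property is registered as reconfigurable, it can typically be reloaded at runtime with the `hdfs dfsadmin -reconfig` subcommands rather than a restart. A sketch of the workflow (the host and port below are placeholders for your NameNode's RPC address):

```shell
# Edit the property value in hdfs-site.xml on the NameNode host, then:
hdfs dfsadmin -reconfig namenode nn-host:8020 start       # begin reconfiguration
hdfs dfsadmin -reconfig namenode nn-host:8020 status      # poll until it finishes
hdfs dfsadmin -reconfig namenode nn-host:8020 properties  # list reconfigurable properties
```

This is exactly the rollback path the PR description is after: flip the value in the config file and apply it without disturbing the running NameNode.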
[GitHub] [hadoop] tomscut commented on pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tomscut commented on pull request #3643: URL: https://github.com/apache/hadoop/pull/3643#issuecomment-969787153 > @tomscut I'd like to cherry-pick it to lower branches, but there are small conflicts. Could you create another PR for branch-3.3? The variable of `name` in `TestFsDatasetImpl` doesn't exist in branch-3.3. Thank you for reminding me. I would love to do this and I will submit a separate PR later.
[GitHub] [hadoop] tasanuma commented on pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tasanuma commented on pull request #3643: URL: https://github.com/apache/hadoop/pull/3643#issuecomment-969782246 @tomscut I'd like to cherry-pick it to lower branches, but there are small conflicts. Could you create another PR for branch-3.3? The variable of `name` in `TestFsDatasetImpl` doesn't exist in branch-3.3.
[GitHub] [hadoop] haiyang1987 closed pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 closed pull request #3651: URL: https://github.com/apache/hadoop/pull/3651
[GitHub] [hadoop] haiyang1987 commented on a change in pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 commented on a change in pull request #3651:
URL: https://github.com/apache/hadoop/pull/3651#discussion_r749867573

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
## @@ -2390,15 +2394,24 @@ String reconfigureParallelLoad(String newVal) {
   String reconfigureSlowNodesParameters(final DatanodeManager datanodeManager,
       final String property, final String newVal) throws ReconfigurationException {
+    BlockManager bm = namesystem.getBlockManager();
     namesystem.writeLock();
     boolean enable;
     try {
-      if (newVal == null) {
-        enable = DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_DEFAULT;
+      if (property.equals(DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY)) {
+        enable = (newVal == null ? DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_DEFAULT :
+            Boolean.parseBoolean(newVal));
+        datanodeManager.setAvoidSlowDataNodesForReadEnabled(enable);
+      } else if (property.equals(
+          DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_KEY)) {
+        enable = (newVal == null ?
+            DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_DEFAULT :
+            Boolean.parseBoolean(newVal));
+        bm.setExculeSlowDataNodesForWriteEnabled(enable);
       } else {
-        enable = Boolean.parseBoolean(newVal);
+        throw new IllegalArgumentException("Unexpected property " +
+            property + "in reconfReplicationParameters");

Review comment: @tomscut Thank you for your reply. Fixed and updated the PR.
[GitHub] [hadoop] tomscut commented on pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tomscut commented on pull request #3643: URL: https://github.com/apache/hadoop/pull/3643#issuecomment-969716964 Thanks @tasanuma for the merge.
[GitHub] [hadoop] GuoPhilipse commented on pull request #3661: HDFS-16324. fix error log in BlockManagerSafeMode
GuoPhilipse commented on pull request #3661:
URL: https://github.com/apache/hadoop/pull/3661#issuecomment-969695706

> > @tomscut Could you kindly help verify? The test error does not seem related to the patch. `[ERROR] testSetRepIncWithUnderReplicatedBlocks(org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks) Time elapsed: 120.023 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out ...` [stack trace quoted in full below]
>
> Can you commit an empty commit to trigger the builder again?

Sure, have just triggered.
[GitHub] [hadoop] tasanuma commented on pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tasanuma commented on pull request #3643: URL: https://github.com/apache/hadoop/pull/3643#issuecomment-969691839 Merged. Thanks for your contribution, @tomscut, and thanks for reviewing it, @ferhui and @ayushtkn.
[GitHub] [hadoop] tasanuma merged pull request #3643: HDFS-16315. Add metrics related to Transfer and NativeCopy for DataNode
tasanuma merged pull request #3643: URL: https://github.com/apache/hadoop/pull/3643
[GitHub] [hadoop] tomscut commented on pull request #3661: HDFS-16324. fix error log in BlockManagerSafeMode
tomscut commented on pull request #3661:
URL: https://github.com/apache/hadoop/pull/3661#issuecomment-969677031

> @tomscut Could you kindly help verify? The test error does not seem related to the patch. `[ERROR] testSetRepIncWithUnderReplicatedBlocks(org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks) Time elapsed: 120.023 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out ...` [stack trace quoted in full below]

Can you commit an empty commit to trigger the builder again?
[GitHub] [hadoop] ferhui commented on pull request #3654: HDFS-16320. Datanode retrieve slownode information from NameNode
ferhui commented on pull request #3654: URL: https://github.com/apache/hadoop/pull/3654#issuecomment-969675032 @tasanuma @ayushtkn Would you also take a look? Thanks.
[GitHub] [hadoop] GuoPhilipse commented on pull request #3661: HDFS-16324. fix error log in BlockManagerSafeMode
GuoPhilipse commented on pull request #3661: URL: https://github.com/apache/hadoop/pull/3661#issuecomment-969671740 @tomscut Could you kindly help verify? The test error seems unrelated to the patch.

```
[ERROR] testSetRepIncWithUnderReplicatedBlocks(org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks)  Time elapsed: 120.023 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 120000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.fs.shell.SetReplication.waitForReplication(SetReplication.java:137)
	at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:78)
	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testSetRepIncWithUnderReplicatedBlocks(TestUnderReplicatedBlocks.java:80)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
```
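The stack trace above comes from JUnit 4's FailOnTimeout statement, which runs the test body on a separate thread and interrupts it once the limit expires — hence the unwinding from Thread.sleep through FutureTask.run. A simplified sketch of that mechanism (illustrative class and method names, not JUnit's actual implementation):

```java
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Run the body on a worker thread; fail if it outlives the limit.
    static String runWithTimeout(Runnable body, long timeoutMs) {
        FutureTask<Void> task = new FutureTask<>(body, null);
        Thread worker = new Thread(task);
        worker.start();
        try {
            // Block until the body finishes, or the timeout elapses.
            task.get(timeoutMs, TimeUnit.MILLISECONDS);
            return "passed";
        } catch (TimeoutException e) {
            // Interrupt the stuck body, as JUnit does to a sleeping test.
            worker.interrupt();
            return "test timed out after " + timeoutMs + " milliseconds";
        } catch (Exception e) {
            return "error: " + e;
        }
    }

    public static void main(String[] args) {
        // A body that sleeps far longer than the limit, like the stuck
        // waitForReplication loop in the report above.
        String result = runWithTimeout(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        }, 100);
        System.out.println(result); // "test timed out after 100 milliseconds"
    }
}
```

A test annotated `@Test(timeout = ...)` that spends its entire budget sleeping in a wait loop fails exactly this way, whether or not the patch under review is at fault.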
[GitHub] [hadoop] GuoPhilipse commented on pull request #3657: YARN-11007. Improve doc
GuoPhilipse commented on pull request #3657: URL: https://github.com/apache/hadoop/pull/3657#issuecomment-969667746 @BilwaST @tangzhankun, could you kindly help review? Thanks
[GitHub] [hadoop] ferhui commented on pull request #3638: HDFS-16313. Add metrics for each subcluster
ferhui commented on pull request #3638: URL: https://github.com/apache/hadoop/pull/3638#issuecomment-969653176 @symious Thanks for the contribution. @goiri Thanks for the review! Merged to trunk.
[GitHub] [hadoop] ferhui merged pull request #3638: HDFS-16313. Add metrics for each subcluster
ferhui merged pull request #3638: URL: https://github.com/apache/hadoop/pull/3638
[GitHub] [hadoop] tomscut commented on a change in pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
tomscut commented on a change in pull request #3651: URL: https://github.com/apache/hadoop/pull/3651#discussion_r749833779

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java

```diff
@@ -2390,15 +2394,24 @@ String reconfigureParallelLoad(String newVal) {
   String reconfigureSlowNodesParameters(final DatanodeManager datanodeManager,
       final String property, final String newVal) throws ReconfigurationException {
+    BlockManager bm = namesystem.getBlockManager();
     namesystem.writeLock();
     boolean enable;
     try {
-      if (newVal == null) {
-        enable = DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_DEFAULT;
+      if (property.equals(DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY)) {
+        enable = (newVal == null ? DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_DEFAULT :
+            Boolean.parseBoolean(newVal));
+        datanodeManager.setAvoidSlowDataNodesForReadEnabled(enable);
+      } else if (property.equals(
+          DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_KEY)) {
+        enable = (newVal == null ?
+            DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_DEFAULT :
+            Boolean.parseBoolean(newVal));
+        bm.setExculeSlowDataNodesForWriteEnabled(enable);
       } else {
-        enable = Boolean.parseBoolean(newVal);
+        throw new IllegalArgumentException("Unexpected property " +
+            property + "in reconfReplicationParameters");
       }
```

Review comment: There is a space missing and the method name needs to be changed to `reconfigureSlowNodesParameters`. BTW, please add a space in line `2237`. Thanks. The other changes look good to me.
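For readers following the review: the code under discussion dispatches on the property name and falls back to a default when the new value is null. A hedged sketch of that dispatch pattern (hypothetical class, key, and setter names — not the actual NameNode code — with the space the reviewer asked for added to the exception message):

```java
public class ReconfigureSketch {
    // Hypothetical stand-ins for the DFS_NAMENODE_* constants.
    static final String READ_KEY = "dfs.namenode.avoid.read.slow.datanode";
    static final String WRITE_KEY =
        "dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled";
    static final boolean READ_DEFAULT = false;
    static final boolean WRITE_DEFAULT = false;

    // A null newVal means "revert to the default", matching the reviewed code.
    static boolean parseOrDefault(String newVal, boolean def) {
        return newVal == null ? def : Boolean.parseBoolean(newVal);
    }

    static String reconfigure(String property, String newVal) {
        if (property.equals(READ_KEY)) {
            return "avoidSlowDataNodesForRead=" + parseOrDefault(newVal, READ_DEFAULT);
        } else if (property.equals(WRITE_KEY)) {
            return "excludeSlowNodesForWrite=" + parseOrDefault(newVal, WRITE_DEFAULT);
        } else {
            // Note the space before "in" -- the omission of this space is
            // exactly what the review comment points out.
            throw new IllegalArgumentException("Unexpected property " + property
                + " in reconfigureSlowNodesParameters");
        }
    }

    public static void main(String[] args) {
        System.out.println(reconfigure(READ_KEY, "true"));
        System.out.println(reconfigure(WRITE_KEY, null));
    }
}
```

The unknown-property branch matters because a reconfiguration error message is often the only clue an operator gets; a missing space or a stale method name in it is worth a review round-trip.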
[GitHub] [hadoop] tomscut opened a new pull request #3663: HDFS-16326. Simplify the code for DiskBalancer
tomscut opened a new pull request #3663: URL: https://github.com/apache/hadoop/pull/3663 JIRA: [HDFS-16326](https://issues.apache.org/jira/browse/HDFS-16326). Simplify the code for DiskBalancer.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3646: YARN-11003. Make RMNode aware of all (OContainer inclusive) allocated resources
hadoop-yetus commented on pull request #3646: URL: https://github.com/apache/hadoop/pull/3646#issuecomment-969502883 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 56s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 1s | | trunk passed | | +1 :green_heart: | compile | 21m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 19s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 4s | | trunk passed | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 23s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 22m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 22m 23s | | the patch passed | | +1 :green_heart: | compile | 19m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 37s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3646/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 87 unchanged - 2 fixed = 89 total (was 89) | | +1 :green_heart: | mvnsite | 2m 4s | | the patch passed | | +1 :green_heart: | javadoc | 1m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 43s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 35s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 96m 4s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 12m 40s | | hadoop-sls in the patch passed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 303m 20s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3646/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3646 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 0dc4f999a317 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2ee6bf8f93f7c4b1825ec367d4fbf76ae66d2fc4 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3646/3/testReport/ | | Max. process+thread count | 966 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-tools/hadoop-sls U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3646/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | |
[jira] [Work logged] (HADOOP-17999) ViewFileSystem.setVerifyChecksum should not initialize all target filesystems
[ https://issues.apache.org/jira/browse/HADOOP-17999?focusedWorklogId=681690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681690 ] ASF GitHub Bot logged work on HADOOP-17999: --- Author: ASF GitHub Bot Created on: 15/Nov/21 20:29 Start Date: 15/Nov/21 20:29 Worklog Time Spent: 10m Work Description: shvachko commented on a change in pull request #3639: URL: https://github.com/apache/hadoop/pull/3639#discussion_r749658488

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java

```diff
@@ -917,13 +917,9 @@ public void removeXAttr(Path path, String name) throws IOException {
   }

   @Override
-  public void setVerifyChecksum(final boolean verifyChecksum) {
-    List<InodeTree.MountPoint<FileSystem>> mountPoints =
-        fsState.getMountPoints();
-    Map<String, FileSystem> fsMap = initializeMountedFileSystems(mountPoints);
-    for (InodeTree.MountPoint<FileSystem> mount : mountPoints) {
-      fsMap.get(mount.src).setVerifyChecksum(verifyChecksum);
-    }
+  public void setVerifyChecksum(final boolean verifyChecksum) {
```

Review comment: Looks like the default impl. in FileSystem is already a no-op, so the value of overriding is only in the comment. Maybe we should just remove these methods and add a comment to the jira explaining the change.

Issue Time Tracking --- Worklog Id: (was: 681690) Time Spent: 40m (was: 0.5h)

> ViewFileSystem.setVerifyChecksum should not initialize all target filesystems
> -
>
> Key: HADOOP-17999
> URL: https://issues.apache.org/jira/browse/HADOOP-17999
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Abhishek Das
> Assignee: Abhishek Das
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Currently setVerifyChecksum and setWriteChecksum initialize all target file
> systems, which causes delay in hadoop shell copy commands such as get, put,
> copyFromLocal etc.
> This also eventually causes OOM.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
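The reviewer's point — an override whose body matches the base class's no-op default can simply be deleted — can be shown with a small self-contained sketch (the class names below are made up, not the real ViewFileSystem hierarchy):

```java
public class NoOpOverrideDemo {
    // Stand-in for FileSystem: setVerifyChecksum is a no-op by default.
    abstract static class BaseFs {
        public void setVerifyChecksum(boolean verifyChecksum) {
            // no-op: filesystems that don't support checksums need not override
        }
    }

    // Stand-in for ViewFileSystem after the patch: no override at all.
    // Calling setVerifyChecksum falls through to the base no-op, so no
    // mounted child filesystem has to be initialized just to receive a
    // flag it would ignore anyway.
    static class ViewFsSketch extends BaseFs {
    }

    public static void main(String[] args) {
        new ViewFsSketch().setVerifyChecksum(true); // inherited no-op
        System.out.println("setVerifyChecksum inherited the base no-op");
    }
}
```

The design trade-off the comment mentions: keeping an empty override only documents intent, while deleting it avoids the eager `initializeMountedFileSystems` call that was causing the slow shell commands and OOMs described in the jira.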
[jira] [Work logged] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?focusedWorklogId=681681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681681 ] ASF GitHub Bot logged work on HADOOP-18011: --- Author: ASF GitHub Bot Created on: 15/Nov/21 19:41 Start Date: 15/Nov/21 19:41 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3662: URL: https://github.com/apache/hadoop/pull/3662#issuecomment-969255405
[GitHub] [hadoop] hadoop-yetus commented on pull request #3662: HADOOP-18011. ABFS: Configurable HTTP connection and read timeouts
hadoop-yetus commented on pull request #3662: URL: https://github.com/apache/hadoop/pull/3662#issuecomment-969255405 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 40s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 19s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3662/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | -1 :x: | javadoc | 0m 24s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3662/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) | | -1 :x: | javadoc | 0m 22s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3662/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) | | +1 :green_heart: | spotbugs | 1m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 33s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 59s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 89m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3662/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3662 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 19704f6e0dfd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8e06077c489200f35a487ce0952350e03599a641 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions |
[GitHub] [hadoop] goiri commented on a change in pull request #3646: YARN-11003. Make RMNode aware of all (OContainer inclusive) allocated resources
goiri commented on a change in pull request #3646: URL: https://github.com/apache/hadoop/pull/3646#discussion_r749613001 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java ## @@ -358,6 +375,119 @@ public void testContainerUpdate() throws InterruptedException{ .getContainerId()); } + /** + * Tests that allocated container resources are counted correctly in + * {@link org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode} + * upon a node update. Resources should be counted for both GUARANTEED + * and OPPORTUNISTIC containers. + */ + @Test (timeout = 5000) + public void testAllocatedContainerUpdate() { +NodeStatus mockNodeStatus = createMockNodeStatus(); +//Start the node +node.handle(new RMNodeStartedEvent(null, null, null, mockNodeStatus)); + +// Make sure that the node starts with no allocated resources +Assert.assertEquals(node.getAllocatedContainerResource(), Resources.none()); + +ApplicationId app0 = BuilderUtils.newApplicationId(0, 0); +final ContainerId newContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 0); +final ContainerId runningContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 1); + +rmContext.getRMApps().put(app0, Mockito.mock(RMApp.class)); + +RMNodeStatusEvent statusEventFromNode1 = getMockRMNodeStatusEvent(null); + +final List containerStatuses = new ArrayList<>(); + +// Use different memory and VCores for new and running state containers +// to test that they add up correctly +final Resource newContainerCapability = +Resource.newInstance(100, 1); +final Resource runningContainerCapability = +Resource.newInstance(200, 2); +final Resource completedContainerCapability = +Resource.newInstance(50, 3); +final ContainerStatus newContainerStatusFromNode = getMockContainerStatus( +newContainerId, newContainerCapability, 
ContainerState.NEW); +final ContainerStatus runningContainerStatusFromNode = +getMockContainerStatus(runningContainerId, runningContainerCapability, +ContainerState.RUNNING); + +containerStatuses.addAll(Arrays.asList( +newContainerStatusFromNode, runningContainerStatusFromNode)); +doReturn(containerStatuses).when(statusEventFromNode1).getContainers(); +node.handle(statusEventFromNode1); +Assert.assertEquals(node.getAllocatedContainerResource(), +Resource.newInstance(300, 3)); + +final ContainerId newOppContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 2); +final ContainerId runningOppContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 3); + +// Use the same resource capability as in previous for opportunistic case +RMNodeStatusEvent statusEventFromNode2 = getMockRMNodeStatusEvent(null); +final ContainerStatus newOppContainerStatusFromNode = +getMockContainerStatus(newOppContainerId, newContainerCapability, +ContainerState.NEW, ExecutionType.OPPORTUNISTIC); +final ContainerStatus runningOppContainerStatusFromNode = +getMockContainerStatus(runningOppContainerId, +runningContainerCapability, ContainerState.RUNNING, +ExecutionType.OPPORTUNISTIC); + +containerStatuses.addAll(Arrays.asList( +newOppContainerStatusFromNode, runningOppContainerStatusFromNode)); + +// Pass in both guaranteed and opportunistic container statuses +doReturn(containerStatuses).when(statusEventFromNode2).getContainers(); + +node.handle(statusEventFromNode2); + +// The result here should be double the first check, +// since allocated resources are doubled, just +// with different execution types +Assert.assertEquals(node.getAllocatedContainerResource(), +Resource.newInstance(600, 6)); + +RMNodeStatusEvent statusEventFromNode3 = getMockRMNodeStatusEvent(null); +final ContainerId completedContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 4); +final ContainerId 
completedOppContainerId = BuilderUtils.newContainerId( +BuilderUtils.newApplicationAttemptId(app0, 0), 5); +final ContainerStatus completedContainerStatusFromNode = +getMockContainerStatus(completedContainerId, completedContainerCapability, +ContainerState.COMPLETE, ExecutionType.OPPORTUNISTIC); +final ContainerStatus completedOppContainerStatusFromNode = +getMockContainerStatus(completedOppContainerId, +completedContainerCapability, ContainerState.COMPLETE, +ExecutionType.OPPORTUNISTIC); + +containerStatuses.addAll(Arrays.asList( +completedContainerStatusFromNode,
[jira] [Work logged] (HADOOP-17409) Remove S3Guard - no longer needed
[ https://issues.apache.org/jira/browse/HADOOP-17409?focusedWorklogId=681669&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681669 ] ASF GitHub Bot logged work on HADOOP-17409: --- Author: ASF GitHub Bot Created on: 15/Nov/21 19:08 Start Date: 15/Nov/21 19:08 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-969227284 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 1s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 107 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 49s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 10s | | trunk passed | | +1 :green_heart: | compile | 28m 7s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 23m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 48s | | trunk passed | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 31s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | -1 :x: | compile | 25m 45s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 25m 45s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 22m 18s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 22m 18s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 4m 11s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 15 new + 71 unchanged - 83 fixed = 86 total (was 154) | | -1 :x: | mvnsite | 0m 40s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javadoc | 0m 43s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3534: HADOOP-17409. Remove s3guard from S3A module
hadoop-yetus commented on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-969227284 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 1s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 107 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 49s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 10s | | trunk passed | | +1 :green_heart: | compile | 28m 7s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 23m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 48s | | trunk passed | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 31s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 10s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. 
| | -1 :x: | compile | 25m 45s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 25m 45s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 22m 18s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 22m 18s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 11s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 15 new + 71 unchanged - 83 fixed = 86 total (was 154) | | -1 :x: | mvnsite | 0m 40s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. 
| | +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javadoc | 0m 43s | [/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3534/6/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-aws in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | spotbugs | 0m 38s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3661: HDFS-16324. fix error log in BlockManagerSafeMode
hadoop-yetus commented on pull request #3661: URL: https://github.com/apache/hadoop/pull/3661#issuecomment-969219671 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 26s | | trunk passed | | +1 :green_heart: | compile | 5m 12s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 52s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 22s | | trunk passed | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 44s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 2s | | the patch passed | | +1 :green_heart: | compile | 5m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 5m 12s | | the patch passed | | +1 :green_heart: | compile | 4m 40s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 40s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3661/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 133 new + 34 unchanged - 0 fixed = 167 total (was 34) | | +1 :green_heart: | mvnsite | 2m 3s | | the patch passed | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 50s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 37s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 36s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 25s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 226m 6s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3661/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 352m 51s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3661/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3661 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1500bb72bf63 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 93e21da8e1ee16ce52580927e2c40d2b43c1c271 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3661/1/testReport/ | | Max. process+thread count | 3382 (vs.
[jira] [Work logged] (HADOOP-17999) ViewFileSystem.setVerifyChecksum should not initialize all target filesystems
[ https://issues.apache.org/jira/browse/HADOOP-17999?focusedWorklogId=681666=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681666 ] ASF GitHub Bot logged work on HADOOP-17999: --- Author: ASF GitHub Bot Created on: 15/Nov/21 18:56 Start Date: 15/Nov/21 18:56 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3639: URL: https://github.com/apache/hadoop/pull/3639#issuecomment-969216656 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | -1 :x: | compile | 0m 23s | [/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 23s | [/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. 
| | -0 :warning: | checkstyle | 0m 20s | [/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt) | The patch fails to run checkstyle in hadoop-common | | -1 :x: | mvnsite | 0m 23s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | -1 :x: | javadoc | 0m 23s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-common in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | spotbugs | 0m 23s | [/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | +1 :green_heart: | shadedclient | 2m 38s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | compile | 0m 23s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 23s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 23s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3639: HADOOP-17999: Do not initialize all target FileSystems for setWriteChecksum and setVerifyChecksum
hadoop-yetus commented on pull request #3639: URL: https://github.com/apache/hadoop/pull/3639#issuecomment-969216656 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | -1 :x: | compile | 0m 23s | [/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 23s | [/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. 
| | -0 :warning: | checkstyle | 0m 20s | [/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt) | The patch fails to run checkstyle in hadoop-common | | -1 :x: | mvnsite | 0m 23s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | -1 :x: | javadoc | 0m 23s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-common in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | spotbugs | 0m 23s | [/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | +1 :green_heart: | shadedclient | 2m 38s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | compile | 0m 23s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 23s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 23s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3639/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 23s |
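The change referenced above (HADOOP-17999) applies checksum settings only to child filesystems that have already been initialized, instead of forcing every mount-table target to initialize. A rough, self-contained sketch of that idea follows; all class and method names here are invented for illustration and are not the actual ViewFileSystem code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative mount table: children are created lazily on first access,
// and setVerifyChecksum only touches children that already exist.
class LazyMountTable {
    static class ChildFs {
        boolean verifyChecksum = true;
        void setVerifyChecksum(boolean v) { verifyChecksum = v; }
    }

    private final Map<String, Supplier<ChildFs>> factories = new HashMap<>();
    private final Map<String, ChildFs> initialized = new HashMap<>();
    private boolean verifyChecksum = true; // remembered for children created later

    void addMount(String path, Supplier<ChildFs> factory) {
        factories.put(path, factory);
    }

    ChildFs get(String path) {
        // Create on first access, inheriting the current checksum setting.
        return initialized.computeIfAbsent(path, p -> {
            ChildFs fs = factories.get(p).get();
            fs.setVerifyChecksum(verifyChecksum);
            return fs;
        });
    }

    void setVerifyChecksum(boolean v) {
        verifyChecksum = v;
        // Key point: iterate only the already-initialized children.
        for (ChildFs fs : initialized.values()) {
            fs.setVerifyChecksum(v);
        }
    }

    int initializedCount() { return initialized.size(); }
}
```

Calling `setVerifyChecksum` on an empty table initializes nothing; the setting is simply recorded and applied when a child is eventually created.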
[jira] [Commented] (HADOOP-18012) ABFS: Modify Rename idempotency code
[ https://issues.apache.org/jira/browse/HADOOP-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444030#comment-17444030 ] Sneha Vijayarajan commented on HADOOP-18012: Need to fix test failures seen along with this (as they overlap): [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer > ABFS: Modify Rename idempotency code > > > Key: HADOOP-18012 > URL: https://issues.apache.org/jira/browse/HADOOP-18012 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > > The ABFS driver has handling for rename idempotency that relies on the LMT of the > destination file to conclude whether the rename was successful when the source > file is absent and the rename request had entered the retry loop. > This handling is incorrect, as the LMT of the destination does not change on > rename. > This Jira will track the change to undo the current implementation and add a > new one where, for an incoming rename operation, the source file eTag is fetched > first and the rename is done only if the eTag matches for the source file. > As this is going to be a costly operation, given an extra HEAD request is > added to each rename, this implementation will be guarded by a config and > can be enabled by customers who have workloads that do multiple renames. > The long-term plan to handle rename idempotency without a HEAD request is being > discussed. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
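The eTag-guarded rename described in the issue above can be sketched as follows. This is a hedged, self-contained illustration (the store, `head`, and `rename` names are invented stand-ins, not the actual ABFS driver API): the source eTag is fetched first, and the rename proceeds only if the eTag still matches, so a retried rename is not mistaken for a fresh one:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of eTag-guarded rename idempotency.
class EtagGuardedRename {
    // Simulated store: path -> eTag (stands in for the remote filesystem).
    final Map<String, String> store = new ConcurrentHashMap<>();

    // Stands in for the extra HEAD request that fetches the source eTag.
    String head(String path) {
        return store.get(path);
    }

    boolean rename(String src, String dst, String expectedEtag) {
        String current = head(src);
        if (current == null || !current.equals(expectedEtag)) {
            // Source is gone or changed: a previous (retried) rename likely
            // already succeeded, so do not rename again.
            return false;
        }
        store.put(dst, store.remove(src));
        return true;
    }
}
```

The trade-off the issue notes is visible here: every rename pays one extra `head` lookup, which is why the behavior is put behind a config.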
[jira] [Work logged] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?focusedWorklogId=681659=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681659 ] ASF GitHub Bot logged work on HADOOP-18011: --- Author: ASF GitHub Bot Created on: 15/Nov/21 18:35 Start Date: 15/Nov/21 18:35 Worklog Time Spent: 10m Work Description: snvijaya commented on pull request #3662: URL: https://github.com/apache/hadoop/pull/3662#issuecomment-969201299 ::: AGGREGATED TEST RESULT HNS-OAuth [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 1 [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 0, Errors: 2, Skipped: 98 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 52 HNS-SharedKey [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 2 [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:349->lambda$testAcquireRetry$7:350 » TestTimedOut [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 0, Errors: 3, Skipped: 67 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 40 NonHNS-SharedKey [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 2 [INFO] Results: [INFO] [ERROR] Failures: [ERROR] 
ITestAbfsRestOperationException.testCustomTokenFetchRetryCount:93->testWithDifferentCustomTokenFetchRetry:122->Assert.assertTrue:42->Assert.fail:89 Number of token fetch retries (4) done, does not match with fs.azure.custom.token.fetch.retry.count configured (0) [ERROR] ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS:181 Expecting org.apache.hadoop.security.AccessControlException with text "This request is not authorized to perform this operation using this permission.", 403 but got : "void" [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 2, Errors: 2, Skipped: 276 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 40 Failed tests are not related to the change and different JIRAs are tracking the fixes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681659) Time Spent: 20m (was: 10m) > ABFS: Enable config control for default connection timeout > --- > > Key: HADOOP-18011 > URL: https://issues.apache.org/jira/browse/HADOOP-18011 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > ABFS driver has a default connection timeout and read timeout value of 30 > secs. 
For jobs that are time-sensitive, the preference would be to fail quickly and > have shorter HTTP connection and read timeouts. > This Jira is created to enable config control over the default connection and > read timeouts. > New config names: > fs.azure.http.connection.timeout > fs.azure.http.read.timeout -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
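The two configs named above control the HTTP connect timeout and the read timeout. A minimal sketch of what they govern, using plain `java.net` rather than the ABFS driver (the class name, the `Properties`-based lookup, and the 30000 ms fallback are illustrative assumptions, not code from the patch):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Properties;

// Illustrative only: shows which socket behaviors the two configs bound.
class AbfsTimeoutConfigDemo {
    static HttpURLConnection open(Properties conf, String url) {
        // Connect timeout bounds TCP connection establishment;
        // read timeout bounds each blocking read on the response.
        int connectMs = Integer.parseInt(
                conf.getProperty("fs.azure.http.connection.timeout", "30000"));
        int readMs = Integer.parseInt(
                conf.getProperty("fs.azure.http.read.timeout", "30000"));
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection(); // no I/O yet
            conn.setConnectTimeout(connectMs);
            conn.setReadTimeout(readMs);
            return conn;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A time-sensitive job would set both values low so a hung endpoint surfaces as a fast failure instead of a 30-second stall per attempt.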
[GitHub] [hadoop] snvijaya commented on pull request #3662: HADOOP-18011. ABFS: Configurable HTTP connection and read timeouts
snvijaya commented on pull request #3662: URL: https://github.com/apache/hadoop/pull/3662#issuecomment-969201299 ::: AGGREGATED TEST RESULT HNS-OAuth [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 1 [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 0, Errors: 2, Skipped: 98 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 52 HNS-SharedKey [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 2 [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:349->lambda$testAcquireRetry$7:350 » TestTimedOut [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 0, Errors: 3, Skipped: 67 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 40 NonHNS-SharedKey [INFO] Results: [INFO] [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 2 [INFO] Results: [INFO] [ERROR] Failures: [ERROR] ITestAbfsRestOperationException.testCustomTokenFetchRetryCount:93->testWithDifferentCustomTokenFetchRetry:122->Assert.assertTrue:42->Assert.fail:89 Number of token fetch retries (4) done, does not match with fs.azure.custom.token.fetch.retry.count configured (0) [ERROR] ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS:181 Expecting org.apache.hadoop.security.AccessControlException with text "This request is not authorized to perform this 
operation using this permission.", 403 but got : "void" [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemDelete.testDeleteIdempotencyTriggerHttp404:269 » NullPointer [ERROR] ITestAzureBlobFileSystemRename.testRenameIdempotencyTriggerHttpNotFound:232->testRenameIdempotencyTriggerChecks:268->lambda$testRenameIdempotencyTriggerChecks$0:269 » NullPointer [INFO] [ERROR] Tests run: 560, Failures: 2, Errors: 2, Skipped: 276 [INFO] Results: [INFO] [WARNING] Tests run: 259, Failures: 0, Errors: 0, Skipped: 40 Failed tests are not related to the change and different JIRAs are tracking the fixes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in
[ https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444021#comment-17444021 ] Prabhu Joseph edited comment on HADOOP-17996 at 11/15/21, 6:18 PM: --- Thanks [~brahmareddy] for reviewing the patch. {quote}this was just to track the re-login attempt so that so many retries can be avoided.? {quote} There are two issues the patch addresses: 1. When IPC#Client fails during {{{}saslConnect{}}}, it does a re-login from {{{}handleSaslConnectionFailure{}}}. The re-login sets the last login time to the current time irrespective of the login status, followed by a logout and then a login. When the login fails for some reason, such as an intermittent issue connecting to AD, all subsequent Client and Server operations will fail with GSS Initiate Failed for the next configured {{kerberosMinSecondsBeforeRelogin}} interval (60 seconds). {code:java} // try re-login if (UserGroupInformation.isLoginKeytabBased()) { UserGroupInformation.getLoginUser().reloginFromKeytab(); } else if (UserGroupInformation.isLoginTicketBased()) { UserGroupInformation.getLoginUser().reloginFromTicketCache(); } {code} This issue is addressed by setting the last login time to the current time only after the login succeeds. 2. Currently the re-login happens only from IPC#Client during {{{}handleSaslConnectionFailure(){}}}. We have observed cases where the Client has logged out and failed to log back in, leading to all IPC#Server operations failing in {{processSaslMessage}} with the below error. {code:java} 2021-11-02 13:28:08,750 WARN ipc.Server - Auth failed for 10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate failed) 2021-11-02 13:28:08,767 WARN ipc.Server - Auth failed for 10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate failed) {code} This patch adds re-login on the Server side as well during any Authentication Failure. {quote}Configuring kerberosMinSecondsBeforeRelogin with low value will not work here if it's needed.? 
{quote} This will workaround the first issue. {quote}After this fix , on failure it will continuously retry..? {quote} IPC#Client does re-login during Connection Failure. This patch adds at IPC#Server side as well. Retries are based on the retry mechanism of IPC#Client and IPC#Server. The real kerberos login will happen for every retry from IPC#Client and IPC#Server till the login succeeds. was (Author: prabhu joseph): Thanks [~brahmareddy] for reviewing the patch. {quote}this was just to track the re-login attempt so that so many retries can be avoided.? {quote} There are two issues the patch tries to address 1. When IPC#Client fails during {{{}saslConnect{}}}, it does re-login from {{{}handleSaslConnectionFailure{}}}. The re-login sets the last login time to current time irrespective of the login status, followed by logout and then login. When login fails for some reason like intermittent issue in connecting to AD, then all subsequent Client and Server operations will fail with GSS Initiate Failed for next configured {{kerberosMinSecondsBeforeLogin}} (60 seconds). {code:java} // try re-login if (UserGroupInformation.isLoginKeytabBased()) { UserGroupInformation.getLoginUser().reloginFromKeytab(); } else if (UserGroupInformation.isLoginTicketBased()) { UserGroupInformation.getLoginUser().reloginFromTicketCache(); } {code} This issue is addressed by setting the last login time to current time after the login succeeds. 2. Currently the re-login happens only from IPC#Client during {{{}handleSaslConnectionFailure(){}}}. Have observed cases where Client has logged out and have failed to login back leading to all IPC#Server operations failing in {{processSaslMessage}} with below error. 
{code:java} 2021-11-02 13:28:08,750 WARN ipc.Server - Auth failed for 10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate failed) 2021-11-02 13:28:08,767 WARN ipc.Server - Auth failed for 10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate failed) {code} This patch adds re-login from Server side as well during any Authentication Failure. bq. Configuring kerberosMinSecondsBeforeRelogin with low value will not work here if it's needed.? This will workaround the first issue. bq. After this fix , on failure it will continuously retry..? IPC#Client does re-login during Connection Failure. This patch adds at IPC#Server side as well. Retries are based on the retry mechanism of IPC#Client and IPC#Server. The real kerberos login will happen for every retry from IPC#Client and IPC#Server till the login succeeds. > UserGroupInformation#unprotectedRelogin sets the last login time before > logging in >
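The ordering fix in point 1 can be illustrated with a minimal, self-contained model (illustrative Java only; {{ReloginThrottle}} and its methods are invented for this example and are not Hadoop's actual UserGroupInformation API):

```java
import java.util.function.BooleanSupplier;

// Minimal model of the relogin throttle: a relogin attempt is skipped when
// the last recorded login time falls within the configured minimum window.
class ReloginThrottle {
    private final long minMillisBetweenRelogin;
    // Start far in the past so the very first attempt is never throttled.
    private long lastReloginTime = Long.MIN_VALUE / 2;

    ReloginThrottle(long minMillisBetweenRelogin) {
        this.minMillisBetweenRelogin = minMillisBetweenRelogin;
    }

    // Buggy ordering: the timestamp is recorded before the login outcome is
    // known, so a FAILED login still blocks retries for the whole window.
    boolean reloginBuggy(long now, BooleanSupplier login) {
        if (now - lastReloginTime < minMillisBetweenRelogin) return false;
        lastReloginTime = now;               // recorded unconditionally
        return login.getAsBoolean();
    }

    // Patched ordering: the timestamp is recorded only after a successful
    // login, so a failed attempt can be retried right away.
    boolean reloginFixed(long now, BooleanSupplier login) {
        if (now - lastReloginTime < minMillisBetweenRelogin) return false;
        boolean ok = login.getAsBoolean();
        if (ok) lastReloginTime = now;       // recorded only on success
        return ok;
    }
}
```

With a 60-second window, a login that fails at t=0 under the buggy ordering blocks a retry one second later, even though nothing ever succeeded; the fixed ordering allows that retry.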
[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in
[ https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444021#comment-17444021 ] Prabhu Joseph commented on HADOOP-17996: Thanks [~brahmareddy] for reviewing the patch.

{quote}this was just to track the re-login attempt so that so many retries can be avoided.?{quote}

The patch addresses two issues:

1. When IPC#Client fails during {{saslConnect}}, it re-logs in from {{handleSaslConnectionFailure}}. The re-login sets the last login time to the current time irrespective of the login status, then performs a logout followed by a login. When the login fails for some reason, such as an intermittent issue connecting to AD, all subsequent Client and Server operations fail with GSS Initiate Failed for the next configured {{kerberosMinSecondsBeforeLogin}} interval (60 seconds).

{code:java}
// try re-login
if (UserGroupInformation.isLoginKeytabBased()) {
  UserGroupInformation.getLoginUser().reloginFromKeytab();
} else if (UserGroupInformation.isLoginTicketBased()) {
  UserGroupInformation.getLoginUser().reloginFromTicketCache();
}
{code}

This issue is addressed by setting the last login time to the current time only after the login succeeds.

2. Currently the re-login happens only from IPC#Client, during {{handleSaslConnectionFailure()}}. We have observed cases where the Client logged out and failed to log back in, leaving all IPC#Server operations failing in {{processSaslMessage}} with the error below.

{code:java}
2021-11-02 13:28:08,750 WARN ipc.Server - Auth failed for 10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate failed)
2021-11-02 13:28:08,767 WARN ipc.Server - Auth failed for 10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate failed)
{code}

This patch adds a re-login on the Server side as well on any authentication failure.

{quote}Configuring kerberosMinSecondsBeforeRelogin with low value will not work here if it's needed.?{quote}

That would work around the first issue.

{quote}After this fix , on failure it will continuously retry..?{quote}

IPC#Client re-logs in on a connection failure; this patch adds the same on the IPC#Server side. Retries are governed by the retry mechanisms of IPC#Client and IPC#Server. A real Kerberos login is attempted on every retry from IPC#Client and IPC#Server until it succeeds.

> UserGroupInformation#unprotectedRelogin sets the last login time before
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 3.3.1
> Reporter: Prabhu Joseph
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HADOOP-17996.001.patch
>
> UserGroupInformation#unprotectedRelogin sets the last login time before
> logging in. IPC#Client does reloginFromKeytab when there is a connection
> reset failure from AD, which does a logout, sets the last login time to now,
> and then tries to login. The login also fails, as it is not able to connect to AD.
> Then the reattempt does not happen, as the kerberosMinSecondsBeforeRelogin
> check fails. All Client and Server operations fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN ha.EditLogTailer - Unable to trigger a roll of
> the active NN
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.security.KerberosAuthException: DestHost:destPort
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException:
> Connection reset
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
> at
[jira] [Updated] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18011: Labels: pull-request-available (was: )

> ABFS: Enable config control for default connection timeout
> ---
>
> Key: HADOOP-18011
> URL: https://issues.apache.org/jira/browse/HADOOP-18011
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The ABFS driver has default connection and read timeout values of 30 seconds.
> For jobs that are time sensitive, the preference would be quick failure, with
> shorter HTTP connection and read timeouts.
> This Jira is created to enable config control over the default connection and
> read timeouts.
> New config names:
> fs.azure.http.connection.timeout
> fs.azure.http.read.timeout

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?focusedWorklogId=681648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681648 ] ASF GitHub Bot logged work on HADOOP-18011: --- Author: ASF GitHub Bot Created on: 15/Nov/21 18:10 Start Date: 15/Nov/21 18:10 Worklog Time Spent: 10m Work Description: snvijaya opened a new pull request #3662: URL: https://github.com/apache/hadoop/pull/3662 The ABFS driver has default connection and read timeout values of 30 seconds. For jobs that are time sensitive, the preference would be quick failure, with shorter HTTP connection and read timeouts. This change introduces two configs, fs.azure.http.connection.timeout and fs.azure.http.read.timeout, that allow custom values to be configured for the default HTTP connection and read timeouts. All the integration tests were run over ABFS accounts (results pasted in the PR conversation tab). New checks are added to the tests for the config update and socket timeout. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681648) Remaining Estimate: 0h Time Spent: 10m

> ABFS: Enable config control for default connection timeout
> ---
>
> Key: HADOOP-18011
> URL: https://issues.apache.org/jira/browse/HADOOP-18011
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The ABFS driver has default connection and read timeout values of 30 seconds.
> For jobs that are time sensitive, the preference would be quick failure, with
> shorter HTTP connection and read timeouts.
> This Jira is created to enable config control over the default connection and
> read timeouts.
> New config names:
> fs.azure.http.connection.timeout
> fs.azure.http.read.timeout

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] snvijaya opened a new pull request #3662: HADOOP-18011. ABFS: Configurable HTTP connection and read timeouts
snvijaya opened a new pull request #3662: URL: https://github.com/apache/hadoop/pull/3662 The ABFS driver has default connection and read timeout values of 30 seconds. For jobs that are time sensitive, the preference would be quick failure, with shorter HTTP connection and read timeouts. This change introduces two configs, fs.azure.http.connection.timeout and fs.azure.http.read.timeout, that allow custom values to be configured for the default HTTP connection and read timeouts. All the integration tests were run over ABFS accounts (results pasted in the PR conversation tab). New checks are added to the tests for the config update and socket timeout. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
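A rough sketch of how such timeout configs typically plug into an HTTP connection (the config key names come from the PR; the helper method, the flat key/value map, and the 30-second default wiring below are assumptions for illustration, not the actual ABFS driver code):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

public class AbfsTimeoutSketch {
    // 30s default, matching the value quoted in the PR description.
    static final int DEFAULT_TIMEOUT_MS = 30_000;

    // Read a timeout (in ms) from a flat key/value config, falling back
    // to the default when the key is unset.
    static int timeoutMs(Map<String, String> conf, String key) {
        String v = conf.get(key);
        return v == null ? DEFAULT_TIMEOUT_MS : Integer.parseInt(v);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> conf = Map.of(
            "fs.azure.http.connection.timeout", "5000",
            "fs.azure.http.read.timeout", "10000");
        // openConnection() does not touch the network, so this runs offline.
        HttpURLConnection c = (HttpURLConnection)
            new URL("https://example.com/").openConnection();
        c.setConnectTimeout(timeoutMs(conf, "fs.azure.http.connection.timeout"));
        c.setReadTimeout(timeoutMs(conf, "fs.azure.http.read.timeout"));
        System.out.println(c.getConnectTimeout() + " " + c.getReadTimeout());
    }
}
```

With both keys set as above, a connect would fail after 5 s and a stalled read after 10 s instead of the 30 s default.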
[jira] [Created] (HADOOP-18012) ABFS: Modify Rename idempotency code
Sneha Vijayarajan created HADOOP-18012: -- Summary: ABFS: Modify Rename idempotency code Key: HADOOP-18012 URL: https://issues.apache.org/jira/browse/HADOOP-18012 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Affects Versions: 3.3.1 Reporter: Sneha Vijayarajan Assignee: Sneha Vijayarajan

The ABFS driver has handling for rename idempotency that relies on the LMT (last modified time) of the destination file to conclude whether the rename succeeded when the source file is absent and the rename request had entered the retry loop. This handling is incorrect, as the LMT of the destination does not change on rename.

This Jira will track the change to undo the current implementation and add a new one where, for an incoming rename operation, the source file's eTag is fetched first and the rename is performed only if the eTag still matches for the source file. As this is a costly operation, given that an extra HEAD request is added to each rename, the implementation will be guarded by a config and can be enabled by customers whose workloads perform many renames. A long-term plan to handle rename idempotency without the HEAD request is being discussed.

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
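The proposed flow can be sketched with a toy in-memory store (illustrative only; the class and method names are invented, and the real driver would issue a HEAD request for the eTag and a server-side rename):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Toy model: paths map to eTags. A rename retry is considered safe only if
// the source still carries the eTag read before the first attempt.
class EtagRenameSketch {
    final Map<String, String> etags = new HashMap<>();

    // Step 1 of the proposal: fetch the source eTag up front (this stands
    // in for the extra HEAD request that makes the scheme costly).
    String headEtag(String path) {
        return etags.get(path);
    }

    // Step 2: perform the rename only if the source eTag still matches.
    // If the source is gone (e.g. an earlier attempt already succeeded),
    // the retry becomes a no-op instead of a spurious failure.
    boolean renameIfEtagMatches(String src, String dst, String expectedEtag) {
        if (!Objects.equals(etags.get(src), expectedEtag)) {
            return false;
        }
        etags.put(dst, etags.remove(src));
        return true;
    }
}
```

The key property: once the first attempt moves the file, a retried rename sees a missing (non-matching) source eTag and does nothing, instead of drawing a wrong conclusion from the destination's unchanged LMT.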
[jira] [Work logged] (HADOOP-17409) Remove S3Guard - no longer needed
[ https://issues.apache.org/jira/browse/HADOOP-17409?focusedWorklogId=681625=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681625 ] ASF GitHub Bot logged work on HADOOP-17409: --- Author: ASF GitHub Bot Created on: 15/Nov/21 17:12 Start Date: 15/Nov/21 17:12 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-969126236 testing s3 london, all good. removed ITestPartialRenamesDeletes test which was failing as the output of the processing of the multi object delete exception didn't match expectations. We don't use that feature in production any more, once the need to update the s3guard tables goes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681625) Time Spent: 2h 50m (was: 2h 40m) > Remove S3Guard - no longer needed > - > > Key: HADOOP-17409 > URL: https://issues.apache.org/jira/browse/HADOOP-17409 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > With Consistent S3, S3Guard is superfluous. > stop developing it and wean people off it as soon as they can. > Then we can worry about what to do in the code. It has gradually insinuated > its way through the layers, especially things like multi-object delete > handling (see HADOOP-17244). 
Things would be a lot simpler without it > This work is being done in the feature branch HADOOP-17409-remove-s3guard -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #3534: HADOOP-17409. Remove s3guard from S3A module
steveloughran commented on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-969126236 testing s3 london, all good. removed ITestPartialRenamesDeletes test which was failing as the output of the processing of the multi object delete exception didn't match expectations. We don't use that feature in production any more, once the need to update the s3guard tables goes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17409) Remove S3Guard - no longer needed
[ https://issues.apache.org/jira/browse/HADOOP-17409?focusedWorklogId=681623=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681623 ] ASF GitHub Bot logged work on HADOOP-17409: --- Author: ASF GitHub Bot Created on: 15/Nov/21 17:11 Start Date: 15/Nov/21 17:11 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-962095446 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681623) Time Spent: 2h 40m (was: 2.5h) > Remove S3Guard - no longer needed > - > > Key: HADOOP-17409 > URL: https://issues.apache.org/jira/browse/HADOOP-17409 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > With Consistent S3, S3Guard is superfluous. > stop developing it and wean people off it as soon as they can. > Then we can worry about what to do in the code. It has gradually insinuated > its way through the layers, especially things like multi-object delete > handling (see HADOOP-17244). Things would be a lot simpler without it > This work is being done in the feature branch HADOOP-17409-remove-s3guard -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3534: HADOOP-17409. Remove s3guard from S3A module
hadoop-yetus removed a comment on pull request #3534: URL: https://github.com/apache/hadoop/pull/3534#issuecomment-962095446 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan updated HADOOP-18011: --- Description: The ABFS driver has default connection and read timeout values of 30 seconds. For jobs that are time sensitive, the preference would be quick failure, with shorter HTTP connection and read timeouts. This Jira is created to enable config control over the default connection and read timeouts. New config names: fs.azure.http.connection.timeout fs.azure.http.read.timeout was: The ABFS driver has a default connection timeout value of 30 seconds. For jobs that are time sensitive, the preference would be quick failure and a shorter connection timeout. This Jira is created to enable config control over the default connection timeout. New config name: fs.azure.http.connection.timeout

> ABFS: Enable config control for default connection timeout
> ---
>
> Key: HADOOP-18011
> URL: https://issues.apache.org/jira/browse/HADOOP-18011
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
>
> The ABFS driver has default connection and read timeout values of 30 seconds.
> For jobs that are time sensitive, the preference would be quick failure, with
> shorter HTTP connection and read timeouts.
> This Jira is created to enable config control over the default connection and
> read timeouts.
> New config names:
> fs.azure.http.connection.timeout
> fs.azure.http.read.timeout

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18003) Add a method appendIfAbsent for CallerContext
[ https://issues.apache.org/jira/browse/HADOOP-18003?focusedWorklogId=681583=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681583 ] ASF GitHub Bot logged work on HADOOP-18003: --- Author: ASF GitHub Bot Created on: 15/Nov/21 15:56 Start Date: 15/Nov/21 15:56 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3644: URL: https://github.com/apache/hadoop/pull/3644#issuecomment-969052288 > Merged. Thanks for your contribution, @tomscut! Thanks @tasanuma for the merge. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681583) Time Spent: 2h 20m (was: 2h 10m) > Add a method appendIfAbsent for CallerContext > - > > Key: HADOOP-18003 > URL: https://issues.apache.org/jira/browse/HADOOP-18003 > Project: Hadoop Common > Issue Type: New Feature >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > As we discussed here [#3635.|#discussion_r746873078] > In some cases, when we need to add a _key:value_ to the {_}CallerContext{_}, > we need to check whether the _key_ already exists in the outer layer, which > is a bit of a hassle. To solve this problem, we can add a new method > {_}CallerContext#appendIfAbsent{_}. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3644: HADOOP-18003. Add a method appendIfAbsent for CallerContext
tomscut commented on pull request #3644: URL: https://github.com/apache/hadoop/pull/3644#issuecomment-969052288 > Merged. Thanks for your contribution, @tomscut! Thanks @tasanuma for the merge. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
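The idea behind {{appendIfAbsent}} can be sketched as follows (illustrative only; this is not Hadoop's actual CallerContext builder, and the separator and matching logic are simplified assumptions):

```java
// Simplified model of a caller-context builder that joins key:value pairs
// with commas and can skip a key that is already present.
class CallerContextSketch {
    private final StringBuilder ctx = new StringBuilder();

    CallerContextSketch append(String key, String value) {
        if (ctx.length() > 0) ctx.append(',');
        ctx.append(key).append(':').append(value);
        return this;
    }

    // Append key:value only when the key is not already in the context,
    // sparing callers the manual "did the outer layer already set this?"
    // check described in the JIRA. (Naive substring match: a key that is
    // a suffix of another key would false-positive in this sketch.)
    CallerContextSketch appendIfAbsent(String key, String value) {
        if (ctx.indexOf(key + ":") == -1) {
            append(key, value);
        }
        return this;
    }

    String build() {
        return ctx.toString();
    }
}
```

Usage: if an outer layer already appended `clientIp:10.0.0.1`, a later `appendIfAbsent("clientIp", …)` is a no-op, while `appendIfAbsent("clientPort", "8020")` still appends.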
[GitHub] [hadoop] szilard-nemeth closed pull request #3628: YARN-11001. Add docs on removing node-to-labels mapping
szilard-nemeth closed pull request #3628: URL: https://github.com/apache/hadoop/pull/3628 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] szilard-nemeth commented on pull request #3628: YARN-11001. Add docs on removing node-to-labels mapping
szilard-nemeth commented on pull request #3628: URL: https://github.com/apache/hadoop/pull/3628#issuecomment-968998397 Hi @manuzhang, Thanks for your contribution. Patch looks good to me; committed to trunk. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18010) some time delay (0.3s) for swebhdfs + kerberos + observer setting.
[ https://issues.apache.org/jira/browse/HADOOP-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan-Heng Hsiao resolved HADOOP-18010. --- Resolution: Duplicate More suitable to be tracked as an HDFS issue.

> some time delay (0.3s) for swebhdfs + kerberos + observer setting.
> --
>
> Key: HADOOP-18010
> URL: https://issues.apache.org/jira/browse/HADOOP-18010
> Project: Hadoop Common
> Issue Type: Bug
> Components: auth, hdfs-client
> Affects Versions: 3.3.1
> Environment: ubuntu 20.04
> hadoop 3.3.1
> openjdk-8
>
> Reporter: Chuan-Heng Hsiao
> Priority: Major
>
> Settings:
> 1 master namenode (A), 1 standby namenode (B), 1 observer namenode (C),
> following
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html]
> except that dfs.client.failover.observer.auto-msync-period is set to -1
> (no auto-msync).
>
> Unable to do curl --negotiate -u ':' 'https://:/webhdfs/v1/...',
> seemingly due to the following issue:
> https://issues.apache.org/jira/browse/HDFS-14443
> Using curl --negotiate -u ':' 'https://:/webhdfs/v1/...'
> can successfully get a 307 redirect with the corresponding Location, but got
> token (token for xxx HDFS_DELEGATION_TOKEN owner=xxx renewer=xxx
> masterKeyID=ooo) can't be found in cache
> if the URL is redirected within 300 ms.
>
> No issue when waiting for more than 300 ms before doing the redirect.
> No issue when changing (C) to Standby (no observers); the redirect then works
> within 10 ms.
>
> -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18010) some time delay (0.3s) for swebhdfs + kerberos + observer setting.
[ https://issues.apache.org/jira/browse/HADOOP-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443846#comment-17443846 ] Chuan-Heng Hsiao commented on HADOOP-18010: --- It looks like this issue is more suitable for hadoop-hdfs. Created the same issue at https://issues.apache.org/jira/browse/HDFS-16325

> some time delay (0.3s) for swebhdfs + kerberos + observer setting.
> --
>
> Key: HADOOP-18010
> URL: https://issues.apache.org/jira/browse/HADOOP-18010
> Project: Hadoop Common
> Issue Type: Bug
> Components: auth, hdfs-client
> Affects Versions: 3.3.1
> Environment: ubuntu 20.04
> hadoop 3.3.1
> openjdk-8
>
> Reporter: Chuan-Heng Hsiao
> Priority: Major
>
> Settings:
> 1 master namenode (A), 1 standby namenode (B), 1 observer namenode (C),
> following
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html]
> except that dfs.client.failover.observer.auto-msync-period is set to -1
> (no auto-msync).
>
> Unable to do curl --negotiate -u ':' 'https://:/webhdfs/v1/...',
> seemingly due to the following issue:
> https://issues.apache.org/jira/browse/HDFS-14443
> Using curl --negotiate -u ':' 'https://:/webhdfs/v1/...'
> can successfully get a 307 redirect with the corresponding Location, but got
> token (token for xxx HDFS_DELEGATION_TOKEN owner=xxx renewer=xxx
> masterKeyID=ooo) can't be found in cache
> if the URL is redirected within 300 ms.
>
> No issue when waiting for more than 300 ms before doing the redirect.
> No issue when changing (C) to Standby (no observers); the redirect then works
> within 10 ms.
>
> -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18006) maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms
[ https://issues.apache.org/jira/browse/HADOOP-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-18006: -- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > maven-enforcer-plugin's execution of banned-illegal-imports gets overridden > in child poms > - > > Key: HADOOP-18006 > URL: https://issues.apache.org/jira/browse/HADOOP-18006 > Project: Hadoop Common > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > When we specify any maven plugin with an execution tag in the parent as well as > child modules, the child module's plugin overrides the parent's. For instance, > when {{banned-illegal-imports}} is applied for any child module with only one > banned import (let's say {{Preconditions}}), then only that banned import > is covered by that child module, and all imports defined in the parent module (e.g. > Sets, Lists) are overridden and no longer applied. > After this > [commit|https://github.com/apache/hadoop/commit/62c86eaa0e539a4307ca794e0fcd502a77ebceb8], > the hadoop-hdfs module will not complain about {{Sets}} even if I import it from > the guava banned imports, but on the other hand, hadoop-yarn doesn't have > any child-level {{banned-illegal-imports}} defined, so yarn modules will fail > if the {{Sets}} guava import is used. > So going forward, it would be good to replace guava imports with Hadoop's own > imports module by module, and only at the end add a new entry to the parent pom's > {{banned-illegal-imports}} list. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
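[Editor's note] The override described above can be illustrated with a hypothetical child-module pom fragment. This is a sketch, not Hadoop's actual pom: the `RestrictImports` rule and the `combine.children` merge attribute are standard maven-enforcer / Maven features, but the exact coordinates and rule settings in Hadoop's build may differ.

```xml
<!-- Hypothetical child-module pom fragment. Redeclaring the
     banned-illegal-imports execution here replaces the parent's
     <bannedImports> list wholesale, so parent entries such as Sets and
     Lists stop being enforced for this module unless the child repeats
     them, or asks Maven to merge lists via combine.children="append". -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>banned-illegal-imports</id>
      <phase>process-sources</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <restrictImports implementation="de.skuzzle.enforcer.restrictimports.rule.RestrictImports">
            <!-- combine.children="append" on <bannedImports> would merge
                 these entries with the parent's list instead of replacing it -->
            <bannedImports>
              <bannedImport>com.google.common.base.Preconditions</bannedImport>
            </bannedImports>
          </restrictImports>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```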
[jira] [Work logged] (HADOOP-18006) maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms
[ https://issues.apache.org/jira/browse/HADOOP-18006?focusedWorklogId=681514=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681514 ] ASF GitHub Bot logged work on HADOOP-18006: --- Author: ASF GitHub Bot Created on: 15/Nov/21 13:57 Start Date: 15/Nov/21 13:57 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #3648: URL: https://github.com/apache/hadoop/pull/3648#issuecomment-968935647 Merged it. Thanks for your contribution, @virajjasani. Thanks for your review, @amahussein. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681514) Time Spent: 1h 20m (was: 1h 10m) > maven-enforcer-plugin's execution of banned-illegal-imports gets overridden > in child poms > - > > Key: HADOOP-18006 > URL: https://issues.apache.org/jira/browse/HADOOP-18006 > Project: Hadoop Common > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > When we specify any maven plugin with execution tag in the parent as well as > child modules, child module plugin overrides parent plugin. For instance, > when {{banned-illegal-imports}} is applied for any child module with only one > banned import (let’s say {{{}Preconditions{}}}), then only that banned import > is covered by that child module and all imports defined in parent module (e.g > Sets, Lists etc) are overridden and they are no longer applied. 
> After this > [commit|https://github.com/apache/hadoop/commit/62c86eaa0e539a4307ca794e0fcd502a77ebceb8], > the hadoop-hdfs module will not complain about {{Sets}} even if I import it from > the guava banned imports, but on the other hand, hadoop-yarn doesn't have > any child-level {{banned-illegal-imports}} defined, so yarn modules will fail > if the {{Sets}} guava import is used. > So going forward, it would be good to replace guava imports with Hadoop's own > imports module by module, and only at the end add a new entry to the parent pom's > {{banned-illegal-imports}} list. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18006) maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms
[ https://issues.apache.org/jira/browse/HADOOP-18006?focusedWorklogId=681513=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681513 ] ASF GitHub Bot logged work on HADOOP-18006: --- Author: ASF GitHub Bot Created on: 15/Nov/21 13:57 Start Date: 15/Nov/21 13:57 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #3648: URL: https://github.com/apache/hadoop/pull/3648 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681513) Time Spent: 1h 10m (was: 1h) > maven-enforcer-plugin's execution of banned-illegal-imports gets overridden > in child poms > - > > Key: HADOOP-18006 > URL: https://issues.apache.org/jira/browse/HADOOP-18006 > Project: Hadoop Common > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > When we specify any maven plugin with execution tag in the parent as well as > child modules, child module plugin overrides parent plugin. For instance, > when {{banned-illegal-imports}} is applied for any child module with only one > banned import (let’s say {{{}Preconditions{}}}), then only that banned import > is covered by that child module and all imports defined in parent module (e.g > Sets, Lists etc) are overridden and they are no longer applied. 
> After this > [commit|https://github.com/apache/hadoop/commit/62c86eaa0e539a4307ca794e0fcd502a77ebceb8], > the hadoop-hdfs module will not complain about {{Sets}} even if I import it from > the guava banned imports, but on the other hand, hadoop-yarn doesn't have > any child-level {{banned-illegal-imports}} defined, so yarn modules will fail > if the {{Sets}} guava import is used. > So going forward, it would be good to replace guava imports with Hadoop's own > imports module by module, and only at the end add a new entry to the parent pom's > {{banned-illegal-imports}} list. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on pull request #3648: HADOOP-18006. maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms
tasanuma commented on pull request #3648: URL: https://github.com/apache/hadoop/pull/3648#issuecomment-968935647 Merged it. Thanks for your contribution, @virajjasani. Thanks for your review, @amahussein. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma merged pull request #3648: HADOOP-18006. maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms
tasanuma merged pull request #3648: URL: https://github.com/apache/hadoop/pull/3648 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18003) Add a method appendIfAbsent for CallerContext
[ https://issues.apache.org/jira/browse/HADOOP-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma resolved HADOOP-18003. --- Fix Version/s: 3.4.0 Resolution: Fixed > Add a method appendIfAbsent for CallerContext > - > > Key: HADOOP-18003 > URL: https://issues.apache.org/jira/browse/HADOOP-18003 > Project: Hadoop Common > Issue Type: New Feature >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > As we discussed here [#3635.|#discussion_r746873078] > In some cases, when we need to add a _key:value_ to the {_}CallerContext{_}, > we need to check whether the _key_ already exists in the outer layer, which > is a bit of a hassle. To solve this problem, we can add a new method > {_}CallerContext#appendIfAbsent{_}. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
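[Editor's note] The appendIfAbsent idea from HADOOP-18003 can be sketched as below. This is a hypothetical, heavily simplified builder, not the real CallerContext.Builder in hadoop-common (which has its own separator handling and length limits); it only shows how the method spares callers the "check whether the key already exists in the outer layer" dance the Jira describes.

```java
// Simplified sketch of a CallerContext-style builder with appendIfAbsent.
class ContextBuilder {
    private static final String FIELD_SEPARATOR = ",";
    private final StringBuilder sb = new StringBuilder();

    // Unconditionally append a field to the context string.
    ContextBuilder append(String field) {
        if (sb.length() > 0) {
            sb.append(FIELD_SEPARATOR);
        }
        sb.append(field);
        return this;
    }

    ContextBuilder append(String key, String value) {
        return append(key + ":" + value);
    }

    // Append key:value only when the key is not already present,
    // so callers need not inspect the existing context themselves.
    ContextBuilder appendIfAbsent(String key, String value) {
        if (sb.indexOf(key + ":") == -1) {
            return append(key, value);
        }
        return this;
    }

    String build() {
        return sb.toString();
    }
}
```

Usage would look like `ctx.appendIfAbsent("clientIp", ip)`: a second call with the same key is a no-op rather than producing a duplicated field.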
[GitHub] [hadoop] hadoop-yetus commented on pull request #3659: HDFS-16323. DatanodeHttpServer doesn't require handler state map while retrieving filter handlers
hadoop-yetus commented on pull request #3659: URL: https://github.com/apache/hadoop/pull/3659#issuecomment-968926593 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 15s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 41s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 347m 29s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3659/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 455m 7s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3659/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3659 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux c29d9d04303d 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 81c1a76ea3b24bf05c7ab826ba133d07b830a068 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3659/2/testReport/ | | Max. process+thread count | 2198 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3659/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message
[jira] [Work logged] (HADOOP-18003) Add a method appendIfAbsent for CallerContext
[ https://issues.apache.org/jira/browse/HADOOP-18003?focusedWorklogId=681499=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681499 ] ASF GitHub Bot logged work on HADOOP-18003: --- Author: ASF GitHub Bot Created on: 15/Nov/21 13:45 Start Date: 15/Nov/21 13:45 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #3644: URL: https://github.com/apache/hadoop/pull/3644 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681499) Time Spent: 2h (was: 1h 50m) > Add a method appendIfAbsent for CallerContext > - > > Key: HADOOP-18003 > URL: https://issues.apache.org/jira/browse/HADOOP-18003 > Project: Hadoop Common > Issue Type: New Feature >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > As we discussed here [#3635.|#discussion_r746873078] > In some cases, when we need to add a _key:value_ to the {_}CallerContext{_}, > we need to check whether the _key_ already exists in the outer layer, which > is a bit of a hassle. To solve this problem, we can add a new method > {_}CallerContext#appendIfAbsent{_}. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18003) Add a method appendIfAbsent for CallerContext
[ https://issues.apache.org/jira/browse/HADOOP-18003?focusedWorklogId=681500=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681500 ] ASF GitHub Bot logged work on HADOOP-18003: --- Author: ASF GitHub Bot Created on: 15/Nov/21 13:45 Start Date: 15/Nov/21 13:45 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #3644: URL: https://github.com/apache/hadoop/pull/3644#issuecomment-968924601 Merged. Thanks for your contribution, @tomscut! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681500) Time Spent: 2h 10m (was: 2h) > Add a method appendIfAbsent for CallerContext > - > > Key: HADOOP-18003 > URL: https://issues.apache.org/jira/browse/HADOOP-18003 > Project: Hadoop Common > Issue Type: New Feature >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > As we discussed here [#3635.|#discussion_r746873078] > In some cases, when we need to add a _key:value_ to the {_}CallerContext{_}, > we need to check whether the _key_ already exists in the outer layer, which > is a bit of a hassle. To solve this problem, we can add a new method > {_}CallerContext#appendIfAbsent{_}. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on pull request #3644: HADOOP-18003. Add a method appendIfAbsent for CallerContext
tasanuma commented on pull request #3644: URL: https://github.com/apache/hadoop/pull/3644#issuecomment-968924601 Merged. Thanks for your contribution, @tomscut! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma merged pull request #3644: HADOOP-18003. Add a method appendIfAbsent for CallerContext
tasanuma merged pull request #3644: URL: https://github.com/apache/hadoop/pull/3644 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] GuoPhilipse opened a new pull request #3661: HDFS-16324. fix error log in BlockManagerSafeMode
GuoPhilipse opened a new pull request #3661: URL: https://github.com/apache/hadoop/pull/3661 ### Description of PR If `recheckInterval` is set to an invalid value, a warning is logged, but the message is not quite right; we can improve it. ### How was this patch tested? No test cases needed; this only updates a warning log message. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in
[ https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443809#comment-17443809 ] Brahma Reddy Battula commented on HADOOP-17996: --- [~prabhujoseph] and [~Sushma_28], thanks for reporting and working on this. IMO, this was just to track the re-login attempt so that too many retries can be avoided? Configuring *kerberosMinSecondsBeforeRelogin* with a low value will not work here if it's needed? After this fix, on failure will it continuously retry? > UserGroupInformation#unprotectedRelogin sets the last login time before > logging in > -- > > Key: HADOOP-17996 > URL: https://issues.apache.org/jira/browse/HADOOP-17996 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.3.1 >Reporter: Prabhu Joseph >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17996.001.patch > > > UserGroupInformation#unprotectedRelogin sets the last login time before > logging in. IPC#Client does reloginFromKeytab when there is a connection > reset failure from AD, which does a logout, sets the last login time to now, > and then tries to log in. The login also fails as it is not able to connect to AD. > Then reattempts do not happen as the kerberosMinSecondsBeforeRelogin check > fails. All Client and Server operations fail with *GSS initiate failed* > {code} > 2021-10-31 09:50:53,546 WARN ha.EditLogTailer - Unable to trigger a roll of > the active NN > java.util.concurrent.ExecutionException: > org.apache.hadoop.security.KerberosAuthException: DestHost:destPort > namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
Failed on local > exception: org.apache.hadoop.security.KerberosAuthException: Login failure > for user: nn/nameno...@example.com javax.security.auth.login.LoginException: > Connection reset > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712) > at > org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423) > Caused by: org.apache.hadoop.security.KerberosAuthException: > DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: > Login failure for user: nn/nameno...@example.com > javax.security.auth.login.LoginException: Connection reset > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501) > at org.apache.hadoop.ipc.Client.call(Client.java:1443) > at org.apache.hadoop.ipc.Client.call(Client.java:1353) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514) > at
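[Editor's note] The ordering problem described in HADOOP-17996 can be sketched as follows. This is a hypothetical, heavily simplified model (the names ReloginGate and hasSufficientTimeElapsed are illustrative, not the real UserGroupInformation code); it only shows why recording the last-login timestamp before the login attempt blocks retries when the login itself fails.

```java
// Sketch: a relogin gate guarded by a minimum interval between logins,
// mirroring the kerberosMinSecondsBeforeRelogin check from the Jira.
class ReloginGate {
    private long lastLoginMillis = 0;
    private final long minMillisBeforeRelogin;

    ReloginGate(long minMillisBeforeRelogin) {
        this.minMillisBeforeRelogin = minMillisBeforeRelogin;
    }

    boolean hasSufficientTimeElapsed(long nowMillis) {
        return nowMillis - lastLoginMillis >= minMillisBeforeRelogin;
    }

    // Buggy ordering: the timestamp is recorded even when login throws,
    // so a failed relogin starts the cool-down and blocks retries.
    void reloginTimestampFirst(Runnable login, long nowMillis) {
        lastLoginMillis = nowMillis;
        login.run();
    }

    // Fixed ordering: only a successful login starts the cool-down,
    // so a failed attempt (e.g. AD unreachable) can be retried at once.
    void reloginTimestampAfter(Runnable login, long nowMillis) {
        login.run();
        lastLoginMillis = nowMillis;
    }
}
```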
[jira] [Work logged] (HADOOP-17975) Fallback to simple auth does not work for a secondary DistributedFileSystem instance
[ https://issues.apache.org/jira/browse/HADOOP-17975?focusedWorklogId=681447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681447 ] ASF GitHub Bot logged work on HADOOP-17975: --- Author: ASF GitHub Bot Created on: 15/Nov/21 12:40 Start Date: 15/Nov/21 12:40 Worklog Time Spent: 10m Work Description: fapifta commented on a change in pull request #3658: URL: https://github.com/apache/hadoop/pull/3658#discussion_r749290889 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java ## @@ -1679,11 +1679,13 @@ private Connection getConnection(ConnectionId remoteId, private final boolean doPing; //do we need to send ping message private final int pingInterval; // how often sends ping to the server in msecs private String saslQop; // here for testing +private final AtomicBoolean fallbackToSimpleAuth; private final Configuration conf; // used to get the expected kerberos principal name ConnectionId(InetSocketAddress address, Class protocol, UserGroupInformation ticket, int rpcTimeout, - RetryPolicy connectionRetryPolicy, Configuration conf) { + RetryPolicy connectionRetryPolicy, Configuration conf, + AtomicBoolean fallbackToSimpleAuth) { Review comment: I have tried to simply use a boolean here, but it did not work, and I realized that we need to distinguish between connections based on the reference of the AtomicBoolean what Object.equals will do here. 
Here is why: If we pass in the boolean value of the AtomicBoolean, the event sequence is this: - the dfs client is created with the atomic boolean set to false - we get the connection based on the connection id and the boolean value false - we call setupIOStreams at first connect, which sets the AtomicBoolean to true - on the next call when we get a connection, we get another one, based on the boolean value true - the next client at initialization will create another AtomicBoolean, and then gets the connection associated with the boolean value false, and never initializes its own AtomicBoolean via setupIOStreams again. The situation is similar if we use the values of the AtomicBooleans in hashCode and equals. That is why I chose to compare based on Object.equals (which will use AtomicBoolean's equals, which falls back to Object.equals), and also the hashCode method of AtomicBoolean (which will also give back something based on Object.hashCode). With that, every distinct client with its own AtomicBoolean will get a distinct connection and will initialize the fallback properly via setupIOStreams. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 681447) Time Spent: 7h 10m (was: 7h) > Fallback to simple auth does not work for a secondary DistributedFileSystem > instance > > > Key: HADOOP-17975 > URL: https://issues.apache.org/jira/browse/HADOOP-17975 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: István Fajth >Assignee: István Fajth >Priority: Major > Labels: pull-request-available > Time Spent: 7h 10m > Remaining Estimate: 0h > > The following code snippet demonstrates what is necessary to cause a failure > in connection to a non secure cluster with fallback to SIMPLE auth allowed > from a secure cluster. > {code:java} > Configuration conf = new Configuration(); > conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true); > URI fsUri = new URI("hdfs://"); > conf.setBoolean("fs.hdfs.impl.disable.cache", true); > FileSystem fs = FileSystem.get(fsUri, conf); > FSDataInputStream src = fs.open(new Path("/path/to/a/file")); > FileOutputStream dst = new FileOutputStream(File.createTempFile("foo", > "bar")); > IOUtils.copyBytes(src, dst, 1024); > // The issue happens even if we re-enable cache at this point > //conf.setBoolean("fs.hdfs.impl.disable.cache", false); > // The issue does not happen when we close the first FileSystem object > // before creating the second. > //fs.close(); > FileSystem fs2 = FileSystem.get(fsUri, conf); > FSDataInputStream src2 = fs2.open(new Path("/path/to/a/file")); > FileOutputStream dst2 = new FileOutputStream(File.createTempFile("foo", > "bar")); >
[GitHub] [hadoop] fapifta commented on a change in pull request #3658: HADOOP-17975 Fallback to simple auth does not work for a secondary DistributedFileSystem instance
fapifta commented on a change in pull request #3658: URL: https://github.com/apache/hadoop/pull/3658#discussion_r749290889 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java ## @@ -1679,11 +1679,13 @@ private Connection getConnection(ConnectionId remoteId, private final boolean doPing; //do we need to send ping message private final int pingInterval; // how often sends ping to the server in msecs private String saslQop; // here for testing +private final AtomicBoolean fallbackToSimpleAuth; private final Configuration conf; // used to get the expected kerberos principal name ConnectionId(InetSocketAddress address, Class protocol, UserGroupInformation ticket, int rpcTimeout, - RetryPolicy connectionRetryPolicy, Configuration conf) { + RetryPolicy connectionRetryPolicy, Configuration conf, + AtomicBoolean fallbackToSimpleAuth) { Review comment: I have tried to simply use a boolean here, but it did not work, and I realized that we need to distinguish between connections based on the reference of the AtomicBoolean what Object.equals will do here. Here is why: If we pass in the boolean value of AtomicBoolean, then the event sequence is this: - dfs client created with atomic boolean set to false - we get the connection based on connection id, and the boolean value false - we call setupIOStreams at first connect, which sets the AtomicBoolean to true - at next call when we get a connection, we get an other one, based on the boolean value true - the next client at initialization will create an other AtomicBoolean, and then gets the connection associated with the boolean value false, and never initializes its own AtomicBoolean via setupIOStreams again The situation is similar also, if we use the value of the AtomicBooleans in the hashcode and equals. 
That is why I chose to compare based on Object.equals (which will use AtomicBoolean's equals, which falls back to Object.equals), and also the hashcode method of AtomicBoolean (which will also give back something based on the Object.hashcode). With that every distinct client with its own AtomicBoolean will get a distinct connection, and will initialize the fallback properly via setupIOStreams. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
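[Editor's note] The reference-based keying described in the review thread can be sketched like this. The ConnectionKey class below is hypothetical (the real org.apache.hadoop.ipc.Client.ConnectionId has many more fields); it only illustrates that AtomicBoolean inherits Object's equals/hashCode, so each client's flag object acts as a distinct map key even when the boolean values inside are equal.

```java
import java.util.Objects;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: key connections on the AtomicBoolean *reference*, not its value.
class ConnectionKey {
    private final String address;
    private final AtomicBoolean fallbackToSimpleAuth;

    ConnectionKey(String address, AtomicBoolean fallbackToSimpleAuth) {
        this.address = address;
        this.fallbackToSimpleAuth = fallbackToSimpleAuth;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ConnectionKey)) return false;
        ConnectionKey that = (ConnectionKey) o;
        // Reference comparison: AtomicBoolean does not override equals,
        // so each client's flag identifies its own connection.
        return address.equals(that.address)
            && fallbackToSimpleAuth == that.fallbackToSimpleAuth;
    }

    @Override
    public int hashCode() {
        // identityHashCode mirrors AtomicBoolean's inherited Object.hashCode.
        return Objects.hash(address,
            System.identityHashCode(fallbackToSimpleAuth));
    }
}
```

Two keys built from the same address but different flag objects are therefore unequal, even while both flags hold false, which is exactly what lets a second DistributedFileSystem instance run setupIOStreams and initialize its own fallback flag.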
[jira] [Updated] (HADOOP-18011) ABFS: Enable config control for default connection timeout
[ https://issues.apache.org/jira/browse/HADOOP-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sneha Vijayarajan updated HADOOP-18011:
---
Description:
The ABFS driver has a default connection timeout of 30 secs. For jobs that are time sensitive, the preference would be quick failure, with a shorter connection timeout. This Jira is created to enable config control over the default connection timeout.

New config name: fs.azure.http.connection.timeout

was:
The ABFS driver has a default connection timeout of 30 secs. For jobs that are time sensitive, the preference would be quick failure, with a shorter connection timeout. This Jira is created to enable config control over the default connection timeout.

New config name: fs.azure.connection.timeout

> ABFS: Enable config control for default connection timeout
> ---
>
> Key: HADOOP-18011
> URL: https://issues.apache.org/jira/browse/HADOOP-18011
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
>
> The ABFS driver has a default connection timeout of 30 secs. For jobs that
> are time sensitive, the preference would be quick failure, with a shorter
> connection timeout.
>
> This Jira is created to enable config control over the default connection
> timeout.
>
> New config name: fs.azure.http.connection.timeout

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
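Assuming the change ships as described, the new key would be set like any other ABFS option in core-site.xml. Only the key name below comes from the issue text; the value and its unit are illustrative guesses, since the issue does not state them:

```xml
<!-- Hypothetical site configuration; only the key name is from HADOOP-18011.
     The value shown is illustrative, not a documented default. -->
<property>
  <name>fs.azure.http.connection.timeout</name>
  <value>5000</value>
  <description>Connection timeout for ABFS HTTP connections. Lower values let
  time-sensitive jobs fail fast instead of waiting out the 30-second default.
  </description>
</property>
```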
[jira] [Work logged] (HADOOP-18002) abfs rename idempotency broken -remove recovery
[ https://issues.apache.org/jira/browse/HADOOP-18002?focusedWorklogId=681441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681441 ]

ASF GitHub Bot logged work on HADOOP-18002:
---
Author: ASF GitHub Bot
Created on: 15/Nov/21 12:11
Start Date: 15/Nov/21 12:11
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #3641:
URL: https://github.com/apache/hadoop/pull/3641#issuecomment-968849106

@mukund-thakur way too many changes since the original change went in -and that did rename and delete

Issue Time Tracking
---
Worklog Id: (was: 681441)
Time Spent: 1h 10m (was: 1h)

> abfs rename idempotency broken -remove recovery
> ---
>
> Key: HADOOP-18002
> URL: https://issues.apache.org/jira/browse/HADOOP-18002
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/azure
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> The rename idempotency logic of HADOOP-17105 is broken, as modtimes aren't
> updated on rename.
> Remove it, with the changes from the PR of HADOOP-17981.
> Also fix the delete recovery test after HADOOP-17934
[GitHub] [hadoop] steveloughran commented on pull request #3641: HADOOP-18002. abfs rename idempotency broken -remove recovery
steveloughran commented on pull request #3641:
URL: https://github.com/apache/hadoop/pull/3641#issuecomment-968849106

@mukund-thakur way too many changes since the original change went in -and that did rename and delete
[jira] [Work logged] (HADOOP-18002) abfs rename idempotency broken -remove recovery
[ https://issues.apache.org/jira/browse/HADOOP-18002?focusedWorklogId=681436&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-681436 ]

ASF GitHub Bot logged work on HADOOP-18002:
---
Author: ASF GitHub Bot
Created on: 15/Nov/21 12:01
Start Date: 15/Nov/21 12:01
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #3641:
URL: https://github.com/apache/hadoop/pull/3641#issuecomment-968841090

good idea..let me see. delete idempotency is easier...if the file isn't there, delete worked

Issue Time Tracking
---
Worklog Id: (was: 681436)
Time Spent: 1h (was: 50m)

> abfs rename idempotency broken -remove recovery
> ---
>
> Key: HADOOP-18002
> URL: https://issues.apache.org/jira/browse/HADOOP-18002
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/azure
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> The rename idempotency logic of HADOOP-17105 is broken, as modtimes aren't
> updated on rename.
> Remove it, with the changes from the PR of HADOOP-17981.
> Also fix the delete recovery test after HADOOP-17934
[GitHub] [hadoop] steveloughran commented on pull request #3641: HADOOP-18002. abfs rename idempotency broken -remove recovery
steveloughran commented on pull request #3641:
URL: https://github.com/apache/hadoop/pull/3641#issuecomment-968841090

good idea..let me see. delete idempotency is easier...if the file isn't there, delete worked
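The "if the file isn't there, delete worked" observation can be sketched as a toy recovery rule. All names below are hypothetical; this is a model of the idea, not the ABFS driver code:

```java
import java.io.FileNotFoundException;
import java.util.HashSet;
import java.util.Set;

// Toy model of delete-recovery: on a retry whose first attempt may already
// have executed, "file not found" is proof the earlier delete succeeded.
public class DeleteRecovery {
    private final Set<String> store = new HashSet<>();

    public void create(String path) {
        store.add(path);
    }

    private void rawDelete(String path) throws FileNotFoundException {
        if (!store.remove(path)) {
            throw new FileNotFoundException(path);
        }
    }

    /**
     * Delete with idempotent retry semantics: returns true when the path is
     * known to be gone. retryAfterLostResponse marks a retry whose first
     * attempt may already have run on the server before the response was lost.
     */
    public boolean deleteIdempotent(String path, boolean retryAfterLostResponse) {
        try {
            rawDelete(path);
            return true;
        } catch (FileNotFoundException e) {
            // Absence proves success only if an earlier attempt may have run;
            // otherwise it is a genuine "no such file" failure.
            return retryAfterLostResponse;
        }
    }

    public static void main(String[] args) {
        DeleteRecovery fs = new DeleteRecovery();
        fs.create("/data/part-0001");
        System.out.println(fs.deleteIdempotent("/data/part-0001", false)); // first attempt succeeds
        // Simulated retry after a timed-out response: the file is gone, so the
        // earlier attempt must have worked and the retry reports success.
        System.out.println(fs.deleteIdempotent("/data/part-0001", true));
    }
}
```

This is why delete recovery is "easier" than rename recovery: absence of the source is unambiguous for delete, while rename recovery needs extra evidence (such as modtimes) to tell a completed rename from a never-started one.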
[GitHub] [hadoop] ferhui commented on pull request #3638: HDFS-16313. Add metrics for each subcluster
ferhui commented on pull request #3638:
URL: https://github.com/apache/hadoop/pull/3638#issuecomment-968774697

@goiri Do you have any other comments?
[GitHub] [hadoop] ferhui commented on pull request #3654: HDFS-16320. Datanode retrieve slownode information from NameNode
ferhui commented on pull request #3654:
URL: https://github.com/apache/hadoop/pull/3654#issuecomment-968767997

@aajisaka @Hexiaoqiao @jojochuang Would you give any advice?
[GitHub] [hadoop] ferhui commented on pull request #3654: HDFS-16320. Datanode retrieve slownode information from NameNode
ferhui commented on pull request #3654:
URL: https://github.com/apache/hadoop/pull/3654#issuecomment-968766994

@symious Thanks. Right now only the namenode knows which datanode is slow, and it will avoid choosing that node for subsequent requests. But we have no way to handle the write pipeline. I think it's a good idea, go ahead.
[GitHub] [hadoop] TiborKovacsCloudera opened a new pull request #3660: YARN-10982
TiborKovacsCloudera opened a new pull request #3660:
URL: https://github.com/apache/hadoop/pull/3660

Use QueuePath class where it's reasonable.
[GitHub] [hadoop] haiyang1987 commented on pull request #3651: HDFS-16314. Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
haiyang1987 commented on pull request #3651:
URL: https://github.com/apache/hadoop/pull/3651#issuecomment-968674241

@ferhui @tomscut I submitted some code. Could you help review? Thank you very much.
[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in
[ https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17443619#comment-17443619 ]

Prabhu Joseph commented on HADOOP-17996:
---
Thanks [~Sushma_28] for the patch. The patch looks good to me, +1. Will commit it tomorrow if there are no other comments.

> UserGroupInformation#unprotectedRelogin sets the last login time before
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 3.3.1
> Reporter: Prabhu Joseph
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HADOOP-17996.001.patch
>
> UserGroupInformation#unprotectedRelogin sets the last login time before
> logging in. The IPC Client does reloginFromKeytab when there is a connection
> reset failure from AD, which does a logout, sets the last login time to now,
> and then tries to log in. The login also fails because it cannot connect to
> AD. The retry then does not happen because the kerberosMinSecondsBeforeRelogin
> check fails. All client and server operations fail with *GSS initiate failed*.
> {code}
> 2021-10-31 09:50:53,546 WARN ha.EditLogTailer - Unable to trigger a roll of
> the active NN
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.security.KerberosAuthException: DestHost:destPort
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0.
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: Login failure for user: nn/nameno...@example.com javax.security.auth.login.LoginException: Connection reset
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException: DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0.
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: Login failure for user: nn/nameno...@example.com javax.security.auth.login.LoginException: Connection reset
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
> at org.apache.hadoop.ipc.Client.call(Client.java:1443)
> at org.apache.hadoop.ipc.Client.call(Client.java:1353)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at
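The ordering failure the HADOOP-17996 description walks through can be modeled in isolation: recording the last-login time before the login attempt means a failed login still pushes the next allowed relogin into the future. A hedged sketch follows; all names are hypothetical and this is not the UserGroupInformation code itself:

```java
// Toy model of the relogin-ordering bug: a timestamp written before the
// (possibly failing) login attempt blocks retries via the minimum-interval
// check, while writing it only after success leaves retries allowed.
public class ReloginModel {
    static final long MIN_SECONDS_BEFORE_RELOGIN = 60;

    long lastLoginSeconds = 0;   // 0 means "never logged in"
    long nowSeconds = 1000;      // simulated clock

    boolean hasSufficientTimeElapsed() {
        return nowSeconds - lastLoginSeconds >= MIN_SECONDS_BEFORE_RELOGIN;
    }

    /** Buggy ordering: timestamp first, then the (possibly failing) login. */
    void buggyRelogin(boolean loginSucceeds) {
        lastLoginSeconds = nowSeconds; // recorded unconditionally
        if (!loginSucceeds) {
            return; // login threw, but the timestamp is already set
        }
    }

    /** Fixed ordering: record the timestamp only after a successful login. */
    void fixedRelogin(boolean loginSucceeds) {
        if (loginSucceeds) {
            lastLoginSeconds = nowSeconds;
        }
    }

    public static void main(String[] args) {
        ReloginModel buggy = new ReloginModel();
        buggy.buggyRelogin(false);           // AD unreachable: login fails
        buggy.nowSeconds += 10;              // retry 10s later
        System.out.println(buggy.hasSufficientTimeElapsed()); // false: retry blocked

        ReloginModel fixed = new ReloginModel();
        fixed.fixedRelogin(false);
        fixed.nowSeconds += 10;
        System.out.println(fixed.hasSufficientTimeElapsed()); // true: retry allowed
    }
}
```

In the buggy ordering every failed relogin re-arms the minimum-interval check, so once AD is briefly unreachable the client can stay logged out indefinitely, which matches the repeated *GSS initiate failed* errors in the log above.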