[GitHub] [hadoop] prasad-acit opened a new pull request, #4162: HDFS-16526. Add metrics for slow DataNode
prasad-acit opened a new pull request, #4162: URL: https://github.com/apache/hadoop/pull/4162

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on a diff in pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
tasanuma commented on code in PR #4032: URL: https://github.com/apache/hadoop/pull/4032#discussion_r847973070

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java:

@@ -248,13 +251,22 @@ public void run() {
           pendingWorkForDirectory.get(startINode);
       if (dirPendingWorkInfo != null
           && dirPendingWorkInfo.isDirWorkDone()) {
-        ctxt.removeSPSHint(startINode);
+        try {
+          ctxt.removeSPSHint(startINode);
+        } catch (FileNotFoundException e) {
+          // ignore if the file doesn't already exist
+          startINode = null;
+        }
         pendingWorkForDirectory.remove(startINode);
       }
     }
     startINode = null; // Current inode successfully scanned.
   }
 } catch (Throwable t) {
+  retryCount++;
+  if (retryCount >= 3) {
+    startINode = null;
+  }

Review Comment:
@liubingxing
- Let's define the constant of the max retry count (`private static final int MAX_RETRY_COUNT = 3;`) in `SPSPathIdProcessor`.
- How about logging a message when skipping the inode?
```suggestion
        retryCount++;
        if (retryCount >= MAX_RETRY_COUNT) {
          LOG.warn("Skipping this inode {} due to too many retries.", startINode);
          startINode = null;
        }
```
- And I think it's better to move the retry logic to the end of the catch block.
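The capped-retry pattern suggested in this review can be sketched as a standalone snippet. The class and method names below are illustrative, not the actual `SPSPathIdProcessor` code; only `MAX_RETRY_COUNT` and the skip-on-exhaustion behavior mirror the suggestion:

```java
// Minimal sketch of a capped-retry loop: keep the current inode for another
// attempt until the retry budget is spent, then skip it so the scanner thread
// cannot loop forever on the same path id. RetrySketch/handleFailure are
// hypothetical names for illustration only.
class RetrySketch {
    private static final int MAX_RETRY_COUNT = 3;

    // Returns the inode id to retry on the next iteration, or null to skip it.
    static Long handleFailure(Long startINode, int retryCount) {
        if (retryCount >= MAX_RETRY_COUNT) {
            // In the real code this would be a LOG.warn(...) as suggested.
            System.err.println("Skipping inode " + startINode
                + " after " + retryCount + " retries.");
            return null; // give up; move on to the next path id
        }
        return startINode; // keep it for another attempt
    }

    public static void main(String[] args) {
        // Below the cap the inode is retained; at the cap it is dropped.
        assert handleFailure(42L, 1) == 42L;
        assert handleFailure(42L, 3) == null;
    }
}
```

Placing this at the end of the catch block, as the reviewer suggests, keeps the retry bookkeeping in one place regardless of which statement threw.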
[GitHub] [hadoop] Hexiaoqiao commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException
Hexiaoqiao commented on PR #4077: URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1095956053

Thanks @cndaimin for your great catch here. Would you mind adding a new unit test to cover this case? Thanks.
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #4148: HDFS-16531. Avoid setReplication writing an edit record if old replication equals the new value
Hexiaoqiao commented on code in PR #4148: URL: https://github.com/apache/hadoop/pull/4148#discussion_r847921761

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:

@@ -2466,11 +2466,12 @@ boolean setReplication(final String src, final short replication)
       logAuditEvent(false, operationName, src);
       throw e;
     }
-    if (success) {
+    if (status == FSDirAttrOp.SetRepStatus.SUCCESS) {
       getEditLog().logSync();
-      logAuditEvent(true, operationName, src);
     }
-    return success;
+    logAuditEvent(status != FSDirAttrOp.SetRepStatus.INVALID,

Review Comment:
IMO, this does not change the prior logging, except for an additional false entry when setReplication fails:
a. if status == FSDirAttrOp.SetRepStatus.SUCCESS, it logs true, since `status != FSDirAttrOp.SetRepStatus.INVALID`;
b. if status == FSDirAttrOp.SetRepStatus.UNCHANGED, it also logs true, which is the same as before;
c. if status == FSDirAttrOp.SetRepStatus.INVALID, it logs false, which is additional compared to the previous logic.
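The three-way mapping described above (SUCCESS and UNCHANGED audit as true, only INVALID audits as false) can be checked with a minimal sketch. The enum values mirror the `FSDirAttrOp.SetRepStatus` names in the diff, while `auditSuccess` is a hypothetical helper standing in for the boolean passed to `logAuditEvent`, not FSNamesystem code:

```java
// Sketch of the audit-log condition in the patch: the audit event is logged
// as a success for every status except INVALID. Only the enum value names
// come from the diff; the class and helper are illustrative.
class AuditMappingSketch {
    enum SetRepStatus { SUCCESS, UNCHANGED, INVALID }

    // Mirrors `status != FSDirAttrOp.SetRepStatus.INVALID` in the patch.
    static boolean auditSuccess(SetRepStatus status) {
        return status != SetRepStatus.INVALID;
    }

    public static void main(String[] args) {
        assert auditSuccess(SetRepStatus.SUCCESS);   // edit log is also synced
        assert auditSuccess(SetRepStatus.UNCHANGED); // no edit record written
        assert !auditSuccess(SetRepStatus.INVALID);  // new false audit entry
    }
}
```

Note that only SUCCESS additionally triggers `getEditLog().logSync()`, which is the whole point of the PR: an unchanged replication value no longer writes an edit record.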
[GitHub] [hadoop] liubingxing commented on a diff in pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
liubingxing commented on code in PR #4032: URL: https://github.com/apache/hadoop/pull/4032#discussion_r847908964

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java:

@@ -248,13 +250,18 @@ public void run() {
           pendingWorkForDirectory.get(startINode);
       if (dirPendingWorkInfo != null
           && dirPendingWorkInfo.isDirWorkDone()) {
-        ctxt.removeSPSHint(startINode);
         pendingWorkForDirectory.remove(startINode);
+        ctxt.removeSPSHint(startINode);

Review Comment:
> @liubingxing I still prefer to catch the FileNotFoundException here. What do you think?

Sorry @tasanuma, I misunderstood your meaning here. I updated the code, please take a look.
[GitHub] [hadoop] jojochuang commented on a diff in pull request #4148: HDFS-16531. Avoid setReplication writing an edit record if old replication equals the new value
jojochuang commented on code in PR #4148: URL: https://github.com/apache/hadoop/pull/4148#discussion_r847906125

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:

@@ -2466,11 +2466,12 @@ boolean setReplication(final String src, final short replication)
       logAuditEvent(false, operationName, src);
       throw e;
     }
-    if (success) {
+    if (status == FSDirAttrOp.SetRepStatus.SUCCESS) {
       getEditLog().logSync();
-      logAuditEvent(true, operationName, src);
     }
-    return success;
+    logAuditEvent(status != FSDirAttrOp.SetRepStatus.INVALID,

Review Comment:
Prior to this change, on success it logged true. Now if status == FSDirAttrOp.SetRepStatus.SUCCESS, does it log false?
[GitHub] [hadoop] tasanuma commented on a diff in pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
tasanuma commented on code in PR #4032: URL: https://github.com/apache/hadoop/pull/4032#discussion_r847892437

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java:

@@ -248,13 +250,18 @@ public void run() {
           pendingWorkForDirectory.get(startINode);
       if (dirPendingWorkInfo != null
           && dirPendingWorkInfo.isDirWorkDone()) {
-        ctxt.removeSPSHint(startINode);
         pendingWorkForDirectory.remove(startINode);
+        ctxt.removeSPSHint(startINode);

Review Comment:
@liubingxing Thank you for updating the PR. I think we can keep the retry logic implemented before while catching the FileNotFoundException here.
[GitHub] [hadoop] tasanuma commented on a diff in pull request #4138: HDFS-16479. EC: NameNode should not send a reconstruction work when the source datanodes are insufficient
tasanuma commented on code in PR #4138: URL: https://github.com/apache/hadoop/pull/4138#discussion_r847876025

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java:

@@ -852,6 +852,101 @@ public void testChooseSrcDNWithDupECInDecommissioningNode() throws Exception {
         0, numReplicas.redundantInternalBlocks());
   }

+  @Test
+  public void testSkipReconstructionWithManyBusyNodes() {
+    long blockId = -9223372036854775776L; // real ec block id
+    // RS-3-2 EC policy
+    ErasureCodingPolicy ecPolicy =
+        SystemErasureCodingPolicies.getPolicies().get(1);
+
+    // striped blockInfo: 3 data blocks + 2 parity blocks
+    Block aBlock = new Block(blockId,
+        ecPolicy.getCellSize() * ecPolicy.getNumDataUnits(), 0);
+    BlockInfoStriped aBlockInfoStriped = new BlockInfoStriped(aBlock, ecPolicy);
+
+    // create 4 storageInfo, which means 1 block is missing
+    DatanodeStorageInfo ds1 = DFSTestUtil.createDatanodeStorageInfo(
+        "storage1", "1.1.1.1", "rack1", "host1");
+    DatanodeStorageInfo ds2 = DFSTestUtil.createDatanodeStorageInfo(
+        "storage2", "2.2.2.2", "rack2", "host2");
+    DatanodeStorageInfo ds3 = DFSTestUtil.createDatanodeStorageInfo(
+        "storage3", "3.3.3.3", "rack3", "host3");
+    DatanodeStorageInfo ds4 = DFSTestUtil.createDatanodeStorageInfo(
+        "storage4", "4.4.4.4", "rack4", "host4");
+
+    // link block with storage
+    aBlockInfoStriped.addStorage(ds1, aBlock);
+    aBlockInfoStriped.addStorage(ds2, new Block(blockId + 1, 0, 0));
+    aBlockInfoStriped.addStorage(ds3, new Block(blockId + 2, 0, 0));
+    aBlockInfoStriped.addStorage(ds4, new Block(blockId + 3, 0, 0));
+
+    addEcBlockToBM(blockId, ecPolicy);
+    aBlockInfoStriped.setBlockCollectionId(mockINodeId);
+
+    // reconstruction should be scheduled
+    BlockReconstructionWork work = bm.scheduleReconstruction(aBlockInfoStriped, 3);
+    assertNotNull(work);
+
+    // simulate the 2 nodes reach maxReplicationStreams
+    for (int i = 0; i < bm.maxReplicationStreams; i++) {
+      ds3.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets();
+      ds4.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets();
+    }
+
+    // reconstruction should be skipped since the number of non-busy nodes are not enough
+    work = bm.scheduleReconstruction(aBlockInfoStriped, 3);
+    assertNull(work);
+  }
+
+  @Test
+  public void testSkipReconstructionWithManyBusyNodes2() {
+    long blockId = -9223372036854775776L; // real ec block id
+    // RS-3-2 EC policy
+    ErasureCodingPolicy ecPolicy =
+        SystemErasureCodingPolicies.getPolicies().get(1);
+
+    // striped blockInfo: 2 data blocks + 2 parity blocks
+    Block aBlock = new Block(blockId,
+        ecPolicy.getCellSize() * (ecPolicy.getNumDataUnits() - 1), 0);
+    BlockInfoStriped aBlockInfoStriped = new BlockInfoStriped(aBlock, ecPolicy);

Review Comment:
I updated the variable name. I want to keep the comment to clarify the difference between `testSkipReconstructionWithManyBusyNodes` and `testSkipReconstructionWithManyBusyNodes2`.
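The rule these tests exercise can be reduced to a small arithmetic check. This is a sketch under the assumption that EC reconstruction should only be scheduled when at least `numDataUnits` live sources are not at their replication-stream limit; the class and method names are illustrative, not `BlockManager` API:

```java
// Sketch of the source-sufficiency check: with RS-3-2, a block group needs at
// least 3 usable (live and non-busy) internal blocks to be reconstructable.
// EcSourceCheckSketch/shouldScheduleReconstruction are hypothetical names.
class EcSourceCheckSketch {
    // dataUnits: data blocks required by the policy (3 for RS-3-2).
    // liveSources: live internal blocks of the group.
    // busySources: of those, how many have hit maxReplicationStreams.
    static boolean shouldScheduleReconstruction(
            int dataUnits, int liveSources, int busySources) {
        return liveSources - busySources >= dataUnits;
    }

    public static void main(String[] args) {
        // First test above: RS-3-2, 4 live sources, none busy -> schedule.
        assert shouldScheduleReconstruction(3, 4, 0);
        // After ds3/ds4 hit maxReplicationStreams: 2 usable < 3 -> skip.
        assert !shouldScheduleReconstruction(3, 4, 2);
    }
}
```

This matches the two phases of `testSkipReconstructionWithManyBusyNodes`: the first `scheduleReconstruction` call returns work, and the call after saturating two source nodes returns null.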
[GitHub] [hadoop] tasanuma commented on pull request #4138: HDFS-16479. EC: NameNode should not send a reconstruction work when the source datanodes are insufficient
tasanuma commented on PR #4138: URL: https://github.com/apache/hadoop/pull/4138#issuecomment-1095814107

@ayushtkn Thanks for your reviews. I updated the PR to address your comments.
[GitHub] [hadoop] singer-bin commented on pull request #4122: HDFS-16525.System.err should be used when error occurs in multiple methods in DFSAdmin class
singer-bin commented on PR #4122: URL: https://github.com/apache/hadoop/pull/4122#issuecomment-1095802817

OK, thanks @ayushtkn for the review.
[GitHub] [hadoop] tomscut commented on a diff in pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation
tomscut commented on code in PR #4127: URL: https://github.com/apache/hadoop/pull/4127#discussion_r847843114

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:

@@ -1380,8 +1437,9 @@ private static boolean isExpectedValue(Object expectedValue, Object value) {
     final CallerContext originContext = CallerContext.getCurrent();
     for (final T location : locations) {
       String nsId = location.getNameserviceId();
+      boolean isObserverRead = observerReadEnabled && isReadCall(m);
       final List namenodes =
-          getNamenodesForNameservice(nsId);
+          msync(nsId, ugi, isObserverRead);

Review Comment:
> @tomscut for "Here's how we do it.", is there a link you meant to attach?

I mean, in our cluster, we ran into this problem, and this is how we solved it.

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java:

@@ -79,6 +79,8 @@
   String DFS_NAMENODE_HTTPS_ADDRESS_KEY = "dfs.namenode.https-address";
   String DFS_HA_NAMENODES_KEY_PREFIX = "dfs.ha.namenodes";
   int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020;
+  String DFS_OBSERVER_READ_ENABLE = "dfs.observer.read.enable";
+  boolean DFS_OBSERVER_READ_ENABLE_DEFAULT = true;

Review Comment:
Thank you for your explanation. I understand.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4161: remove explicit dependency on jackson 1
hadoop-yetus commented on PR #4161: URL: https://github.com/apache/hadoop/pull/4161#issuecomment-1095670692

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +0 :ok: | mvndep | 15m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 15s | | trunk passed |
| +1 :green_heart: | compile | 25m 24s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 21m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | mvnsite | 4m 1s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 24s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 44s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | shadedclient | 125m 49s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 7m 39s | | the patch passed |
| +1 :green_heart: | compile | 24m 17s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 24m 17s | | the patch passed |
| +1 :green_heart: | compile | 21m 17s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 21m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 3m 56s | | the patch passed |
| +1 :green_heart: | xml | 0m 6s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 3m 15s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 45s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | shadedclient | 31m 58s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 0m 26s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 17m 43s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 51s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | unit | 0m 47s | | hadoop-resourceestimator in the patch passed. |
| +1 :green_heart: | unit | 0m 28s | | hadoop-client-minicluster in the patch passed. |
| +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. |
| | | 241m 52s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4161/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4161 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml |
| uname | Linux c5068fe4dabf 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0b20978eb14da4e3a37b12d0dfdbe5d22c19639f |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4161/1/testReport/ |
| Max. process+thread count | 2607 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-tools/hadoop-resourceestimator hadoop-client-modules/hadoop-client-minicluster U: . |
| Console output |
[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=755397=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755397 ]

ASF GitHub Bot logged work on HADOOP-15327:
---
Author: ASF GitHub Bot
Created on: 11/Apr/22 19:28
Start Date: 11/Apr/22 19:28
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #3259: URL: https://github.com/apache/hadoop/pull/3259#issuecomment-1095473597
[GitHub] [hadoop] hadoop-yetus commented on pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4
hadoop-yetus commented on PR #3259: URL: https://github.com/apache/hadoop/pull/3259#issuecomment-1095473597

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +0 :ok: | mvndep | 15m 55s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 12s | | trunk passed |
| +1 :green_heart: | compile | 22m 57s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 20m 15s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 3m 48s | | trunk passed |
| +1 :green_heart: | mvnsite | 5m 24s | | trunk passed |
| +1 :green_heart: | javadoc | 4m 18s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 57s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 39s | | branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 20m 54s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 5m 56s | | the patch passed |
| +1 :green_heart: | compile | 22m 14s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 22m 14s | [/results-compile-javac-root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 generated 1 new + 1814 unchanged - 0 fixed = 1815 total (was 1814) |
| +1 :green_heart: | compile | 20m 3s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| -1 :x: | javac | 20m 2s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 1686 unchanged - 0 fixed = 1687 total (was 1686) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/blanks-eol.txt) | The patch has 25 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 3m 37s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 84 new + 132 unchanged - 12 fixed = 216 total (was 144) |
| +1 :green_heart: | mvnsite | 5m 25s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 4m 11s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 4m 0s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| -1 :x: | spotbugs | 4m 25s | [/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.html) | hadoop-mapreduce-project/hadoop-mapreduce-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 :x: | spotbugs | 1m 47s | [/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/6/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 1 new + 0 unchanged - 0 fixed = 1
[GitHub] [hadoop] hadoop-yetus commented on pull request #4141: HDFS-16534. Split FsDatasetImpl from block pool locks to volume grain locks.
hadoop-yetus commented on PR #4141: URL: https://github.com/apache/hadoop/pull/4141#issuecomment-1095460870

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 35s | | trunk passed |
| +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 5s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 33s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 55s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 31s | | the patch passed |
| +1 :green_heart: | compile | 1m 30s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 30s | | the patch passed |
| +1 :green_heart: | compile | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 22s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 58s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 3s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 241m 57s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 350m 24s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4141/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4141 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ae792b971a1e 4.15.0-161-generic #169-Ubuntu SMP Fri Oct 15 13:41:54 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6f4ffd611c6d58576afa921de67dad35a970d0bb |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4141/3/testReport/ |
| Max. process+thread count | 3225 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4141/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] pjfanning opened a new pull request, #4161: remove explicit dependency on jackson 1
pjfanning opened a new pull request, #4161: URL: https://github.com/apache/hadoop/pull/4161 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18198) Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
[ https://issues.apache.org/jira/browse/HADOOP-18198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18198:

Description:
Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
* CVE fixes in Hadoop source
* CVE fixes in dependencies we know of
* replacement of log4j 1.2.17 to reload4j
* some changes which shipped in hadoop 3.2.3 for consistency

This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes. The next release of branch-3.3 will be numbered hadoop-3.4; updating maven versions and JIRA fix versions is part of this release process.

was:
Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
* CVE fixes in Hadoop source
* CVE fixes in dependencies
* replacement of log4j 1.2.17 to reload4j
* some changes which shipped in hadoop 3.2.3 for consistency

This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes. The next release of branch-3.3 will be numbered hadoop-3.4; updating maven versions and JIRA fix versions is part of this release process.

> Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
> -
>
> Key: HADOOP-18198
> URL: https://issues.apache.org/jira/browse/HADOOP-18198
> Project: Hadoop Common
> Issue Type: Task
> Components: build
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
> * CVE fixes in Hadoop source
> * CVE fixes in dependencies we know of
> * replacement of log4j 1.2.17 to reload4j
> * some changes which shipped in hadoop 3.2.3 for consistency
>
> This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes.
> The next release of branch-3.3 will be numbered hadoop-3.4; updating maven
> versions and JIRA fix versions is part of this release process.
-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18198) Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
[ https://issues.apache.org/jira/browse/HADOOP-18198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18198:

Summary: Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes (was: Release Hadoop 3.3.3: hadoop-3.3.2 with CVE fixes)

> Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
> -
>
> Key: HADOOP-18198
> URL: https://issues.apache.org/jira/browse/HADOOP-18198
> Project: Hadoop Common
> Issue Type: Task
> Components: build
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
> * CVE fixes in Hadoop source
> * CVE fixes in dependencies
> * replacement of log4j 1.2.17 to reload4j
> * some changes which shipped in hadoop 3.2.3 for consistency
>
> This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes.
> The next release of branch-3.3 will be numbered hadoop-3.4; updating maven
> versions and JIRA fix versions is part of this release process.

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18198) Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
[ https://issues.apache.org/jira/browse/HADOOP-18198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18198:

Description:
Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
* CVE fixes in Hadoop source
* CVE fixes in dependencies we know of
* replacement of log4j 1.2.17 to reload4j
* some changes which shipped in hadoop 3.2.3 for consistency

This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes. The next release of branch-3.3 will be numbered hadoop-3.4; updating maven versions and JIRA fix versions is part of this release process.

The changes here are already in branch 3.2.4; this completes the set

was:
Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
* CVE fixes in Hadoop source
* CVE fixes in dependencies we know of
* replacement of log4j 1.2.17 to reload4j
* some changes which shipped in hadoop 3.2.3 for consistency

This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes. The next release of branch-3.3 will be numbered hadoop-3.4; updating maven versions and JIRA fix versions is part of this release process.

> Release Hadoop 3.3.3: hadoop-3.3.2 with somefixes
> -
>
> Key: HADOOP-18198
> URL: https://issues.apache.org/jira/browse/HADOOP-18198
> Project: Hadoop Common
> Issue Type: Task
> Components: build
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with
> * CVE fixes in Hadoop source
> * CVE fixes in dependencies we know of
> * replacement of log4j 1.2.17 to reload4j
> * some changes which shipped in hadoop 3.2.3 for consistency
>
> This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes.
> The next release of branch-3.3 will be numbered hadoop-3.4; updating maven
> versions and JIRA fix versions is part of this release process.
>
> The changes here are already in branch 3.2.4; this completes the set

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
hadoop-yetus commented on PR #4032: URL: https://github.com/apache/hadoop/pull/4032#issuecomment-1095345044

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 37s | | trunk passed |
| +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 4s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 28s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 31s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 30s | | the patch passed |
| +1 :green_heart: | compile | 1m 26s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 26s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 20s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 57s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 229m 16s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 338m 34s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4032 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 4fa62a7018a1 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 06a71384d7e9c61e96b10acac61eda158073f6a1 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/3/testReport/ |
| Max. process+thread count | 3682 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block
hadoop-yetus commented on PR #4104: URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1095344815

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 2s | | trunk passed |
| +1 :green_heart: | compile | 5m 59s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 5m 44s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 30s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 16s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 59s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 45s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 5s | | the patch passed |
| +1 :green_heart: | compile | 5m 54s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 5m 54s | | the patch passed |
| +1 :green_heart: | compile | 5m 37s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 5m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 5s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 10s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 0s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 47s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 29s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 23s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 228m 54s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 369m 3s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4104 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 480a7fa0fba0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2ce6ac6e6c7161fa9dd6d84bfb3645c4127ae533 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/5/testReport/ |
| Max. process+thread count | 3437 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on
[GitHub] [hadoop] hadoop-yetus commented on pull request #4160: HDFS-16537.Fix oev decode xml error
hadoop-yetus commented on PR #4160: URL: https://github.com/apache/hadoop/pull/4160#issuecomment-1095317778

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 34s | | trunk passed |
| +1 :green_heart: | compile | 1m 35s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 39s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 57s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 26s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 31s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 31s | | the patch passed |
| +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 19s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 29s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 28s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 5s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 241m 59s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. |
| | | 359m 42s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4160/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4160 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 52ae822bd966 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3276544a8d8f0cf050c9d60c74e222316f8235a0 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4160/1/testReport/ |
| Max. process+thread count | 3320 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4160/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about
[jira] [Work logged] (HADOOP-18196) Remove replace-guava from replacer plugin
[ https://issues.apache.org/jira/browse/HADOOP-18196?focusedWorklogId=755322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755322 ]

ASF GitHub Bot logged work on HADOOP-18196:
---
Author: ASF GitHub Bot
Created on: 11/Apr/22 17:02
Start Date: 11/Apr/22 17:02
Worklog Time Spent: 10m

Work Description: virajjasani commented on PR #4152: URL: https://github.com/apache/hadoop/pull/4152#issuecomment-1095307329

Thank you for the review @steveloughran. Could you please help merge this PR? This change is only meant for trunk, no backport required.

Issue Time Tracking
---
Worklog Id: (was: 755322)
Time Spent: 0.5h (was: 20m)

> Remove replace-guava from replacer plugin
> -
>
> Key: HADOOP-18196
> URL: https://issues.apache.org/jira/browse/HADOOP-18196
> Project: Hadoop Common
> Issue Type: Task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> While running the build, realized that all replacer plugin executions run
> only after "banned-illegal-imports" enforcer plugin.
> For instance,
> {code:java}
> [INFO] --- maven-enforcer-plugin:3.0.0:enforce (banned-illegal-imports) @ hadoop-cloud-storage ---
> [INFO]
> [INFO] --- replacer:1.5.3:replace (replace-generated-sources) @ hadoop-cloud-storage ---
> [INFO] Skipping
> [INFO]
> [INFO] --- replacer:1.5.3:replace (replace-sources) @ hadoop-cloud-storage ---
> [INFO] Skipping
> [INFO]
> [INFO] --- replacer:1.5.3:replace (replace-guava) @ hadoop-cloud-storage ---
> [INFO] Replacement run on 0 file.
> [INFO] {code}
> Hence, if our source code uses com.google.common, banned-illegal-imports will
> cause the build failure and replacer plugin would not even get executed.
> We should remove it as it is only redundant execution step.
-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #4152: HADOOP-18196. Remove replace-guava from replacer plugin
virajjasani commented on PR #4152: URL: https://github.com/apache/hadoop/pull/4152#issuecomment-1095307329 Thank you for the review @steveloughran. Could you please help merge this PR? This change is only meant for trunk, no backport required. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] simbadzina commented on a diff in pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation
simbadzina commented on code in PR #4127: URL: https://github.com/apache/hadoop/pull/4127#discussion_r847541480 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java: ## @@ -79,6 +79,8 @@ String DFS_NAMENODE_HTTPS_ADDRESS_KEY = "dfs.namenode.https-address"; String DFS_HA_NAMENODES_KEY_PREFIX = "dfs.ha.namenodes"; int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020; + String DFS_OBSERVER_READ_ENABLE = "dfs.observer.read.enable"; + boolean DFS_OBSERVER_READ_ENABLE_DEFAULT = true; Review Comment: A default of false preserves client behavior of sending the actual state ID. When this is true the client sends "-1" as its last seen state ID. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
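To make the behavior discussed in the comment above concrete, here is a small, hypothetical Java sketch. The constant names mirror the diff in the message; the `lastSeenStateId` helper and the `ObserverReadSketch` class are invented purely for illustration and are not part of the actual patch.

```java
// Hypothetical sketch of the client behavior described in the review comment:
// with observer reads enabled, the client sends -1 as its last seen state ID;
// with the feature off (a default of false), it sends its actual state ID.
public class ObserverReadSketch {
    // Constant names mirrored from the diff under review.
    static final String DFS_OBSERVER_READ_ENABLE = "dfs.observer.read.enable";
    // A default of false preserves the old behavior (send the real state ID).
    static final boolean DFS_OBSERVER_READ_ENABLE_DEFAULT = false;

    // Invented helper: the state ID a client would attach to a read call.
    static long lastSeenStateId(boolean observerReadEnabled, long actualStateId) {
        return observerReadEnabled ? -1L : actualStateId;
    }

    public static void main(String[] args) {
        System.out.println(lastSeenStateId(false, 42L)); // prints 42
        System.out.println(lastSeenStateId(true, 42L));  // prints -1
    }
}
```

This is why the reviewer argues for a `false` default: existing clients keep advertising their real state ID unless the new feature is explicitly turned on.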
[GitHub] [hadoop] simbadzina commented on a diff in pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation
simbadzina commented on code in PR #4127: URL: https://github.com/apache/hadoop/pull/4127#discussion_r847532359 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java: ## @@ -1380,8 +1437,9 @@ private static boolean isExpectedValue(Object expectedValue, Object value) { final CallerContext originContext = CallerContext.getCurrent(); for (final T location : locations) { String nsId = location.getNameserviceId(); + boolean isObserverRead = observerReadEnabled && isReadCall(m); final List namenodes = - getNamenodesForNameservice(nsId); + msync(nsId, ugi, isObserverRead); Review Comment: @tomscut for "Here's how we do it.", is there a link you meant to attach? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.
[ https://issues.apache.org/jira/browse/HADOOP-15983?focusedWorklogId=755253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755253 ]

ASF GitHub Bot logged work on HADOOP-15983:
---
Author: ASF GitHub Bot
Created on: 11/Apr/22 14:58
Start Date: 11/Apr/22 14:58
Worklog Time Spent: 10m

Work Description: pjfanning commented on PR #3988: URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1095160613

@aajisaka there is still a problem with this change - see https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/patch-mvninstall-root.txt

```
[INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [ 0.824 s]
```

The issue is this (truncated a bit):

```
[ERROR] Found artifact with unexpected contents: '/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-3988/ubuntu-focal/src/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.4.0-SNAPSHOT.jar'
Please check the following and either correct the build or update the allowed list with reasoning.
javax/
javax/xml/
javax/xml/bind/
javax/xml/bind/annotation/
javax/xml/bind/annotation/adapters/
javax/xml/bind/annotation/adapters/XmlAdapter.class
javax/xml/bind/annotation/adapters/XmlJavaTypeAdapters.class
javax/xml/bind/annotation/adapters/XmlJavaTypeAdapter$DEFAULT.class
javax/xml/bind/annotation/adapters/XmlJavaTypeAdapter.class
javax/xml/bind/annotation/adapters/CollapsedStringAdapter.class
javax/xml/bind/annotation/adapters/HexBinaryAdapter.class
javax/xml/bind/annotation/adapters/NormalizedStringAdapter.class
javax/xml/bind/annotation/XmlValue.class
javax/xml/bind/annotation/XmlRegistry.class
javax/xml/bind/annotation/XmlElements.class
javax/xml/bind/annotation/XmlElement$DEFAULT.class
javax/xml/bind/annotation/XmlElement.class
javax/xml/bind/annotation/XmlSchema.class
javax/xml/bind/annotation/XmlNs.class
javax/xml/bind/annotation/XmlNsForm.class
javax/xml/bind/annotation/XmlType$DEFAULT.class
javax/xml/bind/annotation/XmlType.class
javax/xml/bind/annotation/XmlElementRefs.class
javax/xml/bind/annotation/XmlElementRef$DEFAULT.class
javax/xml/bind/annotation/XmlElementRef.class
javax/xml/bind/annotation/XmlElementDecl$GLOBAL.class
javax/xml/bind/annotation/XmlElementDecl.class
javax/xml/bind/annotation/XmlElementWrapper.class
javax/xml/bind/annotation/DomHandler.class
javax/xml/bind/annotation/XmlMimeType.class
javax/xml/bind/annotation/XmlSeeAlso.class
javax/xml/bind/annotation/W3CDomHandler.class
javax/xml/bind/annotation/XmlIDREF.class
javax/xml/bind/annotation/XmlAccessType.class
javax/xml/bind/annotation/XmlAccessorOrder.class
javax/xml/bind/annotation/XmlAccessOrder.class
javax/xml/bind/annotation/XmlAttachmentRef.class
javax/xml/bind/annotation/XmlAnyElement.class
javax/xml/bind/annotation/XmlSchemaTypes.class
javax/xml/bind/annotation/XmlSchemaType$DEFAULT.class
javax/xml/bind/annotation/XmlSchemaType.class
javax/xml/bind/annotation/XmlRootElement.class
javax/xml/bind/annotation/XmlAttribute.class
javax/xml/bind/annotation/XmlMixed.class
javax/xml/bind/annotation/XmlAccessorType.class
```

Would you have any advice on how to proceed?

Issue Time Tracking
---
Worklog Id: (was: 755253)
Time Spent: 2h 40m (was: 2.5h)

> Remove the usage of jersey-json to remove jackson 1.x dependency.
> -
>
> Key: HADOOP-15983
> URL: https://issues.apache.org/jira/browse/HADOOP-15983
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 40m
> Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
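For readers following along: an invariants failure like the one above is usually resolved either by adding the leaked entries to the allowed list with a justification, or by filtering them out of the shaded jar. The following maven-shade-plugin fragment is only a hypothetical sketch of the second option; the artifact coordinates (`jakarta.xml.bind:jakarta.xml.bind-api`) are an assumption for illustration and are not taken from the Hadoop build.

```xml
<!-- Hypothetical sketch: keep javax.xml.bind classes out of a shaded jar
     by filtering the (assumed) artifact that contributes them. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <filter>
        <!-- Assumed coordinates; check `mvn dependency:tree` for the real source. -->
        <artifact>jakarta.xml.bind:jakarta.xml.bind-api</artifact>
        <excludes>
          <exclude>javax/xml/bind/**</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
```

Whether to filter or to allow-list depends on whether downstream minicluster users actually need the JAXB annotation classes on the classpath.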
[GitHub] [hadoop] pjfanning commented on pull request #3988: [HADOOP-15983] use jersey-json that is built to use jackson2
pjfanning commented on PR #3988: URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1095160613 @aajisaka there is still a problem with this change - see https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/patch-mvninstall-root.txt ``` [INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [ 0.824 s] ``` The issue is this (truncated a bit): ``` [ERROR] Found artifact with unexpected contents: '/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-3988/ubuntu-focal/src/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.4.0-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning. javax/ javax/xml/ javax/xml/bind/ javax/xml/bind/annotation/ javax/xml/bind/annotation/adapters/ javax/xml/bind/annotation/adapters/XmlAdapter.class javax/xml/bind/annotation/adapters/XmlJavaTypeAdapters.class javax/xml/bind/annotation/adapters/XmlJavaTypeAdapter$DEFAULT.class javax/xml/bind/annotation/adapters/XmlJavaTypeAdapter.class javax/xml/bind/annotation/adapters/CollapsedStringAdapter.class javax/xml/bind/annotation/adapters/HexBinaryAdapter.class javax/xml/bind/annotation/adapters/NormalizedStringAdapter.class javax/xml/bind/annotation/XmlValue.class javax/xml/bind/annotation/XmlRegistry.class javax/xml/bind/annotation/XmlElements.class javax/xml/bind/annotation/XmlElement$DEFAULT.class javax/xml/bind/annotation/XmlElement.class javax/xml/bind/annotation/XmlSchema.class javax/xml/bind/annotation/XmlNs.class javax/xml/bind/annotation/XmlNsForm.class javax/xml/bind/annotation/XmlType$DEFAULT.class javax/xml/bind/annotation/XmlType.class javax/xml/bind/annotation/XmlElementRefs.class javax/xml/bind/annotation/XmlElementRef$DEFAULT.class javax/xml/bind/annotation/XmlElementRef.class javax/xml/bind/annotation/XmlElementDecl$GLOBAL.class javax/xml/bind/annotation/XmlElementDecl.class javax/xml/bind/annotation/XmlElementWrapper.class 
javax/xml/bind/annotation/DomHandler.class javax/xml/bind/annotation/XmlMimeType.class javax/xml/bind/annotation/XmlSeeAlso.class javax/xml/bind/annotation/W3CDomHandler.class javax/xml/bind/annotation/XmlIDREF.class javax/xml/bind/annotation/XmlAccessType.class javax/xml/bind/annotation/XmlAccessorOrder.class javax/xml/bind/annotation/XmlAccessOrder.class javax/xml/bind/annotation/XmlAttachmentRef.class javax/xml/bind/annotation/XmlAnyElement.class javax/xml/bind/annotation/XmlSchemaTypes.class javax/xml/bind/annotation/XmlSchemaType$DEFAULT.class javax/xml/bind/annotation/XmlSchemaType.class javax/xml/bind/annotation/XmlRootElement.class javax/xml/bind/annotation/XmlAttribute.class javax/xml/bind/annotation/XmlMixed.class javax/xml/bind/annotation/XmlAccessorType.class ``` Would you have any advice on how to proceed? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
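For context on how this kind of leak is typically handled: the unexpected `javax/xml/bind/**` classes are being dragged into the shaded minicluster jar by a transitive dependency of the new jersey-json artifact. One way to keep them out — a sketch only, assuming the classes are not needed at runtime, and not necessarily the fix the maintainers chose — is a maven-shade-plugin filter in the `hadoop-client-minicluster` shade configuration:

```xml
<!-- Hypothetical filter for the hadoop-client-minicluster shade configuration.
     Drops the leaked JAXB annotation classes from all shaded artifacts. -->
<filter>
  <artifact>*:*</artifact>
  <excludes>
    <exclude>javax/xml/bind/**</exclude>
  </excludes>
</filter>
```

The alternative the error message itself suggests is to add the entries to the allowed list, with a written justification.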
[jira] [Created] (HADOOP-18198) Release Hadoop 3.3.3: hadoop-3.3.2 with CVE fixes
Steve Loughran created HADOOP-18198: --- Summary: Release Hadoop 3.3.3: hadoop-3.3.2 with CVE fixes Key: HADOOP-18198 URL: https://issues.apache.org/jira/browse/HADOOP-18198 Project: Hadoop Common Issue Type: Task Components: build Affects Versions: 3.3.2 Reporter: Steve Loughran Assignee: Steve Loughran Hadoop 3.3.3 is a minor followup release to Hadoop 3.3.2 with * CVE fixes in Hadoop source * CVE fixes in dependencies * replacement of log4j 1.2.17 with reload4j * some changes which shipped in hadoop 3.2.3 for consistency This is not a release off branch-3.3, it is a fork of 3.3.2 with the changes. The next release of branch-3.3 will be numbered hadoop-3.4; updating maven versions and JIRA fix versions is part of this release process.
[jira] [Work logged] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.
[ https://issues.apache.org/jira/browse/HADOOP-15983?focusedWorklogId=755226=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755226 ] ASF GitHub Bot logged work on HADOOP-15983: --- Author: ASF GitHub Bot Created on: 11/Apr/22 13:37 Start Date: 11/Apr/22 13:37 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #3988: URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1095063897 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 23s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 12s | | trunk passed | | +1 :green_heart: | compile | 22m 53s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 20m 5s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | mvnsite | 27m 26s | | trunk passed | | +1 :green_heart: | javadoc | 8m 12s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 8m 6s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 35m 38s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 35m 57s | | Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 44s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 21m 16s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. | | +1 :green_heart: | compile | 22m 11s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 22m 11s | | the patch passed | | +1 :green_heart: | compile | 20m 2s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 21m 31s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 15s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 8m 40s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 8m 17s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | shadedclient | 38m 11s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 775m 0s | | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 29s | | The patch does not generate ASF License warnings. 
| | | | 1062m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3988 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml shellcheck shelldocs | | uname | Linux 9bd6e244dcc4 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2387b0a6a06b413f69234e99308edca951126647 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04
[GitHub] [hadoop] hadoop-yetus commented on pull request #3988: [HADOOP-15983] use jersey-json that is built to use jackson2
hadoop-yetus commented on PR #3988: URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1095063897 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 23s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 12s | | trunk passed | | +1 :green_heart: | compile | 22m 53s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 20m 5s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | mvnsite | 27m 26s | | trunk passed | | +1 :green_heart: | javadoc | 8m 12s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 8m 6s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 35m 38s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 35m 57s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 44s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 21m 16s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. | | +1 :green_heart: | compile | 22m 11s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 22m 11s | | the patch passed | | +1 :green_heart: | compile | 20m 2s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 21m 31s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 15s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 8m 40s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 8m 17s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | shadedclient | 38m 11s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 775m 0s | | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 29s | | The patch does not generate ASF License warnings. 
| | | | 1062m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3988 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml shellcheck shelldocs | | uname | Linux 9bd6e244dcc4 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2387b0a6a06b413f69234e99308edca951126647 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3988/16/testReport/ | | Max. process+thread count | 3192 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
[GitHub] [hadoop] MingXiangLi commented on pull request #4141: HDFS-16534. Split FsDatasetImpl from block pool locks to volume grain locks.
MingXiangLi commented on PR #4141: URL: https://github.com/apache/hadoop/pull/4141#issuecomment-1095060902 > Some methods are not called frequently, and if they were split to volume locks they would have to acquire and release the locks in sequence, so just acquiring the block pool lock is enough.
[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=755223=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755223 ] ASF GitHub Bot logged work on HADOOP-16202: --- Author: ASF GitHub Bot Created on: 11/Apr/22 13:34 Start Date: 11/Apr/22 13:34 Worklog Time Spent: 10m Work Description: mehakmeet commented on code in PR #2584: URL: https://github.com/apache/hadoop/pull/2584#discussion_r847102156 ## hadoop-common-project/hadoop-common/src/site/markdown/filesystem/openfile.md: ## @@ -0,0 +1,122 @@ + + +# `FileSystem.openFile()`/`FileContext.openFile()` + +This is a method provided by both FileSystem and FileContext for +advanced file opening options and, where implemented, +an asynchrounous/lazy opening of a file. + +Creates a builder to open a file, supporting options +both standard and filesystem specific. The return +value of the `build()` call is a `Future`, +which must be waited on. The file opening may be +asynchronous, and it may actually be postponed (including +permission/existence checks) until reads are actually +performed. + +This API call was added to `FileSystem` and `FileContext` in +Hadoop 3.3.0; it was tuned in Hadoop 3.3.1 as follows. + +* Added `opt(key, long)` and `must(key, long)`. +* Declared that `withFileStatus(null)` is allowed. +* Declared that `withFileStatus(status)` only checks + the filename of the path, not the full path. + This is needed to support passthrough/mounted filesystems. +* Added standard option keys. + +### `FutureDataInputStreamBuilder openFile(Path path)` + +Creates a [`FutureDataInputStreamBuilder`](fsdatainputstreambuilder.html) +to construct a operation to open the file at `path` for reading. + +When `build()` is invoked on the returned `FutureDataInputStreamBuilder` instance, +the builder parameters are verified and +`FileSystem.openFileWithOptions(Path, OpenFileParameters)` or +`AbstractFileSystem.openFileWithOptions(Path, OpenFileParameters)` invoked. 
+ +These protected methods returns a `CompletableFuture` +which, when its `get()` method is called, either returns an input +stream of the contents of opened file, or raises an exception. + +The base implementation of the `FileSystem.openFileWithOptions(PathHandle, OpenFileParameters)` +ultimately invokes `FileSystem.open(Path, int)`. + +Thus the chain `FileSystem.openFile(path).build().get()` has the same preconditions +and postconditions as `FileSystem.open(Path p, int bufferSize)` + +However, there is one difference which implementations are free to +take advantage of: + +The returned stream MAY implement a lazy open where file non-existence or +access permission failures may not surface until the first `read()` of the +actual data. + +This saves network IO on object stores. + +The `openFile()` operation MAY check the state of the filesystem during its +invocation, but as the state of the filesystem may change betwen this call and Review Comment: typo: "between" ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java: ## @@ -4799,23 +4802,26 @@ public AWSCredentialProviderList shareCredentials(final String purpose) { @Retries.RetryTranslated @AuditEntryPoint private FSDataInputStream select(final Path source, - final String expression, final Configuration options, - final Optional providedStatus) + final OpenFileSupport.OpenFileInformation fileInformation) Review Comment: Javadoc correction: Remove params not needed from javadocs of this method. ## hadoop-common-project/hadoop-common/src/site/markdown/filesystem/openfile.md: ## @@ -0,0 +1,122 @@ + + +# `FileSystem.openFile()`/`FileContext.openFile()` + +This is a method provided by both FileSystem and FileContext for +advanced file opening options and, where implemented, +an asynchrounous/lazy opening of a file. + +Creates a builder to open a file, supporting options +both standard and filesystem specific. 
The return +value of the `build()` call is a `Future`, +which must be waited on. The file opening may be +asynchronous, and it may actually be postponed (including +permission/existence checks) until reads are actually +performed. + +This API call was added to `FileSystem` and `FileContext` in +Hadoop 3.3.0; it was tuned in Hadoop 3.3.1 as follows. + +* Added `opt(key, long)` and `must(key, long)`. +* Declared that `withFileStatus(null)` is allowed. +* Declared that `withFileStatus(status)` only checks + the filename of the path, not the full path. + This is needed to support passthrough/mounted filesystems. +* Added standard option keys. + +### `FutureDataInputStreamBuilder openFile(Path path)` + +Creates a [`FutureDataInputStreamBuilder`](fsdatainputstreambuilder.html) +to construct a operation to open the file at `path` for reading. + +When `build()` is invoked on the returned
[GitHub] [hadoop] mehakmeet commented on a diff in pull request #2584: HADOOP-16202. Enhance openFile() for better read performance against object stores
mehakmeet commented on code in PR #2584: URL: https://github.com/apache/hadoop/pull/2584#discussion_r847102156 ## hadoop-common-project/hadoop-common/src/site/markdown/filesystem/openfile.md: ## @@ -0,0 +1,122 @@ + + +# `FileSystem.openFile()`/`FileContext.openFile()` + +This is a method provided by both FileSystem and FileContext for +advanced file opening options and, where implemented, +an asynchrounous/lazy opening of a file. + +Creates a builder to open a file, supporting options +both standard and filesystem specific. The return +value of the `build()` call is a `Future`, +which must be waited on. The file opening may be +asynchronous, and it may actually be postponed (including +permission/existence checks) until reads are actually +performed. + +This API call was added to `FileSystem` and `FileContext` in +Hadoop 3.3.0; it was tuned in Hadoop 3.3.1 as follows. + +* Added `opt(key, long)` and `must(key, long)`. +* Declared that `withFileStatus(null)` is allowed. +* Declared that `withFileStatus(status)` only checks + the filename of the path, not the full path. + This is needed to support passthrough/mounted filesystems. +* Added standard option keys. + +### `FutureDataInputStreamBuilder openFile(Path path)` + +Creates a [`FutureDataInputStreamBuilder`](fsdatainputstreambuilder.html) +to construct a operation to open the file at `path` for reading. + +When `build()` is invoked on the returned `FutureDataInputStreamBuilder` instance, +the builder parameters are verified and +`FileSystem.openFileWithOptions(Path, OpenFileParameters)` or +`AbstractFileSystem.openFileWithOptions(Path, OpenFileParameters)` invoked. + +These protected methods returns a `CompletableFuture` +which, when its `get()` method is called, either returns an input +stream of the contents of opened file, or raises an exception. + +The base implementation of the `FileSystem.openFileWithOptions(PathHandle, OpenFileParameters)` +ultimately invokes `FileSystem.open(Path, int)`. 
+ +Thus the chain `FileSystem.openFile(path).build().get()` has the same preconditions +and postconditions as `FileSystem.open(Path p, int bufferSize)` + +However, there is one difference which implementations are free to +take advantage of: + +The returned stream MAY implement a lazy open where file non-existence or +access permission failures may not surface until the first `read()` of the +actual data. + +This saves network IO on object stores. + +The `openFile()` operation MAY check the state of the filesystem during its +invocation, but as the state of the filesystem may change betwen this call and Review Comment: typo: "between" ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java: ## @@ -4799,23 +4802,26 @@ public AWSCredentialProviderList shareCredentials(final String purpose) { @Retries.RetryTranslated @AuditEntryPoint private FSDataInputStream select(final Path source, - final String expression, final Configuration options, - final Optional providedStatus) + final OpenFileSupport.OpenFileInformation fileInformation) Review Comment: Javadoc correction: Remove params not needed from javadocs of this method. ## hadoop-common-project/hadoop-common/src/site/markdown/filesystem/openfile.md: ## @@ -0,0 +1,122 @@ + + +# `FileSystem.openFile()`/`FileContext.openFile()` + +This is a method provided by both FileSystem and FileContext for +advanced file opening options and, where implemented, +an asynchrounous/lazy opening of a file. + +Creates a builder to open a file, supporting options +both standard and filesystem specific. The return +value of the `build()` call is a `Future`, +which must be waited on. The file opening may be +asynchronous, and it may actually be postponed (including +permission/existence checks) until reads are actually +performed. + +This API call was added to `FileSystem` and `FileContext` in +Hadoop 3.3.0; it was tuned in Hadoop 3.3.1 as follows. + +* Added `opt(key, long)` and `must(key, long)`. 
+* Declared that `withFileStatus(null)` is allowed. +* Declared that `withFileStatus(status)` only checks + the filename of the path, not the full path. + This is needed to support passthrough/mounted filesystems. +* Added standard option keys. + +### `FutureDataInputStreamBuilder openFile(Path path)` + +Creates a [`FutureDataInputStreamBuilder`](fsdatainputstreambuilder.html) +to construct a operation to open the file at `path` for reading. + +When `build()` is invoked on the returned `FutureDataInputStreamBuilder` instance, +the builder parameters are verified and +`FileSystem.openFileWithOptions(Path, OpenFileParameters)` or +`AbstractFileSystem.openFileWithOptions(Path, OpenFileParameters)` invoked. + +These protected methods returns a `CompletableFuture` +which, when its `get()` method is called, either returns an input +stream of the contents of opened file, or raises an exception. + +The base
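To make the lazy-open contract in the quoted documentation concrete, here is a plain-Java sketch (the `openFile` helper below is hypothetical, not Hadoop's actual implementation): the returned future defers the open, so existence and permission failures surface only when `get()` or a later read is called, which is the behaviour the specification allows object-store connectors to exploit.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;

public class LazyOpenDemo {
    // Mirrors the MAY-be-lazy contract of FileSystem.openFile(): no existence
    // or permission check happens until the future is completed.
    static CompletableFuture<InputStream> openFile(Path path) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return Files.newInputStream(path); // checks happen here, not at call time
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("lazy", ".txt");
        Files.write(tmp, "hello".getBytes());
        try (InputStream in = openFile(tmp).get()) {
            System.out.println(new String(in.readAllBytes())); // prints "hello"
        }
        Files.delete(tmp);
    }
}
```

Opening a path that does not exist fails only at `get()`, not at `openFile()` — the property that saves a network round trip on object stores.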
[GitHub] [hadoop] MingXiangLi commented on a diff in pull request #4141: HDFS-16534. Split FsDatasetImpl from block pool locks to volume grain locks.
MingXiangLi commented on code in PR #4141: URL: https://github.com/apache/hadoop/pull/4141#discussion_r847329561 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java: ## @@ -629,6 +634,9 @@ public void removeVolumes( synchronized (this) { for (String storageUuid : storageToRemove) { storageMap.remove(storageUuid); +for (String bp : volumeMap.getBlockPoolList()) { Review Comment: It may add a lot of locks. All methods related to adding/removing locks acquire `synchronized` first.
[GitHub] [hadoop] MingXiangLi commented on a diff in pull request #4141: HDFS-16534. Split FsDatasetImpl from block pool locks to volume grain locks.
MingXiangLi commented on code in PR #4141: URL: https://github.com/apache/hadoop/pull/4141#discussion_r847326503 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java: ## @@ -1887,12 +1896,12 @@ public ReplicaHandler createTemporary(StorageType storageType, false); } long startHoldLockTimeMs = Time.monotonicNow(); -try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, -b.getBlockPoolId())) { - FsVolumeReference ref = volumes.getNextVolume(storageType, storageId, b - .getNumBytes()); - FsVolumeImpl v = (FsVolumeImpl) ref.getVolume(); - ReplicaInPipeline newReplicaInfo; +FsVolumeReference ref = volumes.getNextVolume(storageType, storageId, b Review Comment: volumes.getNextVolume() is thread-safe, so it does not need to be protected by the dataset lock.
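The design being debated in this review — per-volume locks for hot per-replica paths versus one coarse block-pool lock for rare structural changes — can be sketched in plain Java (class and method names below are hypothetical, not the actual FsDatasetImpl/lockManager API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class VolumeGrainLocks {
    private final ReentrantReadWriteLock blockPoolLock = new ReentrantReadWriteLock();
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> volumeLocks =
        new ConcurrentHashMap<>();

    // Frequent per-replica operations take only the lock of the volume they touch,
    // so writes on different volumes no longer contend with each other.
    public void withVolumeWrite(String volume, Runnable op) {
        ReentrantReadWriteLock l =
            volumeLocks.computeIfAbsent(volume, v -> new ReentrantReadWriteLock());
        l.writeLock().lock();
        try { op.run(); } finally { l.writeLock().unlock(); }
    }

    // Rare structural operations (e.g. add/remove volume) take the coarse pool
    // lock instead of acquiring and releasing every volume lock in sequence.
    public void withPoolWrite(Runnable op) {
        blockPoolLock.writeLock().lock();
        try { op.run(); } finally { blockPoolLock.writeLock().unlock(); }
    }
}
```

The point in the comment above is the second method: for infrequent operations, holding the single pool-level lock is simpler than walking every volume lock and costs nothing in practice.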
[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17520566#comment-17520566 ] Steve Loughran commented on HADOOP-13363: - we've just had HADOOP-18197 reported and CVE-2021-22569. protobuf 3.7.1 is still vulnerable by the look of things (SOLR-15911), which probably means: move to 3.18.x. joy. > Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Anu Engineer >Assignee: Vinayakumar B >Priority: Major > Labels: security > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else.
[jira] [Commented] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569
[ https://issues.apache.org/jira/browse/HADOOP-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17520557#comment-17520557 ] Steve Loughran commented on HADOOP-18197: - [~ivan.viaznikov] HADOOP-16557 upgraded our internal binaries to compile against 3.7.1; as we shade the classes, we can update/upgrade without the risk of breaking every other app. we do still ship the old jar, which is something we can revisit. we will need to update our own protobuf version though > Update protobuf 3.7.1 to a version without CVE-2021-22569 > - > > Key: HADOOP-18197 > URL: https://issues.apache.org/jira/browse/HADOOP-18197 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ivan Viaznikov >Priority: Major > Labels: security > > The artifact `org.apache.hadoop:hadoop-common` brings in a dependency > `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version > released in 2013 and it contains a vulnerability > [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569]. > Therefore, requesting you to clarify if this library version is going to be > updated in the following releases
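The shading approach mentioned above (HADOOP-16557) can be illustrated with a maven-shade-plugin relocation; the coordinates and pattern below are a sketch of the idea, not Hadoop's exact build configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Rewrites com.google.protobuf.* references to a private package, so
           Hadoop can upgrade its internal protobuf copy without clashing with
           whatever protobuf-java version an application brings itself. -->
      <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>org.apache.hadoop.thirdparty.protobuf</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

This is why the internal upgrade is low-risk, while the separately shipped unshaded jar still has to be dealt with on its own.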
[jira] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569
[ https://issues.apache.org/jira/browse/HADOOP-18197 ] Steve Loughran deleted comment on HADOOP-18197: - was (Author: ste...@apache.org): duplicate of HADOOP-17860 your involvement here would be welcome; the move to shaded versions of the library does now make it possible without breaking everything else. thanks
[jira] [Updated] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569
[ https://issues.apache.org/jira/browse/HADOOP-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18197: Summary: Update protobuf 3.7.1 to a version without CVE-2021-22569 (was: Update the vulnerable protobuf-java:2.5.0 to a newer version)
[jira] [Updated] (HADOOP-17860) Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544,
[ https://issues.apache.org/jira/browse/HADOOP-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17860: Summary: Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544, (was: Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544, CVE-2021-22569) > Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities > #CVE-2015-5237, CVE-2019-15544, > -- > > Key: HADOOP-17860 > URL: https://issues.apache.org/jira/browse/HADOOP-17860 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sushanta Sen >Priority: Major > > Third party jar protobuf-java-2.5.0.jar reports vulnerabilities # > CVE-2015-5237, CVE-2019-15544 and need to be upgraded. > CVE-2019-15544: > Vulnerability Description:An issue was discovered in the protobuf crate > before 2.6.0 for Rust. Attackers can exhaust all memory via Vec::reserve > calls. > CVE-2015-5237: > Vulnerability Description:protobuf allows remote authenticated attackers to > cause a heap-based buffer overflow. > > Please review and let me know if you have any concerns or would like to add > more details to upgrade. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-18197) Update the vulnerable protobuf-java:2.5.0 to a newer version
[ https://issues.apache.org/jira/browse/HADOOP-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-18197: - as the real protobuf jar is vulnerable to the CVE, reopening this and changing the title. this is not *just* a protobuf 2.5 issue > Update the vulnerable protobuf-java:2.5.0 to a newer version > > > Key: HADOOP-18197 > URL: https://issues.apache.org/jira/browse/HADOOP-18197 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ivan Viaznikov >Priority: Major > Labels: security > > The artifact `org.apache.hadoop:hadoop-common` brings in a dependency > `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version > released in 2013 and it contains a vulnerability > [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569]. > Therefore, requesting you to clarify if this library version is going to be > updated in the following releases -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17860) Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544, CVE-2021-22569
[ https://issues.apache.org/jira/browse/HADOOP-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17860: Summary: Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544, CVE-2021-22569 (was: Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities #CVE-2015-5237, CVE-2019-15544) > Upgrade third party protobuf-java-2.5.0.jar to address vulnerabilities > #CVE-2015-5237, CVE-2019-15544, CVE-2021-22569 > -- > > Key: HADOOP-17860 > URL: https://issues.apache.org/jira/browse/HADOOP-17860 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sushanta Sen >Priority: Major > > Third party jar protobuf-java-2.5.0.jar reports vulnerabilities # > CVE-2015-5237, CVE-2019-15544 and need to be upgraded. > CVE-2019-15544: > Vulnerability Description:An issue was discovered in the protobuf crate > before 2.6.0 for Rust. Attackers can exhaust all memory via Vec::reserve > calls. > CVE-2015-5237: > Vulnerability Description:protobuf allows remote authenticated attackers to > cause a heap-based buffer overflow. > > Please review and let me know if you have any concerns or would like to add > more details to upgrade. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18197) Update the vulnerable protobuf-java:2.5.0 to a newer version
[ https://issues.apache.org/jira/browse/HADOOP-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-18197. - Resolution: Duplicate duplicate of HADOOP-17860 your involvement here would be welcome; the move to shaded versions of the library does now make it possible without breaking everything else. thanks > Update the vulnerable protobuf-java:2.5.0 to a newer version > > > Key: HADOOP-18197 > URL: https://issues.apache.org/jira/browse/HADOOP-18197 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ivan Viaznikov >Priority: Major > Labels: security > > The artifact `org.apache.hadoop:hadoop-common` brings in a dependency > `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version > released in 2013 and it contains a vulnerability > [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569]. > Therefore, requesting you to clarify if this library version is going to be > updated in the following releases
[GitHub] [hadoop] liubingxing commented on a diff in pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
liubingxing commented on code in PR #4032: URL: https://github.com/apache/hadoop/pull/4032#discussion_r847262936 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java: ## @@ -248,13 +250,18 @@ public void run() { pendingWorkForDirectory.get(startINode); if (dirPendingWorkInfo != null && dirPendingWorkInfo.isDirWorkDone()) { -ctxt.removeSPSHint(startINode); pendingWorkForDirectory.remove(startINode); +ctxt.removeSPSHint(startINode); Review Comment: @tasanuma Thanks for your suggestion. It is also a good way to solve this problem by catching the FileNotFoundException here. I updated the code according to your suggestion.
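The retry-and-skip pattern discussed in this review thread can be sketched in isolation as follows. This is a simplified, hypothetical illustration, not the actual `BlockStorageMovementNeeded`/`SPSPathIdProcessor` code: `removeSPSHint` and `processInode` are stand-ins showing the two fixes under discussion — treating a `FileNotFoundException` as "already cleaned up", and bounding retries so one bad inode cannot loop the thread forever.

```java
import java.io.FileNotFoundException;

/**
 * Illustrative sketch only (hypothetical names, not the real Hadoop classes)
 * of the two HDFS-16484 fixes: treat FileNotFoundException from the hint
 * removal as "already done", and cap retries with a named constant.
 */
public class RetrySkipSketch {
  private static final int MAX_RETRY_COUNT = 3; // suggested named constant

  /** Stand-in for ctxt.removeSPSHint(inode); the file may already be gone. */
  static void removeSPSHint(long inode) throws FileNotFoundException {
    throw new FileNotFoundException("inode " + inode + " no longer exists");
  }

  /** Returns true when the inode is done (or skipped), false to retry it. */
  static boolean processInode(long inode, int[] retryCount) {
    try {
      try {
        removeSPSHint(inode);
      } catch (FileNotFoundException e) {
        // The file was deleted out from under us: nothing left to clean up.
        return true;
      }
      return true; // scanned successfully
    } catch (Throwable t) {
      retryCount[0]++;
      // Skip the inode after MAX_RETRY_COUNT failures instead of
      // retrying it indefinitely (the reported infinite loop).
      return retryCount[0] >= MAX_RETRY_COUNT;
    }
  }

  public static void main(String[] args) {
    // A missing inode is handled on the first pass via the FNFE catch.
    System.out.println(processInode(42L, new int[] {0})); // prints "true"
  }
}
```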
[GitHub] [hadoop] hadoop-yetus commented on pull request #4151: HADOOP-18088. Replace log4j 1.x with reload4j.
hadoop-yetus commented on PR #4151: URL: https://github.com/apache/hadoop/pull/4151#issuecomment-1094927966 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-2.10 Compile Tests _ | | +0 :ok: | mvndep | 3m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 14m 30s | | branch-2.10 passed | | +1 :green_heart: | compile | 13m 57s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 11m 46s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | checkstyle | 2m 15s | | branch-2.10 passed | | +1 :green_heart: | mvnsite | 12m 35s | | branch-2.10 passed | | -1 :x: | javadoc | 0m 47s | [/branch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | root in branch-2.10 failed with JDK Azul Systems, Inc.-1.7.0_262-b10. 
| | +1 :green_heart: | javadoc | 5m 49s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +0 :ok: | spotbugs | 0m 39s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 18s | | branch/hadoop-assemblies no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 27m 28s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-root-warnings.html) | root in branch-2.10 has 4 extant spotbugs warnings. | | -1 :x: | spotbugs | 1m 58s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. | | -1 :x: | spotbugs | 2m 20s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 21m 3s | | the patch passed | | +1 :green_heart: | compile | 13m 5s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 13m 5s | | the patch passed | | +1 :green_heart: | compile | 10m 59s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | javac | 10m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 2m 5s | | the patch passed | | +1 :green_heart: | mvnsite | 10m 54s | | the patch passed | | +1 :green_heart: | xml | 0m 32s | | The patch has no ill-formed XML file. | | -1 :x: | javadoc | 0m 26s | [/patch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/patch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | root in the patch failed with JDK Azul Systems, Inc.-1.7.0_262-b10. | | +1 :green_heart: | javadoc | 5m 24s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +0 :ok: | spotbugs | 0m 14s | | hadoop-project has no data from spotbugs | | +0 :ok: | spotbugs | 0m 14s | | hadoop-assemblies has no data from spotbugs | _ Other Tests _ | | -1 :x: | unit | 435m 36s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/patch-unit-root.txt) | root in the patch passed. | | -1 :x: | asflicense | 1m 20s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. | | | | 671m 20s | | | | Reason | Tests | |---:|:--| |
[jira] [Work logged] (HADOOP-18088) Replace log4j 1.x with reload4j
[ https://issues.apache.org/jira/browse/HADOOP-18088?focusedWorklogId=755182=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755182 ] ASF GitHub Bot logged work on HADOOP-18088: --- Author: ASF GitHub Bot Created on: 11/Apr/22 11:19 Start Date: 11/Apr/22 11:19 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4151: URL: https://github.com/apache/hadoop/pull/4151#issuecomment-1094927966 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-2.10 Compile Tests _ | | +0 :ok: | mvndep | 3m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 14m 30s | | branch-2.10 passed | | +1 :green_heart: | compile | 13m 57s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 11m 46s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | checkstyle | 2m 15s | | branch-2.10 passed | | +1 :green_heart: | mvnsite | 12m 35s | | branch-2.10 passed | | -1 :x: | javadoc | 0m 47s | [/branch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | root in branch-2.10 failed with JDK Azul Systems, Inc.-1.7.0_262-b10. 
| | +1 :green_heart: | javadoc | 5m 49s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +0 :ok: | spotbugs | 0m 39s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 18s | | branch/hadoop-assemblies no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 27m 28s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-root-warnings.html) | root in branch-2.10 has 4 extant spotbugs warnings. | | -1 :x: | spotbugs | 1m 58s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. | | -1 :x: | spotbugs | 2m 20s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 21m 3s | | the patch passed | | +1 :green_heart: | compile | 13m 5s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 13m 5s | | the patch passed | | +1 :green_heart: | compile | 10m 59s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | javac | 10m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 2m 5s | | the patch passed | | +1 :green_heart: | mvnsite | 10m 54s | | the patch passed | | +1 :green_heart: | xml | 0m 32s | | The patch has no ill-formed XML file. | | -1 :x: | javadoc | 0m 26s | [/patch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4151/2/artifact/out/patch-javadoc-root-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | root in the patch failed with JDK Azul Systems, Inc.-1.7.0_262-b10. | | +1 :green_heart: | javadoc | 5m 24s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +0 :ok: | spotbugs | 0m 14s | | hadoop-project has no data from spotbugs | | +0 :ok: | spotbugs | 0m 14s | | hadoop-assemblies has no data from spotbugs | _ Other Tests _ | | -1 :x: | unit | 435m 36s |
[GitHub] [hadoop] GuoPhilipse opened a new pull request, #4160: HDFS-16537.Fix oev decode xml error
GuoPhilipse opened a new pull request, #4160: URL: https://github.com/apache/hadoop/pull/4160 JIRA: HDFS-16537
[jira] [Created] (HADOOP-18197) Update the vulnerable protobuf-java:2.5.0 to a newer version
Ivan Viaznikov created HADOOP-18197: --- Summary: Update the vulnerable protobuf-java:2.5.0 to a newer version Key: HADOOP-18197 URL: https://issues.apache.org/jira/browse/HADOOP-18197 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan Viaznikov The artifact `org.apache.hadoop:hadoop-common` brings in a dependency `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version released in 2013 and it contains a vulnerability [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569]. Therefore, requesting you to clarify if this library version is going to be updated in the following releases -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
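Until Hadoop itself moves off the 2.5.0-era artifact (via the shaded hadoop-thirdparty protobuf, per the discussion above), the usual interim step for downstream consumers is to pin a patched protobuf-java in their own build. A hedged sketch only: the version below is illustrative (the CVE-2021-22569 advisory lists the fixed release lines), and note that forcing a newer protobuf onto Hadoop's classpath can break code compiled against the 2.5.0 API, which is exactly why the shading work exists.

```xml
<!-- Consumer-side POM override, NOT a Hadoop change: pin a patched
     protobuf-java for the consumer's own protobuf usage.
     Version shown is illustrative; verify against the CVE advisory. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>3.19.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```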
[GitHub] [hadoop] hadoop-yetus commented on pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block
hadoop-yetus commented on PR #4104: URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1094839139 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 13s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 18s | | trunk passed | | +1 :green_heart: | compile | 6m 1s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 5m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 25s | | trunk passed | | +1 :green_heart: | javadoc | 1m 52s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 20s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 54s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 1s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 7s | | the patch passed | | +1 :green_heart: | compile | 5m 54s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 54s | | the patch passed | | +1 :green_heart: | compile | 5m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 5m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 6s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) | | +1 :green_heart: | mvnsite | 2m 11s | | the patch passed | | +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 57s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 45s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 229m 20s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. 
| | | | 370m 31s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4104 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 9f7e91d244d9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d7c5bd02a8c6b3bec5a5a950e68cb09d0aa71ed5 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/testReport/ | | Max. process+thread
[GitHub] [hadoop] hadoop-yetus commented on pull request #4159: YARN-10553. Refactor TestDistributedShell (#2581)
hadoop-yetus commented on PR #4159: URL: https://github.com/apache/hadoop/pull/4159#issuecomment-1094810600 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 53s | | branch-3.3 passed | | +1 :green_heart: | compile | 0m 28s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 30s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 0m 34s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 0m 33s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 1m 5s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 23m 51s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 15s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: The patch generated 0 new + 83 unchanged - 11 fixed = 83 total (was 94) | | +1 :green_heart: | mvnsite | 0m 22s | | the patch passed | | +1 :green_heart: | javadoc | 0m 17s | | the patch passed | | +1 :green_heart: | spotbugs | 0m 47s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 54s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 19m 41s | | hadoop-yarn-applications-distributedshell in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. | | | | 114m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4159/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4159 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5eb236c50e42 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / ad851f654b41548a1818daf3c191657cd5ce34eb | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4159/1/testReport/ | | Max. process+thread count | 697 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4159/1/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] Hexiaoqiao commented on pull request #4090: HDFS-16516. Fix Fsshell wrong params
Hexiaoqiao commented on PR #4090: URL: https://github.com/apache/hadoop/pull/4090#issuecomment-1094669958 Committed to trunk. Thanks @GuoPhilipse for your contribution, and thanks @tomscut @cndaimin for your reviews.
[GitHub] [hadoop] Hexiaoqiao merged pull request #4090: HDFS-16516. Fix Fsshell wrong params
Hexiaoqiao merged PR #4090: URL: https://github.com/apache/hadoop/pull/4090
[GitHub] [hadoop] aajisaka opened a new pull request, #4159: YARN-10553. Refactor TestDistributedShell (#2581)
aajisaka opened a new pull request, #4159: URL: https://github.com/apache/hadoop/pull/4159 (cherry picked from commit 890f2da624465473a5f401a3bcfc4bbd068289a1) Conflicts: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSWithMultipleNodeManager.java hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java ### Description of PR Backport YARN-10553 to branch-3.3. Fixed conflicts in TestDSWithMultipleNodeManager because YARN-10360 is only in trunk. ### How was this patch tested? Manually ran the related tests but TestDSTimelineV20 failed. I'll investigate why the test is failing: ``` [ERROR] Tests run: 6, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 422.012 s <<< FAILURE! - in org.apache.hadoop.yarn.applications.distributedshell.TestDSTimelineV20 [ERROR] testDSShellWithoutDomain(org.apache.hadoop.yarn.applications.distributedshell.TestDSTimelineV20) Time elapsed: 73.413 s <<< FAILURE! java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at org.junit.Assert.assertTrue(Assert.java:42) at org.junit.Assert.assertTrue(Assert.java:53) at org.apache.hadoop.yarn.applications.distributedshell.TestDSTimelineV20.verifyEntityTypeFileExists(TestDSTimelineV20.java:478) ... ``` ### For code changes: - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - n/a Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - n/a If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- n/a If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] cndaimin commented on pull request #4090: HDFS-16516. Fix Fsshell wrong params
cndaimin commented on PR #4090: URL: https://github.com/apache/hadoop/pull/4090#issuecomment-1094619991 LGTM
[GitHub] [hadoop] cndaimin commented on pull request #4090: HDFS-16516. Fix Fsshell wrong params
cndaimin commented on PR #4090: URL: https://github.com/apache/hadoop/pull/4090#issuecomment-1094618474 I have some concerns about changing the FsShell API as well, since some ops tooling might rely on it.
[GitHub] [hadoop] singer-bin commented on pull request #4156: HDFS-16457.Make fs.getspaceused.classname reconfigurable (apache#4069)
singer-bin commented on PR #4156: URL: https://github.com/apache/hadoop/pull/4156#issuecomment-1094609750 Also thanks to @tasanuma for the review and suggestions, thank you. I also have a JIRA, [HDFS-16525](https://issues.apache.org/jira/browse/HDFS-16525), if you are interested in taking a look.
[jira] [Comment Edited] (HADOOP-18191) Log retry count while handling exceptions in RetryInvocationHandler
[ https://issues.apache.org/jira/browse/HADOOP-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17520326#comment-17520326 ]

Viraj Jasani edited comment on HADOOP-18191 at 4/11/22 6:35 AM:
Thank you [~tasanuma] and [~ste...@apache.org] for your reviews!

was (Author: vjasani): Thank you [~tasanuma] !

> Log retry count while handling exceptions in RetryInvocationHandler
> -------------------------------------------------------------------
>
>                 Key: HADOOP-18191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18191
>             Project: Hadoop Common
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>          Time Spent: 1h
>   Remaining Estimate: 0h
>
> As part of failure handling in RetryInvocationHandler, we log details of the
> Exception with which the API was invoked, the failover attempts, and the delay.
> For better debugging as well as fine-tuning of retry params, it would be
> good to also log the retry count that we already maintain in the Counter object.

-- This message was sent by Atlassian Jira (v8.20.1#820001) To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18191) Log retry count while handling exceptions in RetryInvocationHandler
[ https://issues.apache.org/jira/browse/HADOOP-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17520326#comment-17520326 ]

Viraj Jasani commented on HADOOP-18191:
Thank you [~tasanuma] !
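What HADOOP-18191 asks for — including the already-maintained retry count in the failure log line alongside the failover attempts and delay — can be sketched roughly as below. This is a minimal illustration with hypothetical names (`RetryLogSketch`, `onFailure`), not the actual RetryInvocationHandler code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch: a per-handler counter (standing in for the Counter object
// the JIRA mentions) whose value is appended to the failure message that
// previously reported only the method, failover attempts, and delay.
public class RetryLogSketch {
    private final AtomicInteger retryCount = new AtomicInteger();

    /** Builds the log message for one failed invocation. */
    String onFailure(String method, int failovers, long delayMs) {
        int retries = retryCount.incrementAndGet();
        return String.format(
            "Exception while invoking %s. Retry attempt %d, failover attempts %d, delaying for %d ms.",
            method, retries, failovers, delayMs);
    }
}
```

The counter lives on the handler, so successive failures of the same logical call produce increasing retry counts, which is exactly the signal useful for tuning retry parameters.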
[GitHub] [hadoop] tasanuma commented on a diff in pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread
tasanuma commented on code in PR #4032: URL: https://github.com/apache/hadoop/pull/4032#discussion_r846985459

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java:

```
@@ -248,13 +250,18 @@ public void run() {
           pendingWorkForDirectory.get(startINode);
       if (dirPendingWorkInfo != null
           && dirPendingWorkInfo.isDirWorkDone()) {
-        ctxt.removeSPSHint(startINode);
         pendingWorkForDirectory.remove(startINode);
+        ctxt.removeSPSHint(startINode);
```

Review Comment: @liubingxing I still prefer to catch the FileNotFoundException here. What do you think?
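The catch the reviewer prefers could look like the following minimal sketch. The `Context` interface and `removeSPSHint` signature here only mirror the names in the review thread; this is not the actual Hadoop patch.

```java
import java.io.FileNotFoundException;

public class RemoveHintSketch {
    // Hypothetical stand-in for the SPS context used in the thread.
    interface Context {
        void removeSPSHint(long inodeId) throws FileNotFoundException;
    }

    /** Returns true if the hint was removed, false if the file was already gone. */
    static boolean removeHintQuietly(Context ctxt, long inodeId) {
        try {
            ctxt.removeSPSHint(inodeId);
            return true;
        } catch (FileNotFoundException e) {
            // The file was deleted concurrently, so there is no hint left to
            // remove; swallowing the exception keeps the scanning thread alive
            // instead of retrying the same inode forever.
            return false;
        }
    }
}
```

The point of catching the exception close to the call, rather than in the thread's outer `catch (Throwable t)`, is that a missing file is an expected race, not an error that should trigger the retry path.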
[GitHub] [hadoop] cndaimin commented on a diff in pull request #3982: HDFS-16454:fix inconsistent comments in DataNode
cndaimin commented on code in PR #3982: URL: https://github.com/apache/hadoop/pull/3982#discussion_r846984707

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:

```
@@ -2227,14 +2227,14 @@ public void shutdown() {
     // wait reconfiguration thread, if any, to exit
     shutdownReconfigurationTask();
-    LOG.info("Waiting up to 30 seconds for transfer threads to complete");
+    LOG.info("Waiting up to 15 seconds for transfer threads to complete");
```

Review Comment: `executorService.awaitTermination(timeout, unit)` may be invoked twice in `HadoopExecutors#shutdown`. The time value in the log looks OK.
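The "invoked twice" point follows the standard two-phase `ExecutorService` shutdown pattern, under which the caller can block for up to twice the configured timeout. A minimal sketch of that pattern (an illustration of the general idiom, not the actual `HadoopExecutors#shutdown` code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class TwoPhaseShutdown {
    /**
     * Two-phase shutdown: wait up to {@code timeout} for an orderly stop,
     * then force-cancel and wait up to {@code timeout} again. In the worst
     * case the caller blocks for roughly 2 * timeout, which is how a
     * 15-second timeout can correspond to a "waiting up to 30 seconds" log line.
     */
    static boolean shutdown(ExecutorService pool, long timeout, TimeUnit unit) {
        pool.shutdown(); // stop accepting new tasks
        try {
            if (!pool.awaitTermination(timeout, unit)) { // first wait
                pool.shutdownNow(); // interrupt still-running tasks
                return pool.awaitTermination(timeout, unit); // second wait
            }
            return true;
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```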
[jira] [Resolved] (HADOOP-18191) Log retry count while handling exceptions in RetryInvocationHandler
[ https://issues.apache.org/jira/browse/HADOOP-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HADOOP-18191.
Fix Version/s: 3.4.0, 2.10.2, 3.2.4, 3.3.3
Resolution: Fixed

Resolved. Thanks for your contribution, [~vjasani].
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HADOOP-17116:
Fix Version/s: 2.10.2, 3.2.4, 3.3.3

> Skip Retry INFO logging on first failover from a proxy
> ------------------------------------------------------
>
>                 Key: HADOOP-17116
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17116
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ha
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>             Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>
>         Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch, HADOOP-17116.003.patch
>
> RetryInvocationHandler logs an INFO-level message on every failover except
> the first. This used to be ideal when there were only 2 proxies in the
> FailoverProxyProvider. But if there are more than 2 proxies (as is possible
> with 3 or more NNs in HA), then more than one failover may be needed to find
> the currently active proxy.
> To avoid creating noise in client logs/consoles, RetryInvocationHandler
> should skip logging once for each proxy.
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17520319#comment-17520319 ]

Takanobu Asanuma commented on HADOOP-17116:
This fix is a good improvement, and I'd like to backport it to the lower branches.
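The behavior HADOOP-17116 describes — suppressing the INFO message exactly once per proxy, because the first failover to each proxy is expected while probing for the active NameNode — can be sketched as below. Class and method names (`FailoverLogFilter`, `shouldLog`) are hypothetical; this is not the actual RetryInvocationHandler change.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailoverLogFilter {
    // Remembers which proxies have already had their first failover suppressed.
    private final Map<String, Boolean> firstFailoverSeen = new ConcurrentHashMap<>();

    /**
     * Returns true if this failover should be logged at INFO level.
     * The first failover to each proxy is skipped: with 3 or more NameNodes,
     * several failovers may be needed just to locate the active one, and
     * logging each of them only adds noise to client consoles.
     */
    boolean shouldLog(String proxyName) {
        // putIfAbsent returns null on first insertion, i.e. first failover.
        return firstFailoverSeen.putIfAbsent(proxyName, Boolean.TRUE) != null;
    }
}
```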
[GitHub] [hadoop] tasanuma commented on pull request #4156: HDFS-16457.Make fs.getspaceused.classname reconfigurable (apache#4069)
tasanuma commented on PR #4156: URL: https://github.com/apache/hadoop/pull/4156#issuecomment-1094582191

@singer-bin Thanks for your contribution!