[PR] improvement: yetus exit [hadoop]

2024-04-17 Thread via GitHub


chenshuai1995 opened a new pull request, #6744:
URL: https://github.com/apache/hadoop/pull/6744

   
   
   ### Description of PR
   1. If a local YETUS_HOME exists, use it directly and exit after execution. Without the exit, the script needlessly continues into the later branches.
   2. If Yetus has already been downloaded locally, use that copy directly and exit after execution. Again, falling through to the next branch is unnecessary.
   3. Only if neither of the above applies should the script download Yetus, execute it, and wait for the run to complete before exiting (see the sketch below).
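
   A minimal Java sketch of this control flow (the actual patch edits a shell script, and all names here are hypothetical): the point is that once a usable Yetus copy is found and run, the script must exit rather than fall through to the next branch.

   ```java
   public class YetusDispatchSketch {
     static boolean localYetusHome = true;   // stand-in for the YETUS_HOME check
     static boolean downloadedCopy = false;  // stand-in for the local-download check

     static String runYetus() {
       if (localYetusHome) {
         return "ran the YETUS_HOME copy";            // case 1: use it and exit
       }
       if (downloadedCopy) {
         return "ran the previously downloaded copy"; // case 2: use it and exit
       }
       return "downloaded Yetus, ran it, waited";     // case 3: fetch, run, exit
     }

     public static void main(String[] args) {
       System.out.println(runYetus());
     }
   }
   ```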
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





Re: [PR] HDFS-17459. [FGL] Add documentation [hadoop]

2024-04-17 Thread via GitHub


kokonguyen191 commented on code in PR #6737:
URL: https://github.com/apache/hadoop/pull/6737#discussion_r1568339380


##
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/NamenodeFGL.md:
##
@@ -0,0 +1,201 @@
+
+
+HDFS Namenode Fine-grained Locking
+==
+
+
+
+Overview
+
+
+HDFS relies on a single master, the Namenode (NN), as its metadata center.
+From an architectural point of view, a few elements make NN the bottleneck of 
an HDFS cluster:
+* NN keeps the entire namespace in memory (directory tree, blocks, Datanode 
related info, etc.)
+* Read requests (`getListing`, `getFileInfo`, `getBlockLocations`) are served 
from memory.
+Write requests (`mkdir`, `create`, `addBlock`, `complete`) update the memory 
state and write a journal transaction into QJM.
+Both types of requests need a locking mechanism to ensure data consistency and 
correctness.
+* All requests are funneled into NN and have to go through the global FS lock.
+Each write operation acquires this lock in write mode and holds it until that 
operation is executed.
+This lock mode prevents concurrent execution of write operations even if they 
involve different branches of the directory tree.
+
+The NN fine-grained locking (FGL) implementation aims to alleviate this bottleneck by allowing disjoint write operations to execute concurrently.
+
+JIRA: [HDFS-17366](https://issues.apache.org/jira/browse/HDFS-17366)
+
+Design
+--
+In theory, fully independent operations, such as those touching different subdirectory trees, can be processed concurrently.
+As such, the NN can split the global lock into full-path locks, with each full-path lock protecting a specific subdirectory tree (a sketch of this idea follows the table below).
+
+### RPC Categorization
+
+Roughly, the RPC operations handled by the NN can be divided into 8 main categories:
+
+| Category                               | Operations |
+|----------------------------------------|------------|
+| Involving namespace tree               | `mkdir`, `create` (without overwrite), `getFileInfo` (without locations), `getListing` (without locations), `setOwner`, `setPermission`, `getStoragePolicy`, `setStoragePolicy`, `rename`, `isFileClosed`, `getFileLinkInfo`, `setTimes`, `modifyAclEntries`, `removeAclEntries`, `setAcl`, `getAcl`, `setXAttr`, `getXAttrs`, `listXAttrs`, `removeXAttr`, `checkAccess`, `getErasureCodingPolicy`, `unsetErasureCodingPolicy`, `getQuotaUsage`, `getPreferredBlockSize` |
+| Involving only blocks                  | `reportBadBlocks`, `updateBlockForPipeline`, `updatePipeline` |
+| Involving only DNs                     | `registerDatanode`, `setBalancerBandwidth`, `sendHeartbeat` |
+| Involving both namespace tree & blocks | `getBlockLocation`, `create` (with overwrite), `append`, `setReplication`, `abandonBlock`, `addBlock`, `getAdditionalDatanode`, `complete`, `concat`, `truncate`, `delete`, `getListing` (with locations), `getFileInfo` (with locations), `recoverLease`, `listCorruptFileBlocks`, `fsync`, `commitBlockSynchronization`, `RedundancyMonitor`, `processMisReplicatedBlocks` |
+| Involving both DNs & blocks            | `getBlocks`, `errorReport` |
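A minimal sketch of the design idea above, assuming nothing about the actual HDFS-17366 implementation (class and method names are hypothetical): one lock per full path, so writes under disjoint subtrees no longer serialize on a single global lock.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PathLockSketch {
  private final Map<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReadWriteLock lockFor(String fullPath) {
    // One lock per full path instead of one global FS lock.
    return locks.computeIfAbsent(fullPath, p -> new ReentrantReadWriteLock());
  }

  public void mkdir(String fullPath) {
    ReadWriteLock lock = lockFor(fullPath);
    lock.writeLock().lock(); // write-locks only this subtree's path
    try {
      // ... mutate the namespace under fullPath ...
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    PathLockSketch ns = new PathLockSketch();
    ns.mkdir("/a/b"); // takes a different lock than /x/y, so on separate
    ns.mkdir("/x/y"); // threads these writes could proceed concurrently
  }
}
```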

Re: [PR] HADOOP-19151. Support configurable SASL mechanism. [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6740:
URL: https://github.com/apache/hadoop/pull/6740#issuecomment-2060592814

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |  10m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 51s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6740/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 4 new + 138 unchanged - 2 fixed = 142 total (was 
140)  |
   | +1 :green_heart: |  mvnsite  |   5m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |  11m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  19m 31s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6740/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 228m 26s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 120m 54s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 641m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestSaslRPC |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6740/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6740 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6a002d7875ff 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 
20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 99e0caa7d2eabf3a234eaaadcd8c265ed00fca75 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubun

Re: [PR] HDFS-17471. Correct the percentage of file I/O events. [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6742:
URL: https://github.com/apache/hadoop/pull/6742#issuecomment-2060605281

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 198m 47s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 295m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6742/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6742 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5859d9b23df2 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cb17014faf1a2d66d75971584b365a0d7d60ce47 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6742/1/testReport/ |
   | Max. process+thread count | 4361 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6742/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568391467


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -817,8 +817,8 @@ public AbfsInputStream openFileForRead(Path path,
   FileStatus fileStatus = parameters.map(OpenFileParameters::getStatus)
   .orElse(null);
   String relativePath = getRelativePath(path);
-  String resourceType, eTag;
-  long contentLength;
+  String resourceType = null, eTag = null;

Review Comment:
   This is to prevent a compilation error: these two variables are used to create 
the inputStream, but they may or may not be initialized on every code path, so 
they have to be set to null at the start. If they were guaranteed to always be 
initialized, the null initialization would not be needed.
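
   A minimal self-contained sketch (hypothetical names, not the ABFS code) of the definite-assignment rule behind this: javac rejects reading a local variable that is not assigned on every path, which is why the null initialization is required.

   ```java
   public class DefiniteAssignmentSketch {
     static String open(boolean statusKnown) {
       String resourceType = null, eTag = null; // without "= null" javac reports
       if (statusKnown) {                       // "variable eTag might not have
         resourceType = "file";                 // been initialized" at the return,
         eTag = "0x1";                          // because this branch may be skipped
       }
       return resourceType + "/" + eTag;
     }

     public static void main(String[] args) {
       System.out.println(open(false)); // prints "null/null"
     }
   }
   ```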






[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838018#comment-17838018
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568391467


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -817,8 +817,8 @@ public AbfsInputStream openFileForRead(Path path,
   FileStatus fileStatus = parameters.map(OpenFileParameters::getStatus)
   .orElse(null);
   String relativePath = getRelativePath(path);
-  String resourceType, eTag;
-  long contentLength;
+  String resourceType = null, eTag = null;

Review Comment:
   This is to prevent a compilation error: these two variables are used to create 
the inputStream, but they may or may not be initialized on every code path, so 
they have to be set to null at the start. If they were guaranteed to always be 
initialized, the null initialization would not be needed.





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The read API returns the contentLen and eTag of the path. This information is 
> used in subsequent calls on that inputStream, so fetching the eTag up front is 
> of little importance.






Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568395910


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1099,7 +1099,9 @@ public AbfsRestOperation read(final String path,
 AbfsHttpHeader rangeHeader = new AbfsHttpHeader(RANGE,
 String.format("bytes=%d-%d", position, position + bufferLength - 1));
 requestHeaders.add(rangeHeader);
-requestHeaders.add(new AbfsHttpHeader(IF_MATCH, eTag));
+if (eTag == null || !eTag.isEmpty()) {

Review Comment:
   That's a good catch! Corrected.
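
   For context, a hedged sketch of the guard under discussion; the corrected code is not shown in this thread, so the condition below is an assumption about the intended fix. The quoted `eTag == null || !eTag.isEmpty()` would attach an If-Match header even when no eTag is known; the likely intent is to send it only for a non-empty eTag.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class IfMatchGuardSketch {
     public static void main(String[] args) {
       List<String> headers = new ArrayList<>();
       String eTag = null;  // lazy-HEAD case: no eTag known before the first read
       // Assumed corrected guard: attach If-Match only when a real eTag exists.
       if (eTag != null && !eTag.isEmpty()) {
         headers.add("If-Match: " + eTag);
       }
       System.out.println(headers);  // prints [] -- no conditional header sent
     }
   }
   ```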







Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


ayushtkn commented on code in PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#discussion_r1568441923


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java:
##
@@ -60,6 +62,9 @@ public class TestFTPFileSystem {
   @Rule
   public Timeout testTimeout = new Timeout(18, TimeUnit.MILLISECONDS);
 
+  @Rule
+  public TestName name = new TestName();
+

Review Comment:
   You could have put the actual test name here; the string "testName" is 
non-indicative.
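
   A minimal JUnit 4 sketch of the `TestName` rule in question (the test method name is hypothetical): `name.getMethodName()` yields the currently running test's name, avoiding hard-coded strings.

   ```java
   import static org.junit.Assert.assertEquals;

   import org.junit.Rule;
   import org.junit.Test;
   import org.junit.rules.TestName;

   public class TestNameRuleSketch {
     @Rule
     public TestName name = new TestName();

     @Test
     public void testRenameWithFullQualifiedPath() {
       // The rule captures the actual method name at runtime.
       assertEquals("testRenameWithFullQualifiedPath", name.getMethodName());
     }
   }
   ```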






[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838036#comment-17838036
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

ayushtkn commented on code in PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#discussion_r1568441923


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java:
##
@@ -60,6 +62,9 @@ public class TestFTPFileSystem {
   @Rule
   public Timeout testTimeout = new Timeout(18, TimeUnit.MILLISECONDS);
 
+  @Rule
+  public TestName name = new TestName();
+

Review Comment:
   You could have put the actual test name here; the string "testName" is 
non-indicative.





> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a fully 
> qualified path (e.g. ftp://user:password@localhost/pathxxx), it always fails 
> with "Input/output error". The reason is that the underlying 
> changeWorkingDirectory command is passed a string with a file:// URI prefix, 
> which the FTP server does not understand.
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> In our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files")
> executed, the working directory of the FTP server is still "/", which is 
> incorrect (the command was not understood by the FTP server).
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> The solution is to pass absoluteSrc.getParent().toUri().getPath() to avoid
> the file:// URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue already exists: 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I created this issue and added a related unit test.
>  
>  
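> A minimal sketch of the java.net.URI behavior behind the proposed fix 
> (assuming plain JDK behavior, not the Hadoop Path class): toString() keeps 
> the scheme and authority, which the FTP server's CWD command cannot parse, 
> while getPath() yields only the server-side path.
> {code:java}
> import java.net.URI;
>
> public class UriPathSketch {
>   public static void main(String[] args) {
>     URI parent = URI.create("ftp://user:password@localhost/files");
>     // toString() keeps scheme and authority; CWD cannot parse this:
>     System.out.println(parent);            // ftp://user:password@localhost/files
>     // getPath() yields only the server-side path, which CWD understands:
>     System.out.println(parent.getPath());  // /files
>   }
> }{code}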






Re: [PR] HDFS-17457. [FGL] UTs support fine-grained locking [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6741:
URL: https://github.com/apache/hadoop/pull/6741#issuecomment-2060740027

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 40 new or modified test files.  |
    _ HDFS-17384 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 46s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   8m 49s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  3s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  8s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   0m 54s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6741/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in HDFS-17384 has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  4s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6741/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 356 unchanged - 1 fixed = 357 total (was 
357)  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 203m 14s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  unit  |  29m 42s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6741/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-fs2img in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 381m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6741/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6741 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4b84f807fc73 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Li

Re: [PR] improvement: yetus exit [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6744:
URL: https://github.com/apache/hadoop/pull/6744#issuecomment-2060742687

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  34m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  32m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  99m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6744/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6744 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs |
   | uname | Linux 6c145f2766ba 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 251fc9167d93edcd53c6e12bb29dc54462763cbd |
   | Max. process+thread count | 564 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6744/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838072#comment-17838072
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568558190


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -376,32 +439,48 @@ private int readLastBlock(final byte[] b, final int off, 
final int len)
 // data need to be copied to user buffer from index bCursor,
 // AbfsInutStream buffer is going to contain data from last block start. In
 // that case bCursor will be set to fCursor - lastBlockStart
-long lastBlockStart = max(0, contentLength - footerReadSize);
+if (!fileStatusInformationPresent.get()) {
+  long lastBlockStart = max(0, (fCursor + len) - footerReadSize);
+  bCursor = (int) (fCursor - lastBlockStart);
+  return optimisedRead(b, off, len, lastBlockStart, min(fCursor + len, 
footerReadSize), true);
+}
+long lastBlockStart = max(0, getContentLength() - footerReadSize);
 bCursor = (int) (fCursor - lastBlockStart);
 // 0 if contentlength is < buffersize
-long actualLenToRead = min(footerReadSize, contentLength);
-return optimisedRead(b, off, len, lastBlockStart, actualLenToRead);
+long actualLenToRead = min(footerReadSize, getContentLength());
+return optimisedRead(b, off, len, lastBlockStart, actualLenToRead, false);
   }
 
   private int optimisedRead(final byte[] b, final int off, final int len,
-  final long readFrom, final long actualLen) throws IOException {
+  final long readFrom, final long actualLen,
+  final boolean isReadWithoutContentLengthInformation) throws IOException {
 fCursor = readFrom;
 int totalBytesRead = 0;
 int lastBytesRead = 0;
 try {
   buffer = new byte[bufferSize];
+  boolean fileStatusInformationPresentBeforeRead = 
fileStatusInformationPresent.get();
   for (int i = 0;
-   i < MAX_OPTIMIZED_READ_ATTEMPTS && fCursor < contentLength; i++) {
+   i < MAX_OPTIMIZED_READ_ATTEMPTS && 
(!fileStatusInformationPresent.get()
+   || fCursor < getContentLength()); i++) {

Review Comment:
   The content length is not available for the first optimized read when the 
lazy HEAD optimization is used in the inputStream. In that case, the first 
optimized read is performed without the contentLength constraint. After that 
first call, the contentLength is present and should be used for further reads.
   
   Have added this as a comment.
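   
   A minimal sketch (hypothetical names, not the AbfsInputStream code) of the loop guard described above: until the file status is known, the read loop is not bounded by contentLength; once the first response supplies the length, the bound applies.
   {code:java}
   import java.util.concurrent.atomic.AtomicBoolean;

   public class LazyLengthReadSketch {
     private final AtomicBoolean fileStatusInformationPresent = new AtomicBoolean(false);
     private long contentLength;  // unknown until the first remote read responds
     private long fCursor;        // current file position

     // Stand-in for a remote read whose response also carries the file length.
     private int remoteRead() {
       fileStatusInformationPresent.set(true);
       contentLength = 100;       // learned from the first response
       return 40;                 // bytes returned by this call
     }

     int readLoop(int maxAttempts) {
       int total = 0;
       // Until the status is known, the loop is not bounded by contentLength;
       // once known, it stops when the cursor reaches the length.
       for (int i = 0; i < maxAttempts
           && (!fileStatusInformationPresent.get() || fCursor < contentLength); i++) {
         int n = remoteRead();
         fCursor += n;
         total += n;
       }
       return total;
     }

     public static void main(String[] args) {
       // Three 40-byte reads run before fCursor (120) passes contentLength (100);
       // a real reader would clamp the final read to the remaining bytes.
       System.out.println(new LazyLengthReadSketch().readLoop(10)); // prints 120
     }
   }{code}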
   





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The read API returns the contentLen and eTag of the path. This information is 
> used in subsequent calls on that inputStream, so fetching the eTag up front is 
> of little importance.






Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2060770546

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  17m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 15s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 4 
unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 40s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 225m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6678 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c8bd749db5be 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2b33a3a3c59af27f6fc0eb0dda892f016861c3d |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/4/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generat

Re: [PR] HDFS-17469. Audit log for reportBadBlocks RPC [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6731:
URL: https://github.com/apache/hadoop/pull/6731#issuecomment-2060830075

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 204m 16s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 301m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6731 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8781f77f186c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 17b822eb25e449b13a908eebf6c7a15628356b8c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/2/testReport/ |
   | Max. process+thread count | 4630 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-04-17 Thread via GitHub


Neilxzn commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-2060941447

   > @Neilxzn Hi, this patch is very useful, would you mind further fixing this 
PR?
   
   Sorry for my late reply.  I have updated the patch based on the suggestions 
above. Please review it again. @haiyang1987 @zhangshuyan0 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17472. [FGL] gcDeletedSnapshot and getDelegationToken support FGL [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6743:
URL: https://github.com/apache/hadoop/pull/6743#issuecomment-2060784984

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m  5s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  35m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 228m 40s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 380m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6743/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6743 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8d4af09480ea 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 7f82bcf7a0ba76485b6125df7cd40a73f42c1f37 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6743/1/testReport/ |
   | Max. process+thread count | 3511 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6743/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[PR] HDFS-17475. Add verifyReadable command to check if files are readable [hadoop]

2024-04-17 Thread via GitHub


kokonguyen191 opened a new pull request, #6745:
URL: https://github.com/apache/hadoop/pull/6745

   ### Description of PR
   
   Sometimes a job can fail down the line because of a single unreadable file, 
caused by missing replicas, dead DNs, or other reasons. This command allows 
users to check whether files are readable by checking for block metadata on 
the DNs, without executing the full read pipelines of the files (a rough, 
hedged sketch of the idea follows below).
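
   As a rough, hedged sketch of the idea only (not this PR's actual 
implementation, which also probes block metadata on the DataNodes), a 
client-side readability check against the public HDFS client API could look 
like the following; the class and method names are hypothetical:

       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.hdfs.DistributedFileSystem;
       import org.apache.hadoop.hdfs.protocol.LocatedBlock;
       import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

       // Hypothetical helper: checks the NameNode's view of block
       // availability for a path without running the full read pipeline.
       public class ReadabilityProbe {
         public static boolean looksReadable(DistributedFileSystem fs, Path p)
             throws Exception {
           long len = fs.getFileStatus(p).getLen();
           LocatedBlocks blocks =
               fs.getClient().getLocatedBlocks(p.toUri().getPath(), 0, len);
           for (LocatedBlock b : blocks.getLocatedBlocks()) {
             // A block with no live locations, or marked corrupt, cannot be
             // served to a reader.
             if (b.getLocations().length == 0 || b.isCorrupt()) {
               return false;
             }
           }
           return true;
         }
       }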
   
   ### How was this patch tested?
   
   Unit tests, local deployment, production. Also tested for performance.
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17459. [FGL] Add documentation [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6737:
URL: https://github.com/apache/hadoop/pull/6737#issuecomment-2060836090

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  52m 21s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  94m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6737/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6737 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint |
   | uname | Linux 9b370944cfc2 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 1ac8f4f7e1a76dc31492f3b27e8fb3f61b3b992f |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6737/3/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


anujmodi2021 opened a new pull request, #6746:
URL: https://github.com/apache/hadoop/pull/6746

   ## Description of PR
   Jira: https://issues.apache.org/jira/browse/HADOOP-19129
   Merged PR in trunk: https://github.com/apache/hadoop/pull/6676
   
   The test script used by ABFS to validate changes has the following two 
issues:
   
   1. When there are a lot of test failures, or when the error message of a 
failing test becomes very large, the regex used today to filter test results 
does not work as expected and fails to report all the failing tests. To 
resolve this, we have come up with a new regex that only targets one-line test 
names when reporting them into the aggregated test results (a hedged sketch of 
this idea follows the description below).
   2. While running the test suite for different combinations of auth type and 
account type, we add the combination-specific configs first and then include 
the account-specific configs in the core-site.xml file. This lets the 
account-specific config file override combination-specific configs such as the 
auth type when the same config is present in both. To avoid this, we will 
first include the account-specific configs and then add the 
combination-specific configs.
   
   Some tests were reported to be failing on OSS trunk, including those 
reported via the following Jiras:
   https://issues.apache.org/jira/browse/HADOOP-19110
   https://issues.apache.org/jira/browse/HADOOP-19106
   
   For details of the failing tests and their fixes, please refer to the Jira 
description:
   https://issues.apache.org/jira/browse/HADOOP-19129
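
   Purely as a hedged illustration of the "one-line test names" idea (the 
real script and its regex are in the PR; the output format assumed here is 
hypothetical), a filter of that shape could look like:

       import java.util.ArrayList;
       import java.util.List;
       import java.util.regex.Pattern;

       public class TestNameFilter {
         // Assumes failing tests are reported on single lines of the form
         // "ClassName.methodName"; anything spanning multiple lines (such as
         // a long error message) will simply not match.
         private static final Pattern TEST_NAME = Pattern.compile(
             "^\\s*[A-Za-z_$][\\w$]*(\\.[A-Za-z_$][\\w$]*)+\\s*$");

         public static List<String> failingTests(List<String> outputLines) {
           List<String> names = new ArrayList<>();
           for (String line : outputLines) {
             if (TEST_NAME.matcher(line).matches()) {
               names.add(line.trim());
             }
           }
           return names;
         }
       }
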
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838096#comment-17838096
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 opened a new pull request, #6746:
URL: https://github.com/apache/hadoop/pull/6746

   ## Description of PR
   Jira: https://issues.apache.org/jira/browse/HADOOP-19129
   Merged PR in trunk: https://github.com/apache/hadoop/pull/6676
   
   The test script used by ABFS to validate changes has the following two 
issues:
   
   1. When there are a lot of test failures, or when the error message of a 
failing test becomes very large, the regex used today to filter test results 
does not work as expected and fails to report all the failing tests. To 
resolve this, we have come up with a new regex that only targets one-line test 
names when reporting them into the aggregated test results.
   2. While running the test suite for different combinations of auth type and 
account type, we add the combination-specific configs first and then include 
the account-specific configs in the core-site.xml file. This lets the 
account-specific config file override combination-specific configs such as the 
auth type when the same config is present in both. To avoid this, we will 
first include the account-specific configs and then add the 
combination-specific configs.
   
   Some tests were reported to be failing on OSS trunk, including those 
reported via the following Jiras:
   https://issues.apache.org/jira/browse/HADOOP-19110
   https://issues.apache.org/jira/browse/HADOOP-19106
   
   For details of the failing tests and their fixes, please refer to the Jira 
description:
   https://issues.apache.org/jira/browse/HADOOP-19129
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> The test script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures or when the error message of a failing 
> test becomes very large, the regex used today to filter test results does not 
> work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that only targets one-line 
> test names when reporting them into the aggregated test results.
>  # While running the test suite for different combinations of auth type and 
> account type, we add the combination-specific configs first and then include 
> the account-specific configs in the core-site.xml file. This lets the 
> account-specific config file override combination-specific configs such as the 
> auth type when the same config is present in both. To avoid this, we will 
> first include the account-specific configs and then add the 
> combination-specific configs.
> Due to the above bug in the test script, some test failures in ABFS were not 
> getting our attention. This PR also aims to resolve them. The following tests 
> were fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In the case of append blobs 
> we were not closing the active block on outputstream.close(), due to which 
> block.close() was not getting called and the assertions around it were 
> failing. Fixed by updating the production code to close the active block on 
> flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in the account settings using 
> the following config: "fs.contract.test.fs.abfs". Tests were failing with an 
> NPE when this config was not present. Updated the code to skip these tests if 
> the required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently only for HNS-enabled accounts. Test wants to assert 
> that client.listPath(

Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568646953


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -582,6 +701,29 @@ int readRemote(long position, byte[] b, int offset, int 
length, TracingContext t
 return (int) bytesRead;
   }
 
+  private void initPathPropertiesFromReadPathResponseHeader(final 
AbfsHttpOperation op) throws IOException {
+if (DIRECTORY.equals(
+op.getResponseHeader(HttpHeaderConfigurations.X_MS_RESOURCE_TYPE))) {
+  throw new FileNotFoundException(
+  "read must be used with files and not directories. Path: " + path);
+}
+contentLength = parseFromRange(
+op.getResponseHeader(HttpHeaderConfigurations.CONTENT_RANGE));
+eTag = op.getResponseHeader(HttpHeaderConfigurations.ETAG);
+fileStatusInformationPresent.set(true);

Review Comment:
   Content-Range is expected to be in the format `<start>-<end>/<contentLength>`.
   If the server does not follow this format, the content length parsed from it 
is marked as -1. Keeping it as -1 would stop future reads on the inputStream 
from happening. 
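
   As a hedged sketch of that parsing rule (the patch's actual helper is 
`parseFromRange`; this standalone version and its names are illustrative):

       // Parse the total length out of a Content-Range header of the form
       // "bytes <start>-<end>/<contentLength>". Returns -1 when the header
       // is missing or malformed; -1 then acts as the sentinel that stops
       // further reads, as described above.
       public final class ContentRangeParser {
         private static final long UNKNOWN_LENGTH = -1L;

         public static long totalLength(String contentRange) {
           if (contentRange == null) {
             return UNKNOWN_LENGTH;
           }
           int slash = contentRange.lastIndexOf('/');
           if (slash < 0 || slash == contentRange.length() - 1) {
             return UNKNOWN_LENGTH;
           }
           try {
             return Long.parseLong(contentRange.substring(slash + 1).trim());
           } catch (NumberFormatException e) {
             return UNKNOWN_LENGTH;
           }
         }
       }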



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838098#comment-17838098
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568646953


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -582,6 +701,29 @@ int readRemote(long position, byte[] b, int offset, int 
length, TracingContext t
 return (int) bytesRead;
   }
 
+  private void initPathPropertiesFromReadPathResponseHeader(final 
AbfsHttpOperation op) throws IOException {
+if (DIRECTORY.equals(
+op.getResponseHeader(HttpHeaderConfigurations.X_MS_RESOURCE_TYPE))) {
+  throw new FileNotFoundException(
+  "read must be used with files and not directories. Path: " + path);
+}
+contentLength = parseFromRange(
+op.getResponseHeader(HttpHeaderConfigurations.CONTENT_RANGE));
+eTag = op.getResponseHeader(HttpHeaderConfigurations.ETAG);
+fileStatusInformationPresent.set(true);

Review Comment:
   Content-Range is expected to be in the format `<start>-<end>/<contentLength>`.
   If the server does not follow this format, the content length parsed from it 
is marked as -1. Keeping it as -1 would stop future reads on the inputStream 
from happening. 





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The Read API gives the contentLen and eTag of the path. This information is 
> used in future calls on that inputStream, so prior knowledge of the eTag is 
> not of much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838097#comment-17838097
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2060977805

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  36m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javac  |  16m 25s | 
[/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 35 unchanged - 0 
fixed = 36 total (was 35)  |
   | +1 :green_heart: |  compile  |  15m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  javac  |  15m 51s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 44 unchanged - 0 
fixed = 45 total (was 44)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 10s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 17 unchanged - 0 fixed = 19 total (was 
17)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 21s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 

Re: [PR] HDFS-17367. Add PercentUsed for Different StorageTypes in JMX [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6735:
URL: https://github.com/apache/hadoop/pull/6735#issuecomment-2060990212

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   6m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   6m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 46s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 231m 12s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 484m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6735 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 5e5a43553ecc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9cc6066b5881092a26c8fa70472b08b1a00d76c6 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/3/testReport/ |
   | Max. process+thread count | 4025 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568558190


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -376,32 +439,48 @@ private int readLastBlock(final byte[] b, final int off, 
final int len)
 // data need to be copied to user buffer from index bCursor,
// AbfsInputStream buffer is going to contain data from last block start. In
 // that case bCursor will be set to fCursor - lastBlockStart
-long lastBlockStart = max(0, contentLength - footerReadSize);
+if (!fileStatusInformationPresent.get()) {
+  long lastBlockStart = max(0, (fCursor + len) - footerReadSize);
+  bCursor = (int) (fCursor - lastBlockStart);
+  return optimisedRead(b, off, len, lastBlockStart, min(fCursor + len, 
footerReadSize), true);
+}
+long lastBlockStart = max(0, getContentLength() - footerReadSize);
 bCursor = (int) (fCursor - lastBlockStart);
 // 0 if contentlength is < buffersize
-long actualLenToRead = min(footerReadSize, contentLength);
-return optimisedRead(b, off, len, lastBlockStart, actualLenToRead);
+long actualLenToRead = min(footerReadSize, getContentLength());
+return optimisedRead(b, off, len, lastBlockStart, actualLenToRead, false);
   }
 
   private int optimisedRead(final byte[] b, final int off, final int len,
-  final long readFrom, final long actualLen) throws IOException {
+  final long readFrom, final long actualLen,
+  final boolean isReadWithoutContentLengthInformation) throws IOException {
 fCursor = readFrom;
 int totalBytesRead = 0;
 int lastBytesRead = 0;
 try {
   buffer = new byte[bufferSize];
+  boolean fileStatusInformationPresentBeforeRead = 
fileStatusInformationPresent.get();
   for (int i = 0;
-   i < MAX_OPTIMIZED_READ_ATTEMPTS && fCursor < contentLength; i++) {
+   i < MAX_OPTIMIZED_READ_ATTEMPTS && 
(!fileStatusInformationPresent.get()
+   || fCursor < getContentLength()); i++) {

Review Comment:
   Content length would not be available for the first optimized read in the 
case of the lazy head optimization in the inputStream. In that case, the first 
optimized read is done without the contentLength constraint. After the first 
call, the contentLength is present and should be used for further reads.
   
   Have added this as a comment.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2060993230

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  41m 43s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javac  |  20m 26s | 
[/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 35 unchanged - 0 
fixed = 36 total (was 35)  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  javac  |  18m 18s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 44 unchanged - 0 
fixed = 45 total (was 44)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 38s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 17 unchanged - 0 fixed = 18 total (was 
17)  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 270m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   |

Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2060977805

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  36m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javac  |  16m 25s | 
[/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 35 unchanged - 0 
fixed = 36 total (was 35)  |
   | +1 :green_heart: |  compile  |  15m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  javac  |  15m 51s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 44 unchanged - 0 
fixed = 45 total (was 44)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 10s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/39/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 17 unchanged - 0 fixed = 19 total (was 
17)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 21s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 245m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   |

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838106#comment-17838106
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2060993230

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  41m 43s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javac  |  20m 26s | 
[/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 35 unchanged - 0 
fixed = 36 total (was 35)  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  javac  |  18m 18s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 44 unchanged - 0 
fixed = 45 total (was 44)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 38s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/38/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 17 unchanged - 0 fixed = 18 total (was 
17)  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 

Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-04-17 Thread via GitHub


haiyang1987 commented on code in PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#discussion_r1568748150


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
##
@@ -233,41 +235,63 @@ private ByteBufferStrategy[] 
getReadStrategies(StripingChunk chunk) {
 
   private int readToBuffer(BlockReader blockReader,
   DatanodeInfo currentNode, ByteBufferStrategy strategy,
-  ExtendedBlock currentBlock) throws IOException {
+  LocatedBlock currentBlock, int chunkIndex, long offsetInBlock)
+  throws IOException {
 final int targetLength = strategy.getTargetLength();
-int length = 0;
-try {
-  while (length < targetLength) {
-int ret = strategy.readFromBlock(blockReader);
-if (ret < 0) {
-  throw new IOException("Unexpected EOS from the reader");
+int curAttempts = 0;
+while (curAttempts < readDNMaxAttempts) {
+  curAttempts++;
+  int length = 0;
+  try {
+while (length < targetLength) {
+  int ret = strategy.readFromBlock(blockReader);
+  if (ret < 0) {
+throw new IOException("Unexpected EOS from the reader");
+  }
+  length += ret;
+}
+return length;
+  } catch (ChecksumException ce) {
+DFSClient.LOG.warn("Found Checksum error for "
++ currentBlock + " from " + currentNode
++ " at " + ce.getPos());
+//Clear buffer to make next decode success
+strategy.getReadBuffer().clear();
+// we want to remember which block replicas we have tried
+corruptedBlocks.addCorruptedBlock(currentBlock.getBlock(), 
currentNode);
+throw ce;
+  } catch (IOException e) {
+//Clear buffer to make next decode success
+strategy.getReadBuffer().clear();
+if (curAttempts < readDNMaxAttempts) {
+  if (readerInfos[chunkIndex].reader != null) {
+readerInfos[chunkIndex].reader.close();
+  }
+  if (dfsStripedInputStream.createBlockReader(currentBlock,
+  offsetInBlock, targetBlocks,
+  readerInfos, chunkIndex, readTo)) {
+blockReader = readerInfos[chunkIndex].reader;
+String msg = "Reconnect to " + currentNode.getInfoAddr()
++ " for block " + currentBlock.getBlock();
+DFSClient.LOG.warn(msg);
+continue;
+  }
 }
-length += ret;
+DFSClient.LOG.warn("Exception while reading from "

Review Comment:
   Can this also use the parameterized `warn("{}", arg)` format?
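
   For reference, a sketch of the parameterized form using the message that is 
fully visible above (standard SLF4J placeholder usage):

       // Placeholders are filled in lazily, so no string concatenation
       // happens when the WARN level is disabled.
       DFSClient.LOG.warn("Reconnect to {} for block {}",
           currentNode.getInfoAddr(), currentBlock.getBlock());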



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-04-17 Thread via GitHub


haiyang1987 commented on code in PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#discussion_r1568748476


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
##
@@ -233,41 +235,63 @@ private ByteBufferStrategy[] 
getReadStrategies(StripingChunk chunk) {
 
   private int readToBuffer(BlockReader blockReader,
   DatanodeInfo currentNode, ByteBufferStrategy strategy,
-  ExtendedBlock currentBlock) throws IOException {
+  LocatedBlock currentBlock, int chunkIndex, long offsetInBlock)
+  throws IOException {
 final int targetLength = strategy.getTargetLength();
-int length = 0;
-try {
-  while (length < targetLength) {
-int ret = strategy.readFromBlock(blockReader);
-if (ret < 0) {
-  throw new IOException("Unexpected EOS from the reader");
+int curAttempts = 0;
+while (curAttempts < readDNMaxAttempts) {
+  curAttempts++;
+  int length = 0;
+  try {
+while (length < targetLength) {
+  int ret = strategy.readFromBlock(blockReader);
+  if (ret < 0) {
+throw new IOException("Unexpected EOS from the reader");
+  }
+  length += ret;
+}
+return length;
+  } catch (ChecksumException ce) {
+DFSClient.LOG.warn("Found Checksum error for "

Review Comment:
   Same here: can we use the `warn("{}", arg)` format?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-04-17 Thread via GitHub


haiyang1987 commented on code in PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#discussion_r1568756254


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
##
@@ -233,41 +235,63 @@ private ByteBufferStrategy[] 
getReadStrategies(StripingChunk chunk) {
 
   private int readToBuffer(BlockReader blockReader,
   DatanodeInfo currentNode, ByteBufferStrategy strategy,
-  ExtendedBlock currentBlock) throws IOException {
+  LocatedBlock currentBlock, int chunkIndex, long offsetInBlock)
+  throws IOException {
 final int targetLength = strategy.getTargetLength();
-int length = 0;
-try {
-  while (length < targetLength) {
-int ret = strategy.readFromBlock(blockReader);
-if (ret < 0) {
-  throw new IOException("Unexpected EOS from the reader");
+int curAttempts = 0;
+while (curAttempts < readDNMaxAttempts) {

Review Comment:
   How about changing this to `while (true)`? Then lines 286~288 can be removed.
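
   To make the suggestion concrete, a minimal sketch of the `while (true)`
   shape (assumption: every path out of the loop is a `return` or a `throw`,
   so trailing fallback lines after the loop become dead; `doRead` and
   `reconnect` are placeholders, not the patch's methods):

   ```java
   import java.io.IOException;

   public class RetryLoopSketch {
     private static final int MAX_ATTEMPTS = 3; // stands in for readDNMaxAttempts

     int readWithRetry() throws IOException {
       int attempts = 0;
       while (true) {              // no loop condition needed
         attempts++;
         try {
           return doRead();        // success exits the loop directly
         } catch (IOException e) {
           if (attempts < MAX_ATTEMPTS && reconnect()) {
             continue;             // retry with a fresh reader
           }
           throw e;                // attempts exhausted; nothing runs after the loop
         }
       }
     }

     private int doRead() throws IOException { return 0; }   // placeholder
     private boolean reconnect() { return true; }            // placeholder
   }
   ```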



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-04-17 Thread via GitHub


haiyang1987 commented on code in PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#discussion_r1568745996


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
##
@@ -233,41 +235,63 @@ private ByteBufferStrategy[] 
getReadStrategies(StripingChunk chunk) {
 
   private int readToBuffer(BlockReader blockReader,
   DatanodeInfo currentNode, ByteBufferStrategy strategy,
-  ExtendedBlock currentBlock) throws IOException {
+  LocatedBlock currentBlock, int chunkIndex, long offsetInBlock)
+  throws IOException {
 final int targetLength = strategy.getTargetLength();
-int length = 0;
-try {
-  while (length < targetLength) {
-int ret = strategy.readFromBlock(blockReader);
-if (ret < 0) {
-  throw new IOException("Unexpected EOS from the reader");
+int curAttempts = 0;
+while (curAttempts < readDNMaxAttempts) {
+  curAttempts++;
+  int length = 0;
+  try {
+while (length < targetLength) {
+  int ret = strategy.readFromBlock(blockReader);
+  if (ret < 0) {
+throw new IOException("Unexpected EOS from the reader");
+  }
+  length += ret;
+}
+return length;
+  } catch (ChecksumException ce) {
+DFSClient.LOG.warn("Found Checksum error for "
++ currentBlock + " from " + currentNode
++ " at " + ce.getPos());
+//Clear buffer to make next decode success
+strategy.getReadBuffer().clear();
+// we want to remember which block replicas we have tried
+corruptedBlocks.addCorruptedBlock(currentBlock.getBlock(), 
currentNode);
+throw ce;
+  } catch (IOException e) {
+//Clear buffer to make next decode success
+strategy.getReadBuffer().clear();
+if (curAttempts < readDNMaxAttempts) {
+  if (readerInfos[chunkIndex].reader != null) {
+readerInfos[chunkIndex].reader.close();
+  }
+  if (dfsStripedInputStream.createBlockReader(currentBlock,
+  offsetInBlock, targetBlocks,
+  readerInfos, chunkIndex, readTo)) {
+blockReader = readerInfos[chunkIndex].reader;
+String msg = "Reconnect to " + currentNode.getInfoAddr()
++ " for block " + currentBlock.getBlock();
+DFSClient.LOG.warn(msg);

Review Comment:
   Can use the 
   ```
   DFSClient.LOG.warn("Reconnect to {} for block {}", currentNode.getInfoAddr(),
   currentBlock.getBlock());
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568789037


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -306,7 +366,7 @@ private int readOneBlock(final byte[] b, final int off, 
final int len) throws IO
 //If buffer is empty, then fill the buffer.
 if (bCursor == limit) {
   //If EOF, then return -1
-  if (fCursor >= contentLength) {
+  if (fileStatusInformationPresent.get() && fCursor >= getContentLength()) 
{

Review Comment:
   Could you please elaborate on the question? This would be the case in
the first sequential read.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838164#comment-17838164
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568789037


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -306,7 +366,7 @@ private int readOneBlock(final byte[] b, final int off, 
final int len) throws IO
 //If buffer is empty, then fill the buffer.
 if (bCursor == limit) {
   //If EOF, then return -1
-  if (fCursor >= contentLength) {
+  if (fileStatusInformationPresent.get() && fCursor >= getContentLength()) 
{

Review Comment:
   Could you please elaborate on the question? This would be the case in
the first sequential read.





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> used in future calls on that inputStream. Prior information of eTag is of not 
> much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838165#comment-17838165
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568792795


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -447,8 +537,7 @@ private boolean validate(final byte[] b, final int off, 
final int len)
 Preconditions.checkNotNull(b);
 LOG.debug("read one block requested b.length = {} off {} len {}", b.length,
 off, len);
-
-if (this.available() == 0) {
+if (fileStatusInformationPresent.get() && this.available() == 0) {

Review Comment:
   `available()` is a public API and can be called from other places as well.
The check here is for the fileStatus creation heuristic in `available()`.





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> used in future calls on that inputStream. Prior information of eTag is of not 
> much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838168#comment-17838168
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568796124


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -564,11 +669,25 @@ int readRemote(long position, byte[] b, int offset, int 
length, TracingContext t
 } catch (AzureBlobFileSystemException ex) {
   if (ex instanceof AbfsRestOperationException) {
 AbfsRestOperationException ere = (AbfsRestOperationException) ex;
+abfsHttpOperation = ((AbfsRestOperationException) 
ex).getAbfsHttpOperation();
 if (ere.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {
   throw new FileNotFoundException(ere.getMessage());
 }
+/*
+ * Status 416 is sent when read range is out of contentLength range.
+ * This would happen only in the case if contentLength is not known 
before
+ * opening the inputStream.
+ */
+if (ere.getStatusCode() == READ_PATH_REQUEST_NOT_SATISFIABLE
+&& !fileStatusInformationPresent.get()) {
+  return -1;
+}
   }
   throw new IOException(ex);
+} finally {
+  if (!fileStatusInformationPresent.get() && abfsHttpOperation != null) {
+initPathPropertiesFromReadPathResponseHeader(abfsHttpOperation);

Review Comment:
   Sure, have refactored to `initPropertiesFromReadResponseHeader`.
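
   For orientation, the kind of information a ranged read response can supply
   when GetPathStatus is skipped: the total file length is the part after the
   slash in the standard Content-Range header ("bytes <first>-<last>/<total>").
   A hedged sketch of that parsing only, not the ABFS implementation (method
   and class names are assumptions):

   ```java
   public final class ContentRangeSketch {
     /** e.g. "bytes 0-4193279/42949672960" -> 42949672960 ("*" totals not handled). */
     static long totalLengthFromContentRange(String contentRange) {
       int slash = contentRange.lastIndexOf('/');
       if (slash < 0 || slash == contentRange.length() - 1) {
         throw new IllegalArgumentException("Malformed Content-Range: " + contentRange);
       }
       return Long.parseLong(contentRange.substring(slash + 1).trim());
     }
   }
   ```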





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> used in future calls on that inputStream. Prior information of eTag is of not 
> much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568773530


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -192,6 +210,30 @@ public int read(long position, byte[] buffer, int offset, 
int length)
 throw new IOException(FSExceptionMessages.STREAM_IS_CLOSED);
   }
 }
+
+/*
+ * When the inputStream is started, if the application tries to read in
+ * parallel on the inputStream, the first read will be synchronized and
+ * the subsequent reads will be non-synchronized.
+ */
+if (!successfulUsage) {

Review Comment:
   Have renamed it to `sequentialReadStarted`. Have removed the blocking in
positioned read, since it doesn't lead to prefetching.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838161#comment-17838161
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568773530


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -192,6 +210,30 @@ public int read(long position, byte[] buffer, int offset, 
int length)
 throw new IOException(FSExceptionMessages.STREAM_IS_CLOSED);
   }
 }
+
+/*
+ * When the inputStream is started, if the application tries to read in
+ * parallel on the inputStream, the first read will be synchronized and
+ * the subsequent reads will be non-synchronized.
+ */
+if (!successfulUsage) {

Review Comment:
   Have renamed it to `sequentialReadStarted`. Have removed the blocking in
positioned read, since it doesn't lead to prefetching.





> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> used in future calls on that inputStream. Prior information of eTag is of not 
> much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568792795


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -447,8 +537,7 @@ private boolean validate(final byte[] b, final int off, 
final int len)
 Preconditions.checkNotNull(b);
 LOG.debug("read one block requested b.length = {} off {} len {}", b.length,
 off, len);
-
-if (this.available() == 0) {
+if (fileStatusInformationPresent.get() && this.available() == 0) {

Review Comment:
   `available()` is a public API and can be called from other places as well.
The check here is for the fileStatus creation heuristic in `available()`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11672. Create a CgroupHandler implementation for cgroup v2 [hadoop]

2024-04-17 Thread via GitHub


tomicooler commented on code in PR #6734:
URL: https://github.com/apache/hadoop/pull/6734#discussion_r1568394772


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsV2HandlerImpl.java:
##
@@ -0,0 +1,165 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements. See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership. The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License. You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ * /
+ */
+
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.PrintWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * Support for interacting with various CGroup v2 subsystems. Thread-safe.
+ */
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+class CGroupsV2HandlerImpl extends AbstractCGroupsHandler {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(CGroupsV2HandlerImpl.class);
+
+  private static final String CGROUP2_FSTYPE = "cgroup2";
+
+  /**
+   * Create cgroup v2 handler object.
+   * @param conf configuration
+   * @param privilegedOperationExecutor provides mechanisms to execute
+   *PrivilegedContainerOperations
+   * @param mtab mount file location
+   * @throws ResourceHandlerException if initialization failed
+   */
+  CGroupsV2HandlerImpl(Configuration conf, PrivilegedOperationExecutor
+  privilegedOperationExecutor, String mtab)
+  throws ResourceHandlerException {
+super(conf, privilegedOperationExecutor, mtab);
+  }
+
+  /**
+   * Create cgroup v2 handler object.
+   * @param conf configuration
+   * @param privilegedOperationExecutor provides mechanisms to execute
+   *PrivilegedContainerOperations
+   * @throws ResourceHandlerException if initialization failed
+   */
+  CGroupsV2HandlerImpl(Configuration conf, PrivilegedOperationExecutor
+  privilegedOperationExecutor) throws ResourceHandlerException {
+this(conf, privilegedOperationExecutor, MTAB_FILE);
+  }
+
+  @Override
+  protected Map> parsePreConfiguredMountPath() throws 
IOException {
+Map> controllerMappings = new HashMap<>();
+String controllerPath = this.cGroupsMountConfig.getMountPath() + 
Path.SEPARATOR + this.cGroupPrefix;
+controllerMappings.put(this.cGroupsMountConfig.getMountPath(), 
parseControllersFile(controllerPath));
+return controllerMappings;
+  }
+
+  @Override
+  protected Set handleMtabEntry(String path, String type, String 
options) throws IOException {
+if (type.equals(CGROUP2_FSTYPE)) {
+  return parseControllersFile(path);
+}
+
+return null;
+  }
+
+  @Override
+  protected void mountCGroupController(CGroupController controller) {
+throw new UnsupportedOperationException("Mounting cgroup controllers is 
not supported in cgroup v2");
+  }
+
+  /**
+   * Parse the cgroup v2 controllers file to check the enabled controllers.
+   * @param cgroupPath path to the cgroup directory
+   * @return set of enabled and YARN supported controllers.
+   * @throws IOException if the file is not found or cannot be read
+   */
+  public Set parseControllersFile(String cgroupPath) throws 
IOException {
+File cgroupControllersFile = new File(cgroupPath + Path.SEPARATOR + 
CGROUP_CONTROLLERS_FILE);
+if (!cgroupControllersFile.exists()) {
+  throw new IOException("No cgroup controllers file found in the directory 
specified: " +
+  cgroupPath);
+}
+
+String enabledControllers = 
FileUtils.readFileToString(cgroupControllersFile, S
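
The message above is cut off by the archive. For orientation, a minimal,
hedged sketch of the general technique the quoted method implements (reading
the space-separated `cgroup.controllers` file of cgroup v2), written
independently of the patch; the names and file-layout assumption are mine,
not the PR's code:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public final class CgroupControllersSketch {
  /**
   * Reads e.g. /sys/fs/cgroup/cgroup.controllers, whose single line lists
   * the enabled controllers separated by spaces ("cpuset cpu io memory pids").
   */
  static Set<String> readEnabledControllers(String cgroupPath) throws IOException {
    Path file = Paths.get(cgroupPath, "cgroup.controllers");
    String content = new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim();
    return new HashSet<>(Arrays.asList(content.split("\\s+")));
  }
}
```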

Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1568796124


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -564,11 +669,25 @@ int readRemote(long position, byte[] b, int offset, int 
length, TracingContext t
 } catch (AzureBlobFileSystemException ex) {
   if (ex instanceof AbfsRestOperationException) {
 AbfsRestOperationException ere = (AbfsRestOperationException) ex;
+abfsHttpOperation = ((AbfsRestOperationException) 
ex).getAbfsHttpOperation();
 if (ere.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {
   throw new FileNotFoundException(ere.getMessage());
 }
+/*
+ * Status 416 is sent when read range is out of contentLength range.
+ * This would happen only in the case if contentLength is not known 
before
+ * opening the inputStream.
+ */
+if (ere.getStatusCode() == READ_PATH_REQUEST_NOT_SATISFIABLE
+&& !fileStatusInformationPresent.get()) {
+  return -1;
+}
   }
   throw new IOException(ex);
+} finally {
+  if (!fileStatusInformationPresent.get() && abfsHttpOperation != null) {
+initPathPropertiesFromReadPathResponseHeader(abfsHttpOperation);

Review Comment:
   Sure, have refactored to `initPropertiesFromReadResponseHeader`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061226157

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 54s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6678 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 64f6ae2612bc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15cc0ec1692ed71055b75b5621216d55178d6aaa |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838173#comment-17838173
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

hadoop-yetus commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061226157

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 54s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6678 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 64f6ae2612bc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15cc0ec1692ed71055b75b5621216d55178d6aaa |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> FTPFileSystem rename with full qualified path broken

[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838186#comment-17838186
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

zj619 commented on code in PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#discussion_r1568840475


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java:
##
@@ -60,6 +62,9 @@ public class TestFTPFileSystem {
   @Rule
   public Timeout testTimeout = new Timeout(18, TimeUnit.MILLISECONDS);
 
+  @Rule
+  public TestName name = new TestName();
+

Review Comment:
   OK, I changed it to "renamefile".
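
   For context on the rule added in the quoted hunk, a minimal JUnit 4 sketch
   of what `TestName` provides (the test class and body are illustrative):

   ```java
   import static org.junit.Assert.assertEquals;

   import org.junit.Rule;
   import org.junit.Test;
   import org.junit.rules.TestName;

   public class TestNameRuleExample {
     @Rule
     public TestName name = new TestName();

     @Test
     public void renameFile() {
       // The rule exposes the name of the currently running test method.
       assertEquals("renameFile", name.getMethodName());
     }
   }
   ```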





> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a fully 
> qualified path (e.g. [ftp://user:password@localhost/pathxxx]), it always fails 
> with "Input/output error". The reason is that the underlying 
> changeWorkingDirectory command is being passed a string with a 
> [file://|file:///] URI prefix, which the FTP server does not understand
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> in our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files")
> is executed, the working directory of the FTP server is still "/", which is 
> incorrect (the path is not understood by the FTP server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> the solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the 
> [file://|file:///] URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> already related issue  as follows 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I create this issue and add related unit test.
>  
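
For illustration of why `toUri().getPath()` fixes this, a small sketch using
Hadoop's `Path` with a hypothetical input; the values in the comments are what
these calls return for that input:

```java
import org.apache.hadoop.fs.Path;

public class FtpRenamePathSketch {
  public static void main(String[] args) {
    Path src = new Path("ftp://user:secret@host/files/a.txt");
    Path parent = src.getParent();

    // What the old code handed to changeWorkingDirectory():
    System.out.println(parent.toUri().toString()); // ftp://user:secret@host/files

    // The path component the FTP server actually understands:
    System.out.println(parent.toUri().getPath());  // /files
  }
}
```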
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


zj619 commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061249198

   > Change "testName" maybe with the actual test name or something more 
indicative like "renamedir" or so. rest changes LGTM
   
   
   
   > Change "testName" maybe with the actual test name or something more 
indicative like "renamedir" or so. rest changes LGTM
   
   OK,I change to "renamefile".


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838184#comment-17838184
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

zj619 commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061249198

   > Change "testName" maybe with the actual test name or something more 
indicative like "renamedir" or so. rest changes LGTM
   
   
   
   > Change "testName" maybe with the actual test name or something more 
indicative like "renamedir" or so. rest changes LGTM
   
   OK,I change to "renamefile".




> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a fully 
> qualified path (e.g. [ftp://user:password@localhost/pathxxx]), it always fails 
> with "Input/output error". The reason is that the underlying 
> changeWorkingDirectory command is being passed a string with a 
> [file://|file:///] URI prefix, which the FTP server does not understand
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> in our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files")
> is executed, the working directory of the FTP server is still "/", which is 
> incorrect (the path is not understood by the FTP server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> the solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the 
> [file://|file:///] URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> already related issue  as follows 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I create this issue and add related unit test.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


zj619 commented on code in PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#discussion_r1568840475


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java:
##
@@ -60,6 +62,9 @@ public class TestFTPFileSystem {
   @Rule
   public Timeout testTimeout = new Timeout(18, TimeUnit.MILLISECONDS);
 
+  @Rule
+  public TestName name = new TestName();
+

Review Comment:
   OK, I changed it to "renamefile".



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061266963

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 30s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  38m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 4 unchanged - 3 fixed = 
4 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6746/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6746 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle markdownlint |
   | uname | Linux 8b3f67a59de7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / d0655327ee26013f444f9dfb47622f0259f83e8f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6746/1/testReport/ |
   | Max. process+thread count | 534 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6746/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838191#comment-17838191
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

hadoop-yetus commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061266963

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 30s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  38m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 4 unchanged - 3 fixed = 
4 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6746/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6746 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle markdownlint |
   | uname | Linux 8b3f67a59de7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / d0655327ee26013f444f9dfb47622f0259f83e8f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6746/1/testReport/ |
   | Max. process+thread count | 534 (vs. ulimit of 5500) |

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838228#comment-17838228
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2061433664

Re: [PR] HADOOP-19139. No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2061433664

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  15m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 27s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  15m 41s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/40/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 16s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 49s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 240m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/40/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux f686fbaf3ecf 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02b342119867d755306064052c12814ed0974815 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/40/te

Re: [PR] YARN-11672. Create a CgroupHandler implementation for cgroup v2 [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6734:
URL: https://github.com/apache/hadoop/pull/6734#issuecomment-2061484488

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 28s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6734/6/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 55 new + 1 unchanged - 2 fixed = 56 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  24m 20s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 170m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6734/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6734 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 479327a92f79 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4cb588607af3575977d5b0fb27fe67cf5b02ea1f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6734/6/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console outp

[jira] [Commented] (HADOOP-19084) prune dependency exports of hadoop-* modules

2024-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838241#comment-17838241
 ] 

Steve Loughran commented on HADOOP-19084:
-

logback is still being exported by hadoop-common via zk. 
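
A quick way for a downstream build to confirm the leak is a classpath probe
along these lines (a sketch only; the class below is illustrative and not part
of the patch):

{code}
import java.security.CodeSource;

// Downstream classpath probe: if logback leaked through hadoop-common's
// transitive dependencies, these classes resolve and the jar they came
// from is printed.
public class LogbackLeakProbe {
  public static void main(String[] args) {
    String[] probes = {
        "ch.qos.logback.classic.LoggerContext",
        "ch.qos.logback.core.Appender"
    };
    for (String name : probes) {
      try {
        Class<?> clazz = Class.forName(name);
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        System.out.println(name + " leaked from "
            + (src == null ? "<unknown source>" : src.getLocation()));
      } catch (ClassNotFoundException expected) {
        System.out.println(name + " not on the classpath (good)");
      }
    }
  }
}
{code}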

> prune dependency exports of hadoop-* modules
> 
>
> Key: HADOOP-19084
> URL: https://issues.apache.org/jira/browse/HADOOP-19084
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0, 3.4.1
>
>
> this is probably caused by HADOOP-18613:
> ZK is pulling in some extra transitive stuff which surfaces in applications 
> which import hadoop-common into their poms. It doesn't seem to show up in our 
> distro, but downstream you get warnings about duplicate logging stuff
> {code}
> |  +- org.apache.zookeeper:zookeeper:jar:3.8.3:compile
> |  |  +- org.apache.zookeeper:zookeeper-jute:jar:3.8.3:compile
> |  |  |  \- (org.apache.yetus:audience-annotations:jar:0.12.0:compile - 
> omitted for duplicate)
> |  |  +- org.apache.yetus:audience-annotations:jar:0.12.0:compile
> |  |  +- (io.netty:netty-handler:jar:4.1.94.Final:compile - omitted for 
> conflict with 4.1.100.Final)
> |  |  +- (io.netty:netty-transport-native-epoll:jar:4.1.94.Final:compile - 
> omitted for conflict with 4.1.100.Final)
> |  |  +- (org.slf4j:slf4j-api:jar:1.7.30:compile - omitted for duplicate)
> |  |  +- ch.qos.logback:logback-core:jar:1.2.10:compile
> |  |  +- ch.qos.logback:logback-classic:jar:1.2.10:compile
> |  |  |  +- (ch.qos.logback:logback-core:jar:1.2.10:compile - omitted for 
> duplicate)
> |  |  |  \- (org.slf4j:slf4j-api:jar:1.7.32:compile - omitted for conflict 
> with 1.7.30)
> |  |  \- (commons-io:commons-io:jar:2.11.0:compile - omitted for conflict 
> with 2.14.0)
> {code}
> proposed: exclude the zk dependencies we either override ourselves or don't 
> need. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19153) hadoop-common still exports logback as a transitive dependency

2024-04-17 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19153:
---

 Summary: hadoop-common still exports logback as a transitive 
dependency
 Key: HADOOP-19153
 URL: https://issues.apache.org/jira/browse/HADOOP-19153
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, common
Affects Versions: 3.4.0
Reporter: Steve Loughran


Even though HADOOP-19084 set out to stop it, somehow ZK's declaration of a 
logback dependency is still contaminating the hadoop-common dependency graph, 
causing problems downstream.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


anujmodi2021 commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061506141

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 620, Failures: 0, Errors: 0, Skipped: 73
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 54
   
   
   HNS-SharedKey
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [WARNING] Tests run: 620, Failures: 0, Errors: 0, Skipped: 28
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 41
   
   
   NonHNS-SharedKey
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 604, Failures: 0, Errors: 0, Skipped: 269
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 44
   
   
   AppendBlob-HNS-OAuth
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 620, Failures: 0, Errors: 0, Skipped: 75
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 78
   
   Time taken: 61 mins 50 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


anujmodi2021 commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061507050

   Hi @steveloughran, Please merge this backport PR to branch-3.4


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838245#comment-17838245
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061506141


[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838246#comment-17838246
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061507050

   Hi @steveloughran, Please merge this backport PR to branch-3.4




> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> The test script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures, or when the error message of a 
> failing test becomes very large, the regex used today to filter test results 
> does not work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that only targets 
> single-line test names when reporting them into the aggregated test results.
>  # While running the test suite for different combinations of auth type and 
> account type, we add the combination-specific configs first and then include 
> the account-specific configs in the core-site.xml file. This overrides the 
> combination-specific configs, such as auth type, if the same config is 
> present in the account-specific config file. To avoid this, we will first 
> include the account-specific configs and then add the combination-specific 
> configs.
> Due to the above bug in the test script, some test failures in ABFS were not 
> getting our attention. This PR also targets to resolve them. The following 
> tests are fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In the append blob case we 
> were not closing the active block on outputStream.close(), due to which 
> block.close() was not getting called and the assertions around it were 
> failing. Fixed by updating the production code to close the active block on 
> flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in the account settings via 
> the config "fs.contract.test.fs.abfs". Tests were failing with an NPE 
> when this config was not present. Updated the code to skip these tests if the 
> required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently, and only for HNS-enabled accounts. The test wants to 
> assert that client.listPath() does not return more objects than what is 
> configured in maxListResults. The assertion should be that the number of 
> objects returned can be less than expected, as the server might end up 
> returning even fewer due to partition splits along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fails 
> when the "fs.azure.test.namespace.enabled" config is missing. Ignore the test 
> if the config is missing.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): 
> Fails when the "fs.azure.test.namespace.enabled" config is missing. Ignore 
> the test if the config is missing.
>  # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fails when the 
> "fs.azure.test.namespace.enabled" config is missing. Ignore the test if the 
> config is missing.
>  # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
> "fs.azure.test.appendblob.enabled" is set to true. The test wanted to assert 
> that the number of read operations can be higher for append blobs than for 
> normal blobs because of the automatic flush. It can also be the same as for 
> normal blobs.
>  # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
> Fails for FNS accounts, and only when the config 
> "fs.azure.account.hns.enabled" is present. The failure is because the test 
> wants to assert that when the driver does not know whether the account is 
> HNS enabled it makes a server call and fails. But the above config lets the 
> driver know the account type, skipping the head call. Remove these configs 
> from the test-specific configurations and not from the account settings file.
>  # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS accounts. The 
> failure is because of an identity mismatch. OAuth uses the service principal 
> OID as the owner of the resources, whereas Shared Key uses local system 
> identities. The fix is to set configs that will allow overwrite of OID to localiden
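
The "skip if the config is missing" fixes above usually come down to a JUnit
assumption check; a minimal sketch, assuming JUnit 4's Assume and a config key
taken from the description (the helper class is illustrative, not the actual
patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;

// Sketch of the "ignore the test if the config is missing" fix described
// above: skip (rather than NPE) when the account settings do not define
// the contract-test filesystem.
public class ConfigGatedTestSupport {
  static void assumeContractTestFsConfigured(Configuration conf) {
    String fs = conf.get("fs.contract.test.fs.abfs");
    // Assume.assumeNotNull marks the test as skipped when fs is null,
    // instead of letting it fail with a NullPointerException.
    Assume.assumeNotNull(fs);
  }
}
{code}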

Re: [PR] HDFS-17454. Fix namenode fsck swallowing the exception stacktrace; this can help us troubleshoot from the logs. [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2061544322

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 67 unchanged - 
0 fixed = 69 total (was 67)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 264m 20s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 48s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/8/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 422m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c4383051ab8b 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bf31b0654b294a3987fdfa7f014dcf7d8bde15ea |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/8/testReport/ |
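
For readers skimming the thread, the idiom behind the subject line is passing
the caught exception itself to the logger rather than only its message; a
minimal sketch (names illustrative, not the actual patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FsckLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(FsckLoggingExample.class);

  void check(String path) {
    try {
      // ... fsck work on the given path ...
    } catch (Exception e) {
      // Swallows the stacktrace: LOG.warn("Fsck failed for " + path + ": " + e);
      // Keeps the stacktrace: SLF4J treats a trailing Throwable specially.
      LOG.warn("Fsck failed for {}", path, e);
    }
  }
}
{code}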

Re: [PR] HDFS-17367. Add PercentUsed for Different StorageTypes in JMX [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6735:
URL: https://github.com/apache/hadoop/pull/6735#issuecomment-2061571160

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   6m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 288m 44s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 559m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6735 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 58e6c00c723a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 71c5c8543faf6f7acac12e23f9d9b1d86c8d44aa |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/4/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6735/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.o

Re: [PR] HDFS-15413. Add dfs.client.read.striped.datanode.max.attempts to fix EC file read timeouts [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-2061596554

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   2m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 32s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   2m 44s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/8/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 34s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/8/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 45 unchanged - 0 fixed = 
46 total (was 45)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 54s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 209m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 318m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithTimeout |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 574d2feade6c 5.15.0-94-generic #1
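
The subject line names a client-side config key; a hedged sketch of setting
it (the value 3 is an arbitrary example, not a documented default):

{code}
import org.apache.hadoop.conf.Configuration;

public class EcReadRetryConfigExample {
  public static void main(String[] args) {
    // The key comes from the PR title; a DFS client built from this conf
    // would retry a striped (EC) read against a datanode up to the
    // configured number of attempts instead of failing on the first timeout.
    Configuration conf = new Configuration();
    conf.setInt("dfs.client.read.striped.datanode.max.attempts", 3);
    System.out.println(
        conf.get("dfs.client.read.striped.datanode.max.attempts"));
  }
}
{code}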

Re: [PR] HADOOP-18656: [Backport to 3.4] [ABFS] Adding Support for Paginated Delete for Large Directories in HNS Account [hadoop]

2024-04-17 Thread via GitHub


anujmodi2021 commented on PR #6718:
URL: https://github.com/apache/hadoop/pull/6718#issuecomment-2061597117

   @steveloughran, @mukund-thakur...
   Requesting you to please get this merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18656) ABFS: Support for Pagination in Recursive Directory Delete

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838262#comment-17838262
 ] 

ASF GitHub Bot commented on HADOOP-18656:
-

anujmodi2021 commented on PR #6718:
URL: https://github.com/apache/hadoop/pull/6718#issuecomment-2061597117

   @steveloughran, @mukund-thakur...
   Requesting you to please get this merged.




> ABFS: Support for Pagination in Recursive Directory Delete 
> ---
>
> Key: HADOOP-18656
> URL: https://issues.apache.org/jira/browse/HADOOP-18656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.5
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Today, when a recursive delete is issued for a large directory in an ADLS 
> Gen2 (HNS) account, the directory deletion happens in O(1), but in the 
> backend ACL checks are done recursively for each object inside that 
> directory, which for a large directory could lead to a request timeout. 
> Pagination has been introduced in the Azure Storage backend for these ACL 
> checks.
> More information on how pagination works can be found in the public 
> documentation of the [Azure Delete Path 
> API|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/delete?view=rest-storageservices-datalakestoragegen2-2019-12-12].
> This PR contains changes to support this from the client side. To trigger 
> pagination, the client needs to add a new query parameter "paginated" and 
> set it to true, along with recursive set to true. In return, if the 
> directory is large, the server might return a continuation token back to the 
> caller. If the caller gets back a continuation token, it has to call the 
> delete API again with the continuation token, along with recursive and 
> paginated set to true. This is similar to the directory delete of an FNS 
> account.
> Pagination is available only in versions "2023-08-03" onwards.
> The PR also contains functional tests to verify the driver works well with 
> different combinations of the recursive and pagination features for HNS.
> Full E2E testing of pagination requires a large dataset to be created and is 
> hence not added as part of the driver test suite. But extensive E2E testing 
> has been performed.
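
The continuation-token protocol described above has roughly this client-side
shape; the types below are stand-ins for illustration, not the real ABFS
driver API:

{code}
// One Delete Path call: returns a continuation token, or null when the
// server has finished deleting the directory.
interface PaginatedDeleteClient {
  String deletePath(String path, boolean recursive, boolean paginated,
      String continuation);
}

class PaginatedDeleteExample {
  static void deleteLargeDirectory(PaginatedDeleteClient client, String path) {
    String token = null;
    do {
      // Re-issue the delete with recursive=true and paginated=true,
      // carrying back whatever token the previous call returned.
      token = client.deletePath(path, /*recursive=*/true, /*paginated=*/true,
          token);
    } while (token != null && !token.isEmpty());
  }
}
{code}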



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11622. Fix ResourceManager asynchronous switch from Standby to Active exception [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6352:
URL: https://github.com/apache/hadoop/pull/6352#issuecomment-2061626619

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 66 unchanged - 1 fixed = 66 total (was 67)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  78m 50s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/13/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 166m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6352 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 809d9c31990d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / dd7de21ff1b1db9f214b2c14379cd6f8aab70b41 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/13/testReport/ |
   | Max. process+thread count | 962 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/13/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19152. Do not hard code security providers. [hadoop]

2024-04-17 Thread via GitHub


szetszwo commented on code in PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#discussion_r1569133307


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java:
##
@@ -35,7 +34,6 @@
 import java.util.Map;
 import java.util.Objects;
 
-import org.bouncycastle.jce.provider.BouncyCastleProvider;
 import com.google.gson.stream.JsonReader;

Review Comment:
   Not sure about gson.  If there is a need, let's fix it separately.
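
For context, "not hard coding" a provider is typically done by resolving it
reflectively and registering it only when it is present on the classpath; a
hedged sketch (illustrative, not the actual change in this PR):

{code}
import java.security.Provider;
import java.security.Security;

public final class OptionalBouncyCastle {
  public static void registerIfAvailable() {
    if (Security.getProvider("BC") != null) {
      return; // already registered
    }
    try {
      // No compile-time import of BouncyCastleProvider is needed.
      Class<?> clazz =
          Class.forName("org.bouncycastle.jce.provider.BouncyCastleProvider");
      Security.addProvider(
          (Provider) clazz.getDeclaredConstructor().newInstance());
    } catch (ReflectiveOperationException e) {
      // BouncyCastle not on the classpath; fall back to the JDK providers.
    }
  }
}
{code}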



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061786395

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6678 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 82354d5b13c9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4767ee8b2b768be4a08b6c97de565e3f32e1a6dc |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, pleas
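
For context on the subject line, the failing case is a rename whose source and
destination carry the fully qualified ftp:// URI; a sketch under made-up
host and paths (not from the patch):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FtpRenameExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs =
        FileSystem.get(URI.create("ftp://user@ftp.example.com/"), conf);
    // Fully qualified src/dst, the form this PR addresses.
    Path src = new Path("ftp://user@ftp.example.com/tmp/a.txt");
    Path dst = new Path("ftp://user@ftp.example.com/tmp/b.txt");
    System.out.println("renamed: " + fs.rename(src, dst));
  }
}
{code}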

[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838300#comment-17838300
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

hadoop-yetus commented on PR #6678:
URL: https://github.com/apache/hadoop/pull/6678#issuecomment-2061786395

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6678 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 82354d5b13c9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4767ee8b2b768be4a08b6c97de565e3f32e1a6dc |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6678/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> FTPFileSystem rename with full qualified path broken

[PR] HDFS-17476. fix: False positive "Observer Node is too far behind" due to long overflow. [hadoop]

2024-04-17 Thread via GitHub


KeeProMise opened a new pull request, #6747:
URL: https://github.com/apache/hadoop/pull/6747

   
   
   
   ### Description of PR
   
   See also: https://issues.apache.org/jira/browse/HDFS-17476
   In GlobalStateIdContext#receiveRequestState(), if clientStateId is 
a small negative number, clientStateId - serverStateId may overflow and become 
greater than 
   
   (ESTIMATED_TRANSACTIONS_PER_SECOND
 * TimeUnit.MILLISECONDS.toSeconds(clientWaitTime)
 * ESTIMATED_SERVER_TIME_MULTIPLIER),
   
   resulting in false positives that the Observer Node is too far behind.
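   
   As an illustration of the arithmetic (a self-contained sketch, not the patch 
itself; the constant values below are made up, only the wrap-around behaviour 
matters):
   
   ```java
   public class StateIdOverflowDemo {
     static final long ESTIMATED_TRANSACTIONS_PER_SECOND = 10_000L;
     static final long ESTIMATED_SERVER_TIME_MULTIPLIER = 3L;
   
     public static void main(String[] args) {
       long serverStateId = 5_000_000L;
       long clientStateId = Long.MIN_VALUE + 42;  // a "small negative number"
       long clientWaitTimeSeconds = 1L;
   
       long threshold = ESTIMATED_TRANSACTIONS_PER_SECOND
           * clientWaitTimeSeconds * ESTIMATED_SERVER_TIME_MULTIPLIER;
   
       // The subtraction underflows and wraps around to a huge positive value,
       // so the check concludes the observer is too far behind even though
       // clientStateId is actually far *smaller* than serverStateId.
       System.out.println(clientStateId - serverStateId > threshold);  // true
   
       // One overflow-safe formulation (not necessarily the actual fix):
       boolean tooFarBehind = clientStateId > serverStateId
           && clientStateId - serverStateId > threshold;
       System.out.println(tooFarBehind);  // false
     }
   }
   ```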
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19152. Do not hard code security providers. [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2061803874

   > hadoop-azure seems okay? The tests did not fail.
   
   we'd have to see about the integration tests. Actually, I think it was 
minihdfs which needed it  for tests, rather than the actual code


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838307#comment-17838307
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

steveloughran commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2061803874

   > hadoop-azure seems okay? The tests did not fail.
   
   we'd have to see about the integration tests. Actually, I think it was 
minihdfs which needed it  for tests, rather than the actual code




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17475. Add verifyReadable command to check if files are readable [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6745:
URL: https://github.com/apache/hadoop/pull/6745#issuecomment-2061806363

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6745/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 13 unchanged - 
0 fixed = 20 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   3m 44s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6745/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  39m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 285m 17s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6745/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 442m 41s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.tools.DebugAdmin$VerifyReadableCommand.handleArgs(String,
 String, String, String):in 
org.apache.hadoop.hdfs.tools.DebugAdmin$VerifyReadableCommand.handleArgs(String,
 String, String, String): new java.io.InputStreamReader(InputStream)  At 
DebugAdmin.java:[line 735] |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.tools.DebugAdmin$VerifyReadableCommand.handleArgs(String,
 String, String, String):in 
org.apache.hadoop.hdfs.tools.DebugAdmin$VerifyReadableCommand.handleArgs(String,
 String, String, String): new java.io.OutputStreamWriter(OutputStream)  At 
DebugAdmin.java:[line 719] |
   |  |  
org.apache.hadoop.hdfs.tools.DebugAdmin$VerifyReadableCommand.handleArgs(String,
 String, String, String) may fail to close stream  At DebugAdmin.java:fail to 
close stream  At DebugAdmin.java:[line 734
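   
   For reference, the usual fix for the DM_DEFAULT_ENCODING findings above is 
to name a charset explicitly instead of relying on the platform default (a 
generic sketch, not the actual patch):
   
   ```java
   import java.io.BufferedReader;
   import java.io.InputStream;
   import java.io.InputStreamReader;
   import java.io.OutputStream;
   import java.io.OutputStreamWriter;
   import java.io.Writer;
   import java.nio.charset.StandardCharsets;
   
   final class CharsetSafeIo {
     // new InputStreamReader(in) / new OutputStreamWriter(out) pick up the
     // platform default encoding; passing StandardCharsets.UTF_8 makes the
     // behaviour deterministic and silences the SpotBugs warning.
     static BufferedReader reader(InputStream in) {
       return new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
     }
     static Writer writer(OutputStream out) {
       return new OutputStreamWriter(out, StandardCharsets.UTF_8);
     }
   }
   ```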

Re: [PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


steveloughran merged PR #6746:
URL: https://github.com/apache/hadoop/pull/6746


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838310#comment-17838310
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

steveloughran merged PR #6746:
URL: https://github.com/apache/hadoop/pull/6746




> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> The test script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures or when the error message of any failing 
> test becomes very large, the regex used today to filter test results does not 
> work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that will only target 
> one-line test names for reporting them in the aggregated test results.
>  # While running the test suite for different combinations of auth type and 
> account type, we add the combination-specific configs first and then include 
> the account-specific configs in the core-site.xml file. This will override 
> combination-specific configs like auth type if the same config is present in 
> the account-specific config file. To avoid this, we will first include the 
> account-specific configs and then add the combination-specific configs.
> Due to the above bug in the test script, some test failures in ABFS were not 
> getting our attention. This PR also targets resolving them. Following are the 
> tests fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In the case of append blobs 
> we were not closing the active block on outputstream.close(), due to which 
> block.close() was not getting called and assertions around it were failing. 
> Fixed by updating the production code to close the active block on flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in the account settings using 
> the following config: "fs.contract.test.fs.abfs". Tests were failing with NPE 
> when this config was not present. Updated code to skip this test if the 
> required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently, only for HNS enabled accounts. Test wants to assert 
> that client.listPath() does not return more objects than what is configured 
> in maxListResults. The assertion should be that the number of objects returned 
> could be less than expected, as the server might end up returning even fewer 
> due to partition splits along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fail 
> when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): 
> Fail when "fs.azure.test.namespace.enabled" config is missing. Ignore the 
> test if config is missing.
>  # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fail when 
> "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
> "fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
> the number of read operations can be higher for append blobs compared to 
> normal blobs because of automatic flush. It could be the same as for a normal 
> blob as well.
>  # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
> Fails for FNS accounts only when the following config is present: 
> "fs.azure.account.hns.enabled". Failure is because the test wants to assert 
> that when the driver does not know if the account is HNS enabled or not it 
> makes a server call and fails. But the above config is letting the driver know 
> the account type and skipping the head call. Remove these configs from the 
> test-specific configurations and not from the account settings file.
>  # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS accounts. 
> Failure is because of identity mismatch. OAuth uses the service principal OID 
> as owner of the resources whereas Shared Key uses local system identities. The 
> fix is to set configs that will allow overwrite of the OID to the local 
> identity. This will require a new config to be set by the user that specifies 
> which OID has to be
> su

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838312#comment-17838312
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

steveloughran commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061808283

   merged.
   
   @anujmodi2021 -next time can you keep the commit title/body as the original 
one? I had to go and look it up rather than just let the github UI fill it in 
for me. thanks




> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> The test script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures or when the error message of any failing 
> test becomes very large, the regex used today to filter test results does not 
> work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that will only target 
> one-line test names for reporting them in the aggregated test results.
>  # While running the test suite for different combinations of auth type and 
> account type, we add the combination-specific configs first and then include 
> the account-specific configs in the core-site.xml file. This will override 
> combination-specific configs like auth type if the same config is present in 
> the account-specific config file. To avoid this, we will first include the 
> account-specific configs and then add the combination-specific configs.
> Due to the above bug in the test script, some test failures in ABFS were not 
> getting our attention. This PR also targets resolving them. Following are the 
> tests fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In the case of append blobs 
> we were not closing the active block on outputstream.close(), due to which 
> block.close() was not getting called and assertions around it were failing. 
> Fixed by updating the production code to close the active block on flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in the account settings using 
> the following config: "fs.contract.test.fs.abfs". Tests were failing with NPE 
> when this config was not present. Updated code to skip this test if the 
> required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently, only for HNS enabled accounts. Test wants to assert 
> that client.listPath() does not return more objects than what is configured 
> in maxListResults. The assertion should be that the number of objects returned 
> could be less than expected, as the server might end up returning even fewer 
> due to partition splits along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fail 
> when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): 
> Fail when "fs.azure.test.namespace.enabled" config is missing. Ignore the 
> test if config is missing.
>  # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fail when 
> "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
> "fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
> the number of read operations can be higher for append blobs compared to 
> normal blobs because of automatic flush. It could be the same as for a normal 
> blob as well.
>  # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
> Fails for FNS accounts only when the following config is present: 
> "fs.azure.account.hns.enabled". Failure is because the test wants to assert 
> that when the driver does not know if the account is HNS enabled or not it 
> makes a server call and fails. But the above config is letting the driver know 
> the account type and skipping the head call. Remove these configs from the 
> test-specific configurations and not from the account settings file.
>  # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS accounts. 
> Failure is because of identity mismatch. OAuth uses the service principal OID as 
> owner of the resou

Re: [PR] HADOOP-19129: [Backport to 3.4] [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on PR #6746:
URL: https://github.com/apache/hadoop/pull/6746#issuecomment-2061808283

   merged.
   
   @anujmodi2021 -next time can you keep the commit title/body as the original 
one? I had to go and look it up rather than just let the github UI fill it in 
for me. thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569200611


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/impl/ManifestStoreOperations.java:
##
@@ -97,6 +97,36 @@ public boolean isFile(Path path) throws IOException {
   public abstract boolean delete(Path path, boolean recursive)
   throws IOException;
 
+  /**
+   * Forward to {@code delete(Path, false)}
+   * unless overridden.
+   * 
+   * If it returns without an error: there is no file at
+   * the end of the path.
+   * @param path path
+   * @return outcome
+   * @throws IOException failure.
+   */
+  public boolean deleteFile(Path path)
+  throws IOException {
+return delete(path, false);
+  }
+
+  /**
+   * Acquire the delete capacity then call {@code FileSystem#delete(Path, 
true)}

Review Comment:
   aah, I'd cut that. leaving delete capacity out of this PR as it'd need rate 
limiting in abfs *and* guessing about size/depth of directory. which we could 
actually add in future task manifests, leaving only the homework of aggregating 
it



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-17 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2061812867

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 45s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 30s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/41/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 17s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 265m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/41/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 7818206bf2cf 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c08320c01c9d3439255d7409216a637259190d7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/41/te

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838313#comment-17838313
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2061812867

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 45s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 30s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/41/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 17s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 265m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/41/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 7818206bf2cf 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c08320c01c9d3439255d7409216a637259190d7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 

Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569201237


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/impl/ManifestStoreOperations.java:
##
@@ -97,6 +97,36 @@ public boolean isFile(Path path) throws IOException {
   public abstract boolean delete(Path path, boolean recursive)
   throws IOException;
 
+  /**
+   * Forward to {@code delete(Path, false)}
+   * unless overridden.
+   * 
+   * If it returns without an error: there is no file at
+   * the end of the path.
+   * @param path path
+   * @return outcome
+   * @throws IOException failure.
+   */
+  public boolean deleteFile(Path path)
+  throws IOException {
+return delete(path, false);

Review Comment:
   no it won't



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569203962


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/ManifestCommitterConstants.java:
##
@@ -143,6 +145,20 @@ public final class ManifestCommitterConstants {
*/
   public static final boolean OPT_CLEANUP_PARALLEL_DELETE_DIRS_DEFAULT = true;
 
+  /**
+   * Should parallel cleanup try to delete the base first?
+   * Best for azure as it skips the task attempt deletions unless
+   * the toplevel delete fails.
+   * Value: {@value}.
+   */
+  public static final String OPT_CLEANUP_PARALLEL_DELETE_BASE_FIRST =
+  OPT_PREFIX + "cleanup.parallel.delete.base.first";
+
+  /**
+   * Default value of option {@link #OPT_CLEANUP_PARALLEL_DELETE_BASE_FIRST}:  
{@value}.
+   */
+  public static final boolean OPT_CLEANUP_PARALLEL_DELETE_BASE_FIRST_DEFAULT = 
true;

Review Comment:
   really don't know here. In the docs I try to cover this



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19152. Do not hard code security providers. [hadoop]

2024-04-17 Thread via GitHub


szetszwo commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2061830741

   > ... Actually, I think it was minihdfs which needed it for tests, rather 
than the actual code
   
   Yes, there are many tests requiring `BouncyCastle`.   They were not changed 
in the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569210996


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/AbstractJobOrTaskStage.java:
##
@@ -445,9 +448,29 @@ protected Boolean delete(
   final boolean recursive,
   final String statistic)
   throws IOException {
-return trackDuration(getIOStatistics(), statistic, () -> {
-  return operations.delete(path, recursive);
-});
+if (recursive) {

Review Comment:
   ok, deleteDir will also delete a file. let me highlight that.
   
   I'd done this delete dir/file split to support different capacity requests, 
without that it is a bit over-complex. it does let us collect different 
statistics though, which may be useful



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838323#comment-17838323
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

szetszwo commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2061850531

   @steveloughran , question to you:
   ```xml
   +++ b/hadoop-common-project/hadoop-common/pom.xml
   @@ -375,6 +375,7 @@
        <dependency>
          <groupId>org.bouncycastle</groupId>
          <artifactId>bcprov-jdk18on</artifactId>
   +      <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.kerby</groupId>
   ```
   According to our [Compatibility Java_Classpath 
doc](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath),
 removing a dependency is a compatible change.  The above change effectively 
removes `bcprov-jdk18on` by changing its scope from `compile` to `test`.  Is it 
a compatible change?
   
   Note that users currently using `BouncyCastleProvider` (and if all the 
downstream projects do not have `bcprov-jdk18on` dependency) have to make 
`bcprov-jdk18on` available by themselves with this change.
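   
   In code, a downstream project would then have to register the provider 
itself (a sketch, assuming the project ships its own `bcprov-jdk18on` jar):
   
   ```java
   import java.security.Security;
   import org.bouncycastle.jce.provider.BouncyCastleProvider;
   
   public final class ProviderSetup {
     // With bcprov-jdk18on test-scoped in hadoop-common, code that still needs
     // BouncyCastle must bring the jar and register the provider before any
     // crypto calls that rely on it.
     public static void ensureBouncyCastle() {
       if (Security.getProvider(BouncyCastleProvider.PROVIDER_NAME) == null) {
         Security.addProvider(new BouncyCastleProvider());
       }
     }
   }
   ```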




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19152. Do not hard code security providers. [hadoop]

2024-04-17 Thread via GitHub


szetszwo commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2061850531

   @steveloughran , question to you:
   ```xml
   +++ b/hadoop-common-project/hadoop-common/pom.xml
   @@ -375,6 +375,7 @@
        <dependency>
          <groupId>org.bouncycastle</groupId>
          <artifactId>bcprov-jdk18on</artifactId>
   +      <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.kerby</groupId>
   ```
   According to our [Compatibility Java_Classpath 
doc](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath),
 removing a dependency is a compatible change.  The above change effectively 
removes `bcprov-jdk18on` by changing its scope from `compile` to `test`.  Is it 
a compatible change?
   
   Note that users currently using `BouncyCastleProvider` (and if all the 
downstream projects do not have `bcprov-jdk18on` dependency) have to make 
`bcprov-jdk18on` available by themselves with this change.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838325#comment-17838325
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-

ayushtkn merged PR #6678:
URL: https://github.com/apache/hadoop/pull/6678




> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Assignee: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a fully 
> qualified path, it always gets "Input/output error" (e.g. 
> [ftp://user:password@localhost/pathxxx]); the reason is that the 
> changeWorkingDirectory command underneath is being passed a string with a 
> [file://|file:///] uri prefix which is not understood by the ftp server
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> in our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files") 
> is executed, the workingDirectory of the ftp server is still "/", which is 
> incorrect (not understood by the ftp server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> the solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the 
> [file://|file:///] uri prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> already related issue  as follows 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I create this issue and add related unit test.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569228824


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/AbstractJobOrTaskStage.java:
##
@@ -582,19 +605,46 @@ protected final Path directoryMustExist(
* Save a task manifest or summary. This will be done by
* writing to a temp path and then renaming.
* If the destination path exists: Delete it.
+   * This will retry so that a rename failure from abfs load or IO errors
+   * will not fail the task.
* @param manifestData the manifest/success file
* @param tempPath temp path for the initial save
* @param finalPath final path for rename.
-   * @throws IOException failure to load/parse
+   * @throws IOException failure to rename after retries.
*/
   @SuppressWarnings("unchecked")
   protected final  void save(T manifestData,
   final Path tempPath,
   final Path finalPath) throws IOException {
-LOG.trace("{}: save('{}, {}, {}')", getName(), manifestData, tempPath, 
finalPath);
-trackDurationOfInvocation(getIOStatistics(), OP_SAVE_TASK_MANIFEST, () ->
-operations.save(manifestData, tempPath, true));
-renameFile(tempPath, finalPath);
+boolean success = false;
+int failures = 0;
+while (!success) {
+  try {
+LOG.trace("{}: attempt {} save('{}, {}, {}')",
+getName(), failures, manifestData, tempPath, finalPath);
+
+trackDurationOfInvocation(getIOStatistics(), OP_SAVE_TASK_MANIFEST, () 
->
+operations.save(manifestData, tempPath, true));
+renameFile(tempPath, finalPath);

Review Comment:
   any error raised during rename triggers a fallback of:
   * catch IOE
   * save temp file again
   * delete dest path
   * rename temp path to final path
   
   this is attempted a configurable number of times, with a sleep in between.
   no attempt to be clever about which IOEs are unrecoverable (permissions 
etc), just catch, log, retry
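   
   in pseudo-Java the loop looks roughly like this (a sketch only; saveToTemp, 
deleteFile and renameFile are stand-ins for the real stage methods, and LOG is 
assumed to come from the surrounding stage class):
   
   ```java
   void saveWithRetries(TaskManifest manifestData, Path tempPath, Path finalPath,
       int saveAttempts, long sleepMillis) throws IOException {
     IOException lastException = null;
     for (int attempt = 1; attempt <= saveAttempts; attempt++) {
       try {
         saveToTemp(manifestData, tempPath);  // write manifest to the temp path
         deleteFile(finalPath);               // clear any half-committed destination
         renameFile(tempPath, finalPath);     // promote the manifest into place
         return;                              // success
       } catch (IOException e) {
         lastException = e;                   // no IOE classification: log and retry
         LOG.warn("save attempt {} of {} failed", attempt, saveAttempts, e);
         try {
           Thread.sleep(sleepMillis);
         } catch (InterruptedException ie) {
           Thread.currentThread().interrupt();
           throw lastException;
         }
       }
     }
     throw lastException;
   }
   ```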



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569225918


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/AbstractJobOrTaskStage.java:
##
@@ -582,19 +607,64 @@ protected final Path directoryMustExist(
* Save a task manifest or summary. This will be done by
* writing to a temp path and then renaming.
* If the destination path exists: Delete it.
+   * This will retry so that a rename failure from abfs load or IO errors
+   * will not fail the task.
* @param manifestData the manifest/success file
* @param tempPath temp path for the initial save
* @param finalPath final path for rename.
-   * @throws IOException failure to load/parse
+   * @throws IOException failure to rename after retries.
*/
   @SuppressWarnings("unchecked")
   protected final  void save(T manifestData,
   final Path tempPath,
   final Path finalPath) throws IOException {
-LOG.trace("{}: save('{}, {}, {}')", getName(), manifestData, tempPath, 
finalPath);
-trackDurationOfInvocation(getIOStatistics(), OP_SAVE_TASK_MANIFEST, () ->
-operations.save(manifestData, tempPath, true));
-renameFile(tempPath, finalPath);
+
+int retryCount = 0;
+RetryPolicy retryPolicy = retryUpToMaximumCountWithProportionalSleep(
+getStageConfig().getManifestSaveAttempts(),
+SAVE_SLEEP_INTERVAL,
+TimeUnit.MILLISECONDS);
+boolean success = false;
+while (!success) {
+  try {
+LOG.info("{}: save manifest to {} then rename as {}; retry count={}",
+getName(), tempPath, finalPath, retryCount);
+
+trackDurationOfInvocation(getIOStatistics(), OP_SAVE_TASK_MANIFEST, () 
->
+operations.save(manifestData, tempPath, true));

Review Comment:
   so renameFile() has always deleted the destination because we need to do 
that to cope with failures of a previous/concurrent task attempt. Whoever 
commits last wins.
   
   To make this clearer I'm pulling up more of the code into this method and 
adding comments.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-19130.
---
Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Assignee: shawn
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a fully 
> qualified path, it always gets "Input/output error" (e.g. 
> [ftp://user:password@localhost/pathxxx]); the reason is that the 
> changeWorkingDirectory command underneath is being passed a string with a 
> [file://|file:///] uri prefix which is not understood by the ftp server
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> in our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files") 
> is executed, the workingDirectory of the ftp server is still "/", which is 
> incorrect (not understood by the ftp server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> the solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the 
> [file://|file:///] uri prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> already related issue  as follows 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I create this issue and add related unit test.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-04-17 Thread via GitHub


ayushtkn merged PR #6678:
URL: https://github.com/apache/hadoop/pull/6678


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569233047


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/CleanupJobStage.java:
##
@@ -142,64 +154,93 @@ protected Result executeStage(
 }
 
 Outcome outcome = null;
-IOException exception;
+IOException exception = null;
+boolean baseDirDeleted = false;
 
 
 // to delete.
 LOG.info("{}: Deleting job directory {}", getName(), baseDir);
 
 if (args.deleteTaskAttemptDirsInParallel) {
-  // Attempt to do a parallel delete of task attempt dirs;
-  // don't overreact if a delete fails, but stop trying
-  // to delete the others, and fall back to deleting the
-  // job dir.
-  Path taskSubDir
-  = getStageConfig().getJobAttemptTaskSubDir();
-  try (DurationInfo info = new DurationInfo(LOG,
-  "parallel deletion of task attempts in %s",
-  taskSubDir)) {
-RemoteIterator<FileStatus> dirs =
-RemoteIterators.filteringRemoteIterator(
-listStatusIterator(taskSubDir),
-FileStatus::isDirectory);
-TaskPool.foreach(dirs)
-.executeWith(getIOProcessors())
-.stopOnFailure()
-.suppressExceptions(false)
-.run(this::rmTaskAttemptDir);
-getIOStatistics().aggregate((retrieveIOStatistics(dirs)));
-
-if (getLastDeleteException() != null) {
-  // one of the task attempts failed.
-  throw getLastDeleteException();
+
+  // parallel delete of task attempt dirs.
+
+  if (args.parallelDeleteAttemptBaseDeleteFirst) {
+// attempt to delete the base dir first.
+// This can reduce ABFS delete load but may time out
+// (which the fallback to parallel delete will handle).
+// on GCS it is slow.
+try (DurationInfo info = new DurationInfo(LOG, true,
+"Initial delete of %s", baseDir)) {
+  exception = deleteOneDir(baseDir);
+  if (exception == null) {

Review Comment:
   ooh, so it's going to be quite a long time to fall back.
   I'm going to make the option default to false for now.
   
   > AzureManifestCommitterFactory could probably set this config to 0 before 
FileSystem.get() call happens.
   
   it'll come from the cache, we don't want to set it for everything else, but 
a low MAX_RETRIES_RECURSIVE_DELETE might make sense everywhere. something to 
consider later. 
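
   In condensed form, the fallback ordering under review (a sketch only; the 
class and helper names below are stand-ins, not the stage's real signatures):

import java.io.IOException;

final class CleanupSketch {
  enum Outcome { DELETED, PARALLEL_DELETE }

  // Stand-ins for the stage's real helpers.
  static IOException deleteOneDir(String dir) { return null; }
  static void deleteTaskAttemptDirsInParallel() { }

  static Outcome cleanup(boolean baseDeleteFirst, String baseDir) {
    boolean baseDirDeleted = false;
    if (baseDeleteFirst) {
      // One recursive delete of the base dir; it may retry internally,
      // which is why a failure here can be slow to surface.
      if (deleteOneDir(baseDir) == null) {
        baseDirDeleted = true;             // parallel phase is skipped
      }
    }
    if (!baseDirDeleted) {
      deleteTaskAttemptDirsInParallel();   // per-task-attempt deletes
      deleteOneDir(baseDir);               // then the base dir itself
    }
    return baseDirDeleted ? Outcome.DELETED : Outcome.PARALLEL_DELETE;
  }
}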



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838327#comment-17838327
 ] 

Ayush Saxena commented on HADOOP-19130:
---

Committed to trunk.

Thanx [~zhangjian16] for the contribution & [~hiwangzhihui] for the review!!!

 

Note: Added [~zhangjian16] as Hadoop Common Contributor to assign the ticket.

Welcome to Hadoop!!!

> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Assignee: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a 
> fully qualified path (e.g. 
> [ftp://user:password@localhost/pathxxx]), it always fails with 
> "Input/output error". The reason is that the underlying 
> changeWorkingDirectory command is being passed a string with a 
> [file://|file:///] URI prefix, which is not understood by the FTP server
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> In our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files")
> executes, the working directory on the FTP server is still "/", which is 
> incorrect (the argument is not understood by the FTP server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> The solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the
> [file://|file:///] URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue already exists: 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I created this issue and added a related unit test.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-04-17 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HADOOP-19130:
-

Assignee: shawn

> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Assignee: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png, 
> image-2024-03-28-09-58-19-721.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
>    When using the fs shell to put/rename a file on an FTP server with a 
> fully qualified path (e.g. 
> [ftp://user:password@localhost/pathxxx]), it always fails with 
> "Input/output error". The reason is that the underlying 
> changeWorkingDirectory command is being passed a string with a 
> [file://|file:///] URI prefix, which is not understood by the FTP server
> !image-2024-03-27-09-59-12-381.png|width=948,height=156!
>  
> In our case, after 
> client.changeWorkingDirectory("ftp://mytest:myt...@10.5.xx.xx/files")
> executes, the working directory on the FTP server is still "/", which is 
> incorrect (the argument is not understood by the FTP server)
> !image-2024-03-28-09-58-19-721.png|width=745,height=431!
> The solution should be to pass 
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the
> [file://|file:///] URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue already exists: 
> https://issues.apache.org/jira/browse/HADOOP-8653
> I created this issue and added a related unit test.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18656) ABFS: Support for Pagination in Recursive Directory Delete

2024-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838336#comment-17838336
 ] 

ASF GitHub Bot commented on HADOOP-18656:
-

mukund-thakur commented on PR #6718:
URL: https://github.com/apache/hadoop/pull/6718#issuecomment-2061899177

   there are conflicts here after the test patch has been merged. 




> ABFS: Support for Pagination in Recursive Directory Delete 
> ---
>
> Key: HADOOP-18656
> URL: https://issues.apache.org/jira/browse/HADOOP-18656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.5
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Today, when a recursive delete is issued for a large directory in an ADLS Gen2 
> (HNS) account, the directory deletion happens in O(1), but in the backend ACL 
> checks are done recursively for each object inside that directory, which in 
> the case of a large directory could lead to request timeouts. Pagination is 
> introduced in the Azure Storage backend for these ACL checks.
> More information on how pagination works can be found in the public documentation 
> of the [Azure Delete Path 
> API|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/delete?view=rest-storageservices-datalakestoragegen2-2019-12-12].
> This PR contains changes to support this from the client side. To trigger 
> pagination, the client needs to add a new query parameter "paginated" and set it 
> to true, along with recursive set to true. In return, if the directory is 
> large, the server might return a continuation token to the caller. If the caller 
> gets back a continuation token, it has to call the delete API again with the 
> continuation token, along with recursive and paginated set to true. This is 
> similar to the directory delete of an FNS account.
> Pagination is available only in versions "2023-08-03" onwards.
> The PR also contains functional tests to verify the driver works well with 
> different combinations of the recursive and pagination features for HNS.
> Full E2E testing of pagination requires a large dataset to be created, and hence 
> is not added as part of the driver test suite, but extensive E2E testing has been 
> performed.
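
A hypothetical sketch of the client-side loop described above; the type and 
method names are illustrative stand-ins, not the actual ABFS client API:

{code:java}
interface PaginatedDeleteClient {
  DeleteResponse deletePath(String path, boolean recursive,
      boolean paginated, String continuation);
}

final class DeleteResponse {
  private final String continuationToken; // null once the delete completes
  DeleteResponse(String continuationToken) {
    this.continuationToken = continuationToken;
  }
  String getContinuationToken() {
    return continuationToken;
  }
}

final class PaginatedDelete {
  // Re-issue the delete with recursive=true and paginated=true, echoing
  // back whatever continuation token the server returned, until the
  // server stops returning one.
  static void deleteLargeDir(PaginatedDeleteClient client, String path) {
    String continuation = null;
    do {
      DeleteResponse resp = client.deletePath(path, true, true, continuation);
      continuation = resp.getContinuationToken();
    } while (continuation != null);
  }
}
{code}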



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18656: [Backport to 3.4] [ABFS] Adding Support for Paginated Delete for Large Directories in HNS Account [hadoop]

2024-04-17 Thread via GitHub


mukund-thakur commented on PR #6718:
URL: https://github.com/apache/hadoop/pull/6718#issuecomment-2061899177

   there are conflicts here after the test patch has been merged. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569262867


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/CleanupJobStage.java:
##
@@ -142,64 +154,93 @@ protected Result executeStage(
 }
 
 Outcome outcome = null;
-IOException exception;
+IOException exception = null;
+boolean baseDirDeleted = false;
 
 
 // to delete.
 LOG.info("{}: Deleting job directory {}", getName(), baseDir);
 
 if (args.deleteTaskAttemptDirsInParallel) {
-  // Attempt to do a parallel delete of task attempt dirs;
-  // don't overreact if a delete fails, but stop trying
-  // to delete the others, and fall back to deleting the
-  // job dir.
-  Path taskSubDir
-  = getStageConfig().getJobAttemptTaskSubDir();
-  try (DurationInfo info = new DurationInfo(LOG,
-  "parallel deletion of task attempts in %s",
-  taskSubDir)) {
-RemoteIterator<FileStatus> dirs =
-RemoteIterators.filteringRemoteIterator(
-listStatusIterator(taskSubDir),
-FileStatus::isDirectory);
-TaskPool.foreach(dirs)
-.executeWith(getIOProcessors())
-.stopOnFailure()
-.suppressExceptions(false)
-.run(this::rmTaskAttemptDir);
-getIOStatistics().aggregate((retrieveIOStatistics(dirs)));
-
-if (getLastDeleteException() != null) {
-  // one of the task attempts failed.
-  throw getLastDeleteException();
+
+  // parallel delete of task attempt dirs.
+
+  if (args.parallelDeleteAttemptBaseDeleteFirst) {
+// attempt to delete the base dir first.
+// This can reduce ABFS delete load but may time out
+// (which the fallback to parallel delete will handle).
+// on GCS it is slow.
+try (DurationInfo info = new DurationInfo(LOG, true,
+"Initial delete of %s", baseDir)) {
+  exception = deleteOneDir(baseDir);
+  if (exception == null) {
+// success: record this as the outcome, which
+// will skip the parallel delete.
+outcome = Outcome.DELETED;
+baseDirDeleted = true;
+  } else {
+// failure: log and continue
+LOG.warn("{}: Exception on initial attempt at deleting base dir {}\n"
++ "attempting parallel delete",
+getName(), baseDir, exception);
+  }
+}
+  }
+  if (!baseDirDeleted) {

Review Comment:
   it gets set on L180; will comment 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-17 Thread via GitHub


steveloughran commented on code in PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#discussion_r1569264943


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/CleanupJobStage.java:
##
@@ -142,64 +154,93 @@ protected Result executeStage(
 }
 
 Outcome outcome = null;
-IOException exception;
+IOException exception = null;
+boolean baseDirDeleted = false;
 
 
 // to delete.
 LOG.info("{}: Deleting job directory {}", getName(), baseDir);
 
 if (args.deleteTaskAttemptDirsInParallel) {
-  // Attempt to do a parallel delete of task attempt dirs;
-  // don't overreact if a delete fails, but stop trying
-  // to delete the others, and fall back to deleting the
-  // job dir.
-  Path taskSubDir
-  = getStageConfig().getJobAttemptTaskSubDir();
-  try (DurationInfo info = new DurationInfo(LOG,
-  "parallel deletion of task attempts in %s",
-  taskSubDir)) {
-RemoteIterator<FileStatus> dirs =
-RemoteIterators.filteringRemoteIterator(
-listStatusIterator(taskSubDir),
-FileStatus::isDirectory);
-TaskPool.foreach(dirs)
-.executeWith(getIOProcessors())
-.stopOnFailure()
-.suppressExceptions(false)
-.run(this::rmTaskAttemptDir);
-getIOStatistics().aggregate((retrieveIOStatistics(dirs)));
-
-if (getLastDeleteException() != null) {
-  // one of the task attempts failed.
-  throw getLastDeleteException();
+
+  // parallel delete of task attempt dirs.
+
+  if (args.parallelDeleteAttemptBaseDeleteFirst) {
+// attempt to delete the base dir first.
+// This can reduce ABFS delete load but may time out
+// (which the fallback to parallel delete will handle).
+// on GCS it is slow.
+try (DurationInfo info = new DurationInfo(LOG, true,
+"Initial delete of %s", baseDir)) {
+  exception = deleteOneDir(baseDir);
+  if (exception == null) {
+// success: record this as the outcome, which
+// will skip the parallel delete.
+outcome = Outcome.DELETED;
+baseDirDeleted = true;
+  } else {
+// failure: log and continue
+LOG.warn("{}: Exception on initial attempt at deleting base dir {}\n"
++ "attempting parallel delete",
+getName(), baseDir, exception);
+  }
+}
+  }
+  if (!baseDirDeleted) {

Review Comment:
   note that the next stage will fail immediately on the list() call, which is 
caught as a FNFE and treated as success
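
   A minimal sketch of that pattern (stand-in class and method names, not the 
committer's actual code):

import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class CleanupProbe {
  // If the directory is already gone, list() throws FileNotFoundException,
  // which cleanup interprets as "nothing left to delete", i.e. success.
  static boolean alreadyCleanedUp(FileSystem fs, Path dir) throws IOException {
    try {
      fs.listStatusIterator(dir);
      return false;  // dir still exists: cleanup has work to do
    } catch (FileNotFoundException e) {
      return true;   // dir missing: treat as successful cleanup
    }
  }
}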



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


