[jira] [Commented] (HADOOP-18582) No need to clean tmp files in distcp direct mode

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689585#comment-17689585
 ] 

ASF GitHub Bot commented on HADOOP-18582:
-----------------------------------------

ayushtkn opened a new pull request, #5409:
URL: https://github.com/apache/hadoop/pull/5409

   ### Description of PR
   
   Don't skip cleaning of temp files when append mode is in use
   
   ### How was this patch tested?
   
   UT
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> No need to clean tmp files in distcp direct mode
> -------------------------------------------------
>
> Key: HADOOP-18582
> URL: https://issues.apache.org/jira/browse/HADOOP-18582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.4
>Reporter: 1kang
>Assignee: 1kang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> It is not necessary to do `cleanupTempFiles` while distcp commits a job in 
> direct mode, because there are no temp files in direct mode.
> This cleanup operation increases the task execution time because it lists 
> the files in the target path. When the number of files in the target path 
> is very large, this operation is very slow.
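
For context, a minimal sketch of the commit-time guard this issue and the
addendum PR above describe -- all names here are illustrative, not the actual
DistCp CopyCommitter code:

    // Illustrative sketch only (hypothetical names): direct mode writes
    // straight to the target, so nothing is staged; append mode still
    // creates temp files and must keep the cleanup.
    class DirectModeCommitSketch {
        private final boolean directWrite; // -direct option
        private final boolean append;      // -append option

        DirectModeCommitSketch(boolean directWrite, boolean append) {
            this.directWrite = directWrite;
            this.append = append;
        }

        void commitJob() {
            if (directWrite && !append) {
                return; // skip listing a possibly huge target path
            }
            cleanupTempFiles();
        }

        void cleanupTempFiles() {
            // would list the target path and delete leftover temp files
        }
    }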






[GitHub] [hadoop] ayushtkn opened a new pull request, #5409: HADOOP-18582. Addendum: Skip unnecessary cleanup logic in DistCp.

2023-02-15 Thread via GitHub


ayushtkn opened a new pull request, #5409:
URL: https://github.com/apache/hadoop/pull/5409

   ### Description of PR
   
   Don't skip cleaning of temp files when append mode is in use
   
   ### How was this patch tested?
   
   UT
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#issuecomment-1432641907

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 37s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/6/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 130 unchanged - 0 fixed = 132 total (was 
130)  |
   | +1 :green_heart: |  mvnsite  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 205m 59s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 452m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5397 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux c07fc2a67daf 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   |

[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689581#comment-17689581
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-----------------------------------------

hadoop-yetus commented on PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#issuecomment-1432636826

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4215 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1bf9a9f6b9a6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b28373311832847fb73deb1b47e4dea79748b812 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/testReport/ |
   | Max. process+thread count | 1641 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Enhance WritableName to be able to return aliases for class

[GitHub] [hadoop] hadoop-yetus commented on pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#issuecomment-1432636826

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4215 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1bf9a9f6b9a6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b28373311832847fb73deb1b47e4dea79748b812 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/testReport/ |
   | Max. process+thread count | 1641 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4215/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-18635) Expose distcp counters to user via config parameter and distcp constants

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689579#comment-17689579
 ] 

ASF GitHub Bot commented on HADOOP-18635:
-----------------------------------------

hadoop-yetus commented on PR #5402:
URL: https://github.com/apache/hadoop/pull/5402#issuecomment-1432634125

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 5 new + 29 unchanged - 0 
fixed = 34 total (was 29)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  15m 21s | 
[/patch-unit-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-distcp in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestExternalCall |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6e382f48f842 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2bce2f6885de6486f50e21fab63479fb77308225 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/jo

[GitHub] [hadoop] hadoop-yetus commented on pull request #5402: HADOOP-18635 : Expose distcp counters to user via new DistCpConstants "CONF_LABEL_DI…

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5402:
URL: https://github.com/apache/hadoop/pull/5402#issuecomment-1432634125

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 5 new + 29 unchanged - 0 
fixed = 34 total (was 29)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  15m 21s | 
[/patch-unit-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-distcp in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestExternalCall |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6e382f48f842 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2bce2f6885de6486f50e21fab63479fb77308225 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/testReport/ |
   | Max. process+thread count | 566 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5402/2/console |

[GitHub] [hadoop] hadoop-yetus commented on pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#issuecomment-1432623582

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 36s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/5/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 131 unchanged - 0 fixed = 133 total (was 
131)  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 22s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 204m 26s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 450m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.namenode.TestAuditLogger |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5397 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux a01dfa687610 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3c9d4b3e3e3bcc0d61f2f1a21d3d2bb8dc59

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689576#comment-17689576
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108083553


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {

Review Comment:
   Are you referring to a check on the url.getQuery() or the timeout parameter 
itself? 
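
   For reference, the kind of guard under discussion might look like this
   sketch -- a hypothetical helper, not the actual patch:

       // Sketch only: guard against a URL with no query string before
       // searching it for the "timeout" parameter.
       static int findTimeoutParam(java.net.URL url) {
           String query = url.getQuery();   // null when the URL has no query part
           if (query == null) {
               return -1;                   // caller keeps its default timeouts
           }
           return query.indexOf("timeout"); // -1 when no timeout parameter exists
       }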





> ABFS: Customize and optimize timeouts made based on each separate request
> --------------------------------------------------------------------------
>
> Key: HADOOP-18632
> URL: 

[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108083553


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {

Review Comment:
   Are you referring to a check on the url.getQuery() or the timeout parameter 
itself? 




[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689573#comment-17689573
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108081097


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");

Review Comment:
   Added the change.





> ABFS: Customize and optimize timeouts made based on each separate request
> --------------------------------------------------------------------------
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  

[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108081097


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");

Review Comment:
   Added the change.






[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689567#comment-17689567
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108079007


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;

Review Comment:
   Setting the request timeout (and all the other timeouts) to -1 serves as a 
sentinel value. Although the request timeout value is never checked, the other 
timeout values are (in the getReadTimeout and getConnTimeout calls), so the 
request timeout is set to -1 for consistency with the other initializations. 
Would you suggest changing this in any way? 
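
   A short illustration of that sentinel convention, assuming an optimizer
   instance and caller-supplied defaults (names hypothetical):

       // -1 means "no optimized value was computed"; the caller's default wins.
       int readTimeout = optimizer.getReadTimeout(DEFAULT_READ_TIMEOUT);
       int connTimeout = optimizer.getConnTimeout(DEFAULT_CONNECT_TIMEOUT);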





> ABFS: Customize and optimize timeouts made based on each separate request
> --------------------------------------------------------------------------

[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108079007


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##########
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;

Review Comment:
   Setting the request timeout (and the other timeouts) to -1 serves as a sentinel ("unset") value. Although the request timeout itself is never compared against -1, the read and connection timeouts are (in the getReadTimeout and getConnTimeout calls), so the request timeout is initialized to -1 for consistency with the other timeout initializations. Would you suggest changing this in any way? 
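
   For reference, a minimal sketch of one alternative, not something in the patch: give the sentinel a name so that "-1 means unset" is explicit at every initialization and comparison site. The constant name TIMEOUT_UNSET is invented here for illustration.

       // Hypothetical naming of the -1 sentinel; behavior is unchanged.
       private static final int TIMEOUT_UNSET = -1;

       private int readTimeout = TIMEOUT_UNSET;
       private int connTimeout = TIMEOUT_UNSET;

       public int getConnTimeout(final int defaultTimeout) {
           return connTimeout == TIMEOUT_UNSET ? defaultTimeout : connTimeout;
       }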



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubsc


[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108070379


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {
+            // no value of timeout exists in the URL
+            // no optimization is needed for this particular request as well
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            shouldOptimizeTimeout = false;
+            return;
+        }
+
+        String timeout = "";
+        if (opType == AbfsRestOperationType.CreateFileSystem) {
+            timeout = abfsConfiguration.get(ConfigurationKeys.AZURE_CREATE_FS_REQUEST_TIMEOUT);
+        }
+        else if (opType == AbfsRestOperationType.GetFileSystemProperties)
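
   (The quoted hunk ends here. For context, a rough sketch of what updateUrl() might look like, inferred only from the URIBuilder import above; the real implementation is not shown in this message, and the literal "timeout" stands in for whatever HttpQueryParams constant the patch actually uses.)

       private void updateUrl() {
           try {
               // rebuild the request URL with the optimized "timeout" query parameter
               URIBuilder builder = new URIBuilder(url.toURI());
               builder.setParameter("timeout", String.valueOf(requestTimeout));
               url = builder.build().toURL();
           } catch (URISyntaxException | MalformedURLException e) {
               // keep the original URL if rebuilding fails
           }
       }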


[GitHub] [hadoop] hadoop-yetus commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1432593453

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  46m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  30m 38s |  |  trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  23m 45s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 34s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  29m 18s |  |  the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  29m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 12s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  25m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  8s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 41s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 51s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 286m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/17/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux ee67f21a53df 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 98c89bf744484b8e0eb14ff9f9250c686a02973a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/17/testReport/ |
   | Max. process+thread count | 2527 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/17/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
Th

[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

2023-02-15 Thread via GitHub


hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432581037

   > Thanks for involving me here. It is an interesting issue. I am confused about some points of the description.
   > 
   > > dn3 is writing the blk_12345_002, but dn2 is blocked by the recoverClose method and does not send an ack to the client.
   > 
   > Is this another injected fault, or is it part of this write flow?
   > 
   > > dn3 writes blk_12345_003 successfully.
   > > dn3 writes blk_12345_002 successfully and notifyNamenodeReceivedBlock.
   > 
   > Here dn3 writes the same block replica twice; is that expected?
   > 
   > Sorry, I didn't dig deeply into this logic; I will trace it for a while. @hfutatzhanghb Thanks again for your report and for offering the solution.
   
   Hi @Hexiaoqiao, thanks for your reply. 
   For question 1: dn2 is blocked in recoverClose() because of the datasetWriteLock acquisition in branch-3.3.2.
   For question 2: yes, dn3 writes the same block replica twice, but the two replicas have different generation stamps. When blk_12345_003 and blk_12345_002 are written within the same IBR interval, IncrementalBlockReportManager#addRDBI removes the report of blk_12345_003.
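
   To make the failure mode concrete, a hypothetical sketch of the guard being discussed; the map type and lookup are illustrative assumptions, while addRDBI, ReceivedDeletedBlockInfo, and generation stamps come from the report:

       // Before letting a report replace the pending entry for the same block ID,
       // compare generation stamps so an older replica (blk_12345_002) cannot
       // evict the pending report of a newer one (blk_12345_003).
       void addRDBI(ReceivedDeletedBlockInfo rdbi,
                    Map<Long, ReceivedDeletedBlockInfo> pending) {
           long blockId = rdbi.getBlock().getBlockId();
           ReceivedDeletedBlockInfo existing = pending.get(blockId);
           if (existing != null
               && existing.getBlock().getGenerationStamp()
                  > rdbi.getBlock().getGenerationStamp()) {
               return; // keep the newer-generation report
           }
           pending.put(blockId, rdbi);
       }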


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108049244


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {
+            // no value of timeout exists in the URL
+            // no optimization is needed for this particular request as well
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            shouldOptimizeTimeout = false;
+            return;
+        }
+
+        String timeout = "";
+        if (opType == AbfsRestOperationType.CreateFileSystem) {
+            timeout = abfsConfiguration.get(ConfigurationKeys.AZURE_CREATE_FS_REQUEST_TIMEOUT);
+        }
+        else if (opType == AbfsRestOperationType.GetFileSystemProperties)


[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689547#comment-17689547
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108048351


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }

Review Comment:
   Are you suggesting moving just the body of the if block into the else block above, or moving the if check together with its body into that else block? 
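
   As a side note, a minimal simplification sketch: Boolean.parseBoolean(null) already returns false, so the null check and the else branch could collapse into a single assignment with identical behavior:

       this.shouldOptimizeTimeout =
           Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));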





> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> In the present-day ABFS driver, all API requests use the same default
> timeout values. This is sub-optimal in scenarios where a request fails
> because it hit a particularly busy node and would benefit from simply
> retrying more quickly.
> To address this, the change chooses customized timeouts based on which API
> call is being made. Further, the timeouts start at smaller, optimized
> values and increase by an incremental factor on each subsequent retry, to
> ensure quicker retries and eventual success.
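
(As an illustration of the growth rule described above; the field names timeoutIncRate and maxReqTimeout appear in the patch, while the exact formula is an assumption:

    // applied on each retry until the cap is reached
    int nextTimeout = Math.min(currentTimeout * timeoutIncRate, maxReqTimeout);

)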



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108048702


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {
+            // no value of timeout exists in the URL
+            // no optimization is needed for this particular request as well
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            shouldOptimizeTimeout = false;
+            return;
+        }
+
+        String timeout = "";
+        if (opType == AbfsRestOperationType.CreateFileSystem) {

Review Comment:
   We could try having an enum (or a map keyed by the enum) that ties each AbfsRestOperationType to its corresponding ConfigurationKey.
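
   For instance, a minimal sketch of that suggestion; the EnumMap, its population, and the java.util imports it needs are illustrative, while AbfsRestOperationType and the ConfigurationKeys constants come from the patch:

       private static final Map<AbfsRestOperationType, String> TIMEOUT_KEYS =
           new EnumMap<>(AbfsRestOperationType.class);
       static {
           TIMEOUT_KEYS.put(AbfsRestOperationType.CreateFileSystem,
               ConfigurationKeys.AZURE_CREATE_FS_REQUEST_TIMEOUT);
           // ... one entry per operation type that has a dedicated timeout key
       }

       // initTimeouts() could then do a single lookup instead of an if/else-if chain:
       String timeout = abfsConfiguration.get(TIMEOUT_KEYS.get(opType));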



-- 
This is an automated message from the Apache Git Service.
To respo



[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108037646


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout =
+                    Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }
+        } else {
+            this.shouldOptimizeTimeout = false;
+        }
+    }
+
+    public void updateRetryTimeout(int retryCount) {
+        if (!this.shouldOptimizeTimeout) {
+            return;
+        }
+
+        // update all timeout values
+        updateTimeouts(retryCount);
+        updateUrl();
+    }
+
+    public URL getUrl() {
+        return url;
+    }
+
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+    public int getRequestTimeout() { return requestTimeout; }
+
+    public int getReadTimeout() {
+        return readTimeout;
+    }
+
+    public int getReadTimeout(final int defaultTimeout) {
+        if (readTimeout != -1 && shouldOptimizeTimeout) {
+            return readTimeout;
+        }
+        return defaultTimeout;
+    }
+
+    public int getConnTimeout() {
+        return connTimeout;
+    }
+
+    public int getConnTimeout(final int defaultTimeout) {
+        if (connTimeout == -1) {
+            return defaultTimeout;
+        }
+        return connTimeout;
+    }
+
+    private void initTimeouts() {
+        if (!shouldOptimizeTimeout) {
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            return;
+        }
+
+        String query = url.getQuery();
+        int timeoutPos = query.indexOf("timeout");
+        if (timeoutPos < 0) {
+            // no timeout value exists in the URL, so no optimization
+            // is needed for this particular request either
+            requestTimeout = -1;
+            readTimeout = -1;
+            connTimeout = -1;
+            shouldOptimizeTimeout = false;
+            return;
+        }
+
+        String timeout = "";
+        if (opType == AbfsRestOperationType.CreateFileSystem) {
+            timeout = abfsConfiguration.get(ConfigurationKeys.AZURE_CREATE_FS_REQUEST_TIMEOUT);
+        } else if (opType == AbfsRestOperationType.GetFileSystemProperti
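
The message is truncated above; for orientation, a minimal usage sketch written only from the quoted constructor and accessors. requestUrl, retryPolicy, abfsConfiguration, and retryCount are hypothetical locals, DEFAULT_TIMEOUT is assumed to be the int imported in the quoted file, and the real call site in AbfsRestOperation may differ:

    // Hypothetical wiring of TimeoutOptimizer for a single ABFS REST request.
    TimeoutOptimizer optimizer = new TimeoutOptimizer(
        requestUrl,                              // URL of the REST call
        AbfsRestOperationType.CreateFileSystem,  // operation type picks the starting timeout
        retryPolicy,                             // ExponentialRetryPolicy instance
        abfsConfiguration);                      // source of the optimization configs

    // Effective timeouts; the one-arg getters fall back to the supplied
    // default when no optimized value applies.
    int readTimeout = optimizer.getReadTimeout(DEFAULT_TIMEOUT);
    int connTimeout = optimizer.getConnTimeout(DEFAULT_TIMEOUT);

    // On a retry, timeouts escalate and the URL's timeout query parameter is updated.
    optimizer.updateRetryTimeout(retryCount);
    URL urlForRetry = optimizer.getUrl();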


[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108037262


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
[quoted TimeoutOptimizer.java excerpt trimmed; identical to the excerpt quoted in full above]

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689537#comment-17689537
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108035699


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
[quoted excerpt trimmed; identical to the excerpt quoted in full above, ending at the line the comment refers to:]
+    public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }

Review Comment:
   Line break.





> ABFS: Customize and optimize timeouts made based on each separate request
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-18632
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18632
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Sree Bhattacharyya
>            Assignee: Sree Bhattacharyya
>            Priority: Minor
>              Labels: pull-request-available
>
> In present-day ABFS driver functioning, all API request calls use the same
> default timeout values. This is sub-optimal in scenarios where a request
> fails because it hit a particularly busy node and would benefit from simply
> retrying sooner.
> To address this, the change chooses customized timeouts based on which API
> call is being made. Further, starting from smaller, optimized timeout
> values, the timeouts increase by an incremental factor on subsequent
> retries, to ensure quicker retries and eventual success.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
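
The "incremental factor" described above can be made concrete with a small, self-contained sketch; the base value, rate, and cap below are illustrative assumptions, not values taken from the patch:

    // Illustrative escalation of a request timeout across retries: grow the
    // timeout by a multiplicative rate on each retry, but never past a cap.
    static int timeoutForRetry(int baseTimeoutSec, int incRate, int maxTimeoutSec, int retryCount) {
        long t = baseTimeoutSec;
        for (int i = 0; i < retryCount; i++) {
            t = Math.min(t * incRate, maxTimeoutSec);
        }
        return (int) t;
    }

    // e.g. base 5s, rate 2, cap 90s: successive attempts see
    // 5, 10, 20, 40, 80, 90, 90 ... seconds.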






[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108034619


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
[quoted excerpt trimmed; identical to the excerpt quoted in full above, ending at the lines the comment refers to:]
+                this.maxReqTimeout =
+                    Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));

Review Comment:
   should we add a null check here as well, or should we have default values for this, since we are taking a dependency on some config?



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
[second quoted excerpt trimmed; identical to the opening of the excerpt quoted in full above]
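
A minimal sketch of what the reviewer's suggestion could look like, assuming a hypothetical default constant; DEFAULT_MAX_REQ_TIMEOUT is not from the patch, and only abfsConfiguration.get and the config key are quoted above:

    // Null-safe variant of the quoted parse: fall back to a default instead of
    // letting Integer.parseInt throw a NumberFormatException on a missing config.
    String rawMaxReqTimeout = abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT);
    this.maxReqTimeout = (rawMaxReqTimeout == null)
        ? DEFAULT_MAX_REQ_TIMEOUT   // assumed default, e.g. 90 seconds
        : Integer.parseInt(rawMaxReqTimeout);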

[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108030121


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -117,9 +125,10 @@ String getSasToken() {
   AbfsRestOperation(final AbfsRestOperationType operationType,
       final AbfsClient client,
       final String method,
-      final URL url,
+      URL url,

Review Comment:
   URL can be made final in TimeoutOptimizer as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org












[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108031419


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
[quoted excerpt trimmed; identical to the excerpt quoted in full above, ending at the constructor the comment refers to:]
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {

Review Comment:
   Add javadoc for the class and comments.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
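
A sketch of the kind of class-level Javadoc the reviewer is asking for, inferred from the quoted code alone and therefore only an approximation of the author's intent:

    /**
     * Computes per-request connection, read, and server-side request timeouts
     * for ABFS REST operations. When enabled via configuration, the starting
     * timeouts depend on the operation type, and each retry escalates them by
     * a configured rate up to a configured maximum, rewriting the request
     * URL's timeout query parameter accordingly.
     */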









[GitHub] [hadoop] hadoop-yetus commented on pull request #5312: YARN-11375. [Federation] Support refreshAdminAcls、refreshServiceAcls API's for Federation.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5312:
URL: https://github.com/apache/hadoop/pull/5312#issuecomment-1432517990

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 23s |  |  trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 55s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  6s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 57s | [/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/11/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) |  hadoop-yarn-server-resourcemanager in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   3m 13s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 12s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 32s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 44s |  |  the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |   9m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   9m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  3s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/11/artifact/out/blanks-eol.txt) |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  |  hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 7 unchanged - 9 fixed = 7 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   3m 36s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 53s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/11/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) |  hadoop-yarn-server-resourcemanager in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 31s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 43s |  |  hadoop-yarn-common in the patch passed.  |
   | +1 :green_heart: |  unit  |  98m 40s |  |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |




[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108012054


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -555,6 +560,7 @@ public static class AbfsHttpOperationWithFixedResult extends AbfsHttpOperation {
     public AbfsHttpOperationWithFixedResult(final URL url,
         final String method,
         final int httpStatus) {
+

Review Comment:
   Remove extra line.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689516#comment-17689516
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108011727


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -276,14 +280,15 @@ public AbfsHttpOperation(final URL url, final String method, final List
> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> In the present-day ABFS driver, all API requests use the same default
> timeout values. This is sub-optimal in scenarios where a request fails
> because it hit a particularly busy node and would benefit from simply
> retrying more quickly.
> To address this, the change chooses customized timeouts based on which API
> call is being made. Further, starting from smaller, optimized timeout
> values, the timeouts grow by an incremental factor on each subsequent
> retry, to ensure quicker retries and eventual success.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


anmolanmol1234 commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1108011727


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -276,14 +280,15 @@ public AbfsHttpOperation(final URL url, final String 
method, final List

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689513#comment-17689513
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

pranavsaxena-microsoft commented on PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#issuecomment-1432508287

   Please add the test class in 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/pom.xml#L601-L608
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/pom.xml#L644-L652,
 otherwise the runTest script runs will break.




> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> In the present-day ABFS driver, all API requests use the same default
> timeout values. This is sub-optimal in scenarios where a request fails
> because it hit a particularly busy node and would benefit from simply
> retrying more quickly.
> To address this, the change chooses customized timeouts based on which API
> call is being made. Further, starting from smaller, optimized timeout
> values, the timeouts grow by an incremental factor on each subsequent
> retry, to ensure quicker retries and eventual success.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


pranavsaxena-microsoft commented on PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#issuecomment-1432508287

   Please add the test class in 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/pom.xml#L601-L608
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/pom.xml#L644-L652,
 otherwise the runTest script runs will break.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689510#comment-17689510
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

pranavsaxena-microsoft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1107015958


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }

Review Comment:
   Let's move this inside the else block above, since in the if branch this
flag is always false.
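
   One way to read that suggestion is sketched below. This is only an
illustration of the proposed restructuring of the constructor body shown in
the diff above, not the merged code:

{code:java}
// Sketch of the reviewer's suggestion: read the config key once and keep all
// optimize-timeout work in the else branch, since the null branch always
// leaves shouldOptimizeTimeout false.
String optimize = abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS);
if (optimize == null) {
    this.shouldOptimizeTimeout = false;
} else {
    this.shouldOptimizeTimeout = Boolean.parseBoolean(optimize);
    if (this.shouldOptimizeTimeout) {
        this.maxReqTimeout = Integer.parseInt(
            abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
        this.timeoutIncRate = Integer.parseInt(
            abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
        initTimeouts();
        updateUrl();
    }
}
{code}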



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType 

[GitHub] [hadoop] pranavsaxena-microsoft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-15 Thread via GitHub


pranavsaxena-microsoft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1107015958


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeTimeout;
+
+    public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+            ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+        this.url = url;
+        this.opType = opType;
+        if (opType != null) {
+            this.retryPolicy = retryPolicy;
+            this.abfsConfiguration = abfsConfiguration;
+            if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+                this.shouldOptimizeTimeout = false;
+            } else {
+                this.shouldOptimizeTimeout = Boolean.parseBoolean(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+            }
+            if (this.shouldOptimizeTimeout) {
+                this.maxReqTimeout = Integer.parseInt(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+                this.timeoutIncRate = Integer.parseInt(
+                    abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+                initTimeouts();
+                updateUrl();
+            }

Review Comment:
   Let's move this inside the else block above, since in the if branch this
flag is always false.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+    AbfsConfiguration abfsConfiguration;
+    private URL url;
+    private AbfsRestOperationType opType;
+    private ExponentialRetryPolicy retryPolicy;
+    private int requestTimeout;
+    private int readTimeout = -1;
+    private int connTimeout = -1;
+    private int maxReqTimeout;
+    private int timeoutIncRate;
+    private boolean shouldOptimizeT

[GitHub] [hadoop] hadoop-yetus commented on pull request #5335: YARN-11426. Improve YARN NodeLabel Memory Display.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5335:
URL: https://github.com/apache/hadoop/pull/5335#issuecomment-1432505137

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m  1s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/8/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 56s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/8/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 11s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 100m  2s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 268m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5335 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 7521f5646ee8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revi

[jira] [Commented] (HADOOP-18633) fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689506#comment-17689506
 ] 

ASF GitHub Bot commented on HADOOP-18633:
-

mehakmeet merged PR #5401:
URL: https://github.com/apache/hadoop/pull/5401




> fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip 
> --
>
> Key: HADOOP-18633
> URL: https://issues.apache.org/jira/browse/HADOOP-18633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>
> In the newly introduced test testDistCpUpdateCheckFileSkip, on the first 
> pass of "distcp -update" the target file should not be present, so that the 
> copy takes place and creates the target file. 
> Currently, we create both the source and target files with the same block 
> size from the start, which can lead to flakiness: a race condition can make 
> the modification time of the target file greater than or equal to the 
> source's, so the file is not copied at all. This shows up more often in 
> TestLocalContractDistCp because no remote calls are needed to create the 
> target.
> {code:java}
> java.lang.AssertionError: Mismatch in COPY counter value expected:<1> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at 
> org.apache.hadoop.tools.contract.AbstractContractDistCpTest.verifySkipAndCopyCounter(AbstractContractDistCpTest.java:1000)
>   at 
> org.apache.hadoop.tools.contract.AbstractContractDistCpTest.testDistCpUpdateCheckFileSkip(AbstractContractDistCpTest.java:919)
>  {code}
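
The race is easy to reproduce outside Hadoop. The sketch below is a
standalone illustration (not the Hadoop test itself) of why an mtime-based
"-update" check can skip the copy when source and target are created back to
back:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Standalone illustration of the race described above: a target created
// immediately after the source usually gets a modification time >= the
// source's, so a check that skips files with newer-or-equal mtimes would
// wrongly conclude there is nothing to copy.
public class ModTimeRaceDemo {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".dat");
        Path dst = Files.createTempFile("dst", ".dat");
        long srcTime = Files.getLastModifiedTime(src).toMillis();
        long dstTime = Files.getLastModifiedTime(dst).toMillis();
        // With millisecond granularity the two timestamps usually collide.
        System.out.println("source mtime = " + srcTime + ", target mtime = " + dstTime);
        System.out.println("mtime check would skip the copy: " + (dstTime >= srcTime));
    }
}
{code}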



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #5408: HDFS-16898. Remove write lock for processCommandFromActor of DataNode to reduce impact on heartbeat.

2023-02-15 Thread via GitHub


Hexiaoqiao commented on PR #5408:
URL: https://github.com/apache/hadoop/pull/5408#issuecomment-1432502138

   Updated the title; let's wait and see what Yetus says.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet merged pull request #5401: HADOOP-18633. fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip

2023-02-15 Thread via GitHub


mehakmeet merged PR #5401:
URL: https://github.com/apache/hadoop/pull/5401


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

2023-02-15 Thread via GitHub


Hexiaoqiao commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432500654

   addendum:
   
   > Requires a UT which can reproduce the said issue.
   
   What Ayushtkn means here is that we should add new unit tests (test source 
code, such as TestClientProtocolForPipelineRecovery from HDFS-16146 mentioned 
above). Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

2023-02-15 Thread via GitHub


Hexiaoqiao commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432498891

   Thanks for involving me here. It is an interesting issue, but I am confused 
about some points of the description.
   
   > dn3 is writing the blk_12345_002, but dn2 is blocked by the recoverClose 
method and does not send an ack to the client.
   
   Is this another injected fault, or is it related to this write flow?
   
   > dn3 writes blk_12345_003 successfully.
   > dn3 writes blk_12345_002 successfully and notifyNamenodeReceivedBlock.
   
   Here dn3 writes the same block replica twice; is that expected?
   
   Sorry, I haven't dug deeply into this logic yet; I will trace it for a 
while.
   @hfutatzhanghb Thanks again for your report and for offering the solution. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5394: YARN-5604. [Federation] Add versioning for FederationStateStore.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5394:
URL: https://github.com/apache/hadoop/pull/5394#issuecomment-1432485473

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 48s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/2/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 40s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |  98m 43s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 235m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5394 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 678513de4457 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #5302: YARN-11221. [Federation] Add replaceLabelsOnNodes, replaceLabelsOnNode REST APIs for Router.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5302:
URL: https://github.com/apache/hadoop/pull/5302#issuecomment-1432483009

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 55s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/18/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  23m 26s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 42s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/18/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  98m 36s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 31s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 418329d3ea49 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5288: YARN-11394. Fix hadoop-yarn-server-resourcemanager module Java Doc Errors.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5288:
URL: https://github.com/apache/hadoop/pull/5288#issuecomment-1432466460

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  4s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5288/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 703 unchanged - 51 fixed = 703 total (was 754)  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04
 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 1 
unchanged - 100 fixed = 1 total (was 101)  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 1 
unchanged - 342 fixed = 1 total (was 343)  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  98m 32s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5288/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5288 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ef8d4d4a4aed 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b6f1ea9b4a49b5ec8fe6487e9ce911f9b66d8a4 |
   | Default Java | Private Build-1.8.0_352-8u352-ga

[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689488#comment-17689488
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-

bbeaudreault commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107974086


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   @ayushtkn @omalley 

> Enhance WritableName to be able to return aliases for classes that use 
> serializers
> --
>
> Key: HADOOP-18215
> URL: https://issues.apache.org/jira/browse/HADOOP-18215
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> WritableName allows users to shim in aliases for writables, for the case 
> where a SequenceFile was written with a Writable class that has since been 
> renamed or moved to another package. However, this requires that the 
> aliased class extend Writable. 
> Separately, it is possible to configure jobs with keys and values that 
> don't actually extend Writable. Instead, they are meant to be 
> serialized/deserialized using the serialization classes defined in the 
> {{io.serializations}} config.
> Unfortunately, the current implementation does not support these key/value 
> classes. All we need to do to support this is remove the 
> {{.asSubclass(Writable.class)}} call, as is already the case for the default.
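
A minimal sketch of how the alias would be used with the new overload this PR 
adds. NewRecord and "com.example.OldRecord" are hypothetical names standing in 
for a renamed key/value class handled by a custom serializer:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.WritableName;

public class WritableNameAliasExample {
    // Hypothetical replacement class; note it does not extend Writable and
    // is assumed to be handled by a serializer from io.serializations.
    public static class NewRecord { }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Map the legacy class name recorded in old SequenceFiles to the
        // renamed class.
        WritableName.addName(NewRecord.class, "com.example.OldRecord");
        // Resolve via the new overload, with requireWritable = false so the
        // Writable subclass check from the old code path is skipped.
        Class<?> resolved = WritableName.getClass("com.example.OldRecord", conf, false);
        System.out.println(resolved.getName()); // prints the NewRecord class name
    }
}
{code}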



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bbeaudreault commented on a diff in pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread via GitHub


bbeaudreault commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107974086


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   @ayushtkn @omalley -- ok we're back to the original change here. Please give 
it another look and let me know if you'd like me to make any other changes.
   
   Thanks again



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb opened a new pull request, #5408: HDFS-16898. Remove write lock for processCommandFromActor of DataNode to reduce impact on heartbeat

2023-02-15 Thread via GitHub


hfutatzhanghb opened a new pull request, #5408:
URL: https://github.com/apache/hadoop/pull/5408

   https://github.com/apache/hadoop/pull/5330


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb commented on pull request #5330: HDFS-16898. Remove write lock for processCommandFromActor of DataNode to reduce impact on heartbeat

2023-02-15 Thread via GitHub


hfutatzhanghb commented on PR #5330:
URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1432454503

   > @hfutatzhanghb This PR could not be cherry-picked to branch-3.3 smoothly. 
Would you mind submitting another PR for branch-3.3?
   
   @Hexiaoqiao, done; please have a look. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689483#comment-17689483
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-

ayushtkn commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107967867


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   Go ahead





> Enhance WritableName to be able to return aliases for classes that use 
> serializers
> --
>
> Key: HADOOP-18215
> URL: https://issues.apache.org/jira/browse/HADOOP-18215
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> WritableName allows users to shim in aliases for writables, for the case 
> where a SequenceFile was written with a Writable class that has since been 
> renamed or moved to another package. However, this requires that the 
> aliased class extend Writable. 
> Separately, it is possible to configure jobs with keys and values that 
> don't actually extend Writable. Instead, they are meant to be 
> serialized/deserialized using the serialization classes defined in the 
> {{io.serializations}} config.
> Unfortunately, the current implementation does not support these key/value 
> classes. All we need to do to support this is remove the 
> {{.asSubclass(Writable.class)}} call, as is already the case for the default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread via GitHub


ayushtkn commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107967867


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   Go ahead



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5396: HDFS-16918. Optionally shut down datanode if it does not stay connected to active namenode

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5396:
URL: https://github.com/apache/hadoop/pull/5396#issuecomment-1432445119

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5396/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 325 unchanged 
- 0 fixed = 326 total (was 325)  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 53s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5396/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 251m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5396/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 385m  0s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5396/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5396 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 5d0f90e11c93 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3400be46ce4cf29409a2b031a8860a80d61313df |
   | Default Java | Pr

[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

2023-02-15 Thread via GitHub


hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432442616

   Hi @jojochuang, @Hexiaoqiao, @zhangshuyan0, this PR seems to be another 
supplement to [HDFS-16146](https://issues.apache.org/jira/browse/HDFS-16146). 
Could you please take a look? Thanks, all.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5382:
URL: https://github.com/apache/hadoop/pull/5382#issuecomment-1432426079

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  2s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m  7s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   9m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 22s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5382/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m 37s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5382/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 166 unchanged 
- 0 fixed = 167 total (was 166)  |
   | +1 :green_heart: |  mvnsite  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 11s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 41s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 44s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 175m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5382/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5382 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux e7f373d07643 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d91d1615df9969333dfdf69381e8f7a3519d68bb |
   | Default Java | Private 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5403: YARN-11340. [Federation] Improve SQLFederationStateStore DataSource Config.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5403:
URL: https://github.com/apache/hadoop/pull/5403#issuecomment-1432417250

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   9m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 34s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged 
- 0 fixed = 165 total (was 164)  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 11s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 14s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 167m 29s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5403 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d6189a9b1ff8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e016aea9ed51dd5616965bafe8585f9a4718c701 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd6

[GitHub] [hadoop] hadoop-yetus commented on pull request #5363: YARN-11424. [Federation] Router Supports DeregisterSubCluster.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5363:
URL: https://github.com/apache/hadoop/pull/5363#issuecomment-1432412633

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 10s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 55s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 55s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javadoc  |   0m 34s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   4m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  11m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 57s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 28s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 27s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt)
 |  hadoop-yarn-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 30s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 29s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 25s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt)
 |  hadoop-yarn-server-nodemanager in the patch failed.  |
   | -1 :x: | 

[jira] [Commented] (HADOOP-18629) Hadoop DistCp supports specifying favoredNodes for data copying

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689452#comment-17689452
 ] 

ASF GitHub Bot commented on HADOOP-18629:
-

zhuyaogai commented on PR #5391:
URL: https://github.com/apache/hadoop/pull/5391#issuecomment-1432392281

   @steveloughran Hi, thanks for your suggestion :) I see what you mean, but I 
find that the existing source code also uses the HDFS public/stable API: 
   
https://github.com/apache/hadoop/blob/723535b788070f6b103be3bae621fefe3b753081/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java#L230
   I just followed that practice in the if branch. If you think my change 
touches too much code, could I instead modify only the else branch and add the 
favoredNodes option there? Please correct me if I'm wrong. Thank you :)




> Hadoop DistCp supports specifying favoredNodes for data copying
> ---
>
> Key: HADOOP-18629
> URL: https://issues.apache.org/jira/browse/HADOOP-18629
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools/distcp
>Affects Versions: 3.3.4
>Reporter: zhuyaogai
>Priority: Major
>  Labels: pull-request-available
>
> When importing large scale data to HBase, we always generate the hfiles with 
> other Hadoop cluster, use the Distcp tool to copy the data to the HBase 
> cluster, and bulkload data to HBase table. However, the data locality is 
> rather low which may result in high query latency. After taking a compaction 
> it will recover. Therefore, we can increase the data locality by specifying 
> the favoredNodes in Distcp.
> Could I submit a pull request to optimize it?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhuyaogai commented on pull request #5391: HADOOP-18629. Hadoop DistCp supports specifying favoredNodes for data copying

2023-02-15 Thread via GitHub


zhuyaogai commented on PR #5391:
URL: https://github.com/apache/hadoop/pull/5391#issuecomment-1432392281

   @steveloughran Hi, thanks for your suggestion :) I see what you mean, but I 
find that the existing source code also uses the HDFS public/stable API: 
   
https://github.com/apache/hadoop/blob/723535b788070f6b103be3bae621fefe3b753081/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java#L230
   I just followed that practice in the if branch. If you think my change 
touches too much code, could I instead modify only the else branch and add the 
favoredNodes option there? Please correct me if I'm wrong. Thank you :)
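   
   To make the proposal concrete, here is a minimal sketch (not this PR's 
actual change) built on the existing `DistributedFileSystem#create` overload 
that accepts favored nodes. The namenode URI and datanode addresses are 
hypothetical, and HDFS treats favored nodes as placement hints rather than 
guarantees:
   
   ```java
   import java.net.InetSocketAddress;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.permission.FsPermission;
   import org.apache.hadoop.hdfs.DistributedFileSystem;
   
   public class FavoredNodesWrite {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       Path target =
           new Path("hdfs://nn.example.com:8020/hbase/staging/part-00000");
       DistributedFileSystem dfs =
           (DistributedFileSystem) target.getFileSystem(conf);
   
       // Hint block placement towards the nodes that will later serve the
       // bulkloaded hfiles (addresses are made up for illustration).
       InetSocketAddress[] favoredNodes = {
           new InetSocketAddress("regionserver1.example.com", 9866),
           new InetSocketAddress("regionserver2.example.com", 9866)
       };
   
       try (FSDataOutputStream out = dfs.create(target,
           FsPermission.getFileDefault(), true /* overwrite */,
           conf.getInt("io.file.buffer.size", 4096),
           dfs.getDefaultReplication(target),
           dfs.getDefaultBlockSize(target),
           null /* progress */, favoredNodes)) {
         out.writeBytes("example payload");
       }
     }
   }
   ```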


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5407: HDFS-16925. Fix regex pattern for namenode audit log tests

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5407:
URL: https://github.com/apache/hadoop/pull/5407#issuecomment-1432379608

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 59 unchanged - 1 
fixed = 59 total (was 60)  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 218m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5407/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 347m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5407/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5407 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cbb1315f256b 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3bf75df6d997563e9aaea7af30c58dd9ae4729a8 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5407/1/testReport/ |
   | Max. process+thread count | 2440 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5407/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   Th

[GitHub] [hadoop] hadoop-yetus commented on pull request #5328: YARN-11222. [Federation] Add addToClusterNodeLabels, removeFromClusterNodeLabels REST APIs for Router.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5328:
URL: https://github.com/apache/hadoop/pull/5328#issuecomment-1432378139

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 123m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5328/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5328 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 569f988ecde4 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a7f2db2585788433b4b676483fbeb3f26664ad5d |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5328/11/testReport/ |
   | Max. process+thread count | 618 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5328/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apa

[GitHub] [hadoop] hadoop-yetus commented on pull request #5127: YARN-11239. Optimize FederationClientInterceptor audit log.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5127:
URL: https://github.com/apache/hadoop/pull/5127#issuecomment-1432364964

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5127/4/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04
 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 1 new + 5 
unchanged - 0 fixed = 6 total (was 5)  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5127/4/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 1 new + 5 
unchanged - 0 fixed = 6 total (was 5)  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 31s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5127/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dafdc119ac2c 4.15.0

[GitHub] [hadoop] tomscut commented on pull request #5390: HDFS-16761. Namenode UI for Datanodes page not loading if any data node is down

2023-02-15 Thread via GitHub


tomscut commented on PR #5390:
URL: https://github.com/apache/hadoop/pull/5390#issuecomment-1432358750

   Sorry for introducing this problem. Thank you all.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5405: YARN-11439. Fix Typo of hadoop-yarn-ui README.md.

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5405:
URL: https://github.com/apache/hadoop/pull/5405#issuecomment-1432357662

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  72m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5405/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5405 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint |
   | uname | Linux f334d567c952 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7bbb82bbae32dd8ed337cf3043a69b589c7dabc0 |
   | Max. process+thread count | 534 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5405/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on pull request #5390: HDFS-16761. Namenode UI for Datanodes page not loading if any data node is down

2023-02-15 Thread via GitHub


tasanuma commented on PR #5390:
URL: https://github.com/apache/hadoop/pull/5390#issuecomment-1432353003

   Thanks for merging it. The issue doesn't reproduce in branch-3.3. It seems 
to be caused by HDFS-16203, which is only in trunk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#issuecomment-1432344941

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  27m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  22m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  25m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  22m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  4s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 105 unchanged - 0 fixed = 107 total (was 
105)  |
   | +1 :green_heart: |  mvnsite  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  8s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 211m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 481m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5397 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux f16b3e615efa 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5263190be997752e37bc90e54358639fbd07

[GitHub] [hadoop] mccormickt12 commented on a diff in pull request #5322: HDFS-16896 clear ignoredNodes list when we clear deadnode list on ref…

2023-02-15 Thread via GitHub


mccormickt12 commented on code in PR #5322:
URL: https://github.com/apache/hadoop/pull/5322#discussion_r1107919545


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java:
##
@@ -1337,7 +1352,11 @@ private void hedgedFetchBlockByteRange(LocatedBlock 
block, long start,
 } catch (InterruptedException ie) {
   // Ignore and retry
 }
-if (refetch) {
+// if refetch is true then all nodes are in deadlist or ignorelist
+// we should loop through all futures and remove them so we do not

Review Comment:
   Fixed the comments; "deadlist" is actually deadNodes (I fixed that comment 
as well).
   When connections fail (in both the hedged and non-hedged code paths), nodes 
are added to the deadNodes collection so that other nodes are tried. Once 
`getBestNodeDNAddrPair` returns `null`, `chooseDataNode` calls 
`refetchLocations`, which clears deadNodes via `clearLocalDeadNodes()` and, 
with my change, now also clears the ignored list.
   
   Note that this adds an assumption to `refetchLocations`; the comment I 
added to it reads:
   ``` 
   /**
    * RefetchLocations should only be called when there are no active requests
    * to datanodes. In the hedged read case this means futures should be empty.
    */
   ```
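   
   As a simplified illustration of that invariant (hypothetical names, not the 
actual `DFSInputStream` code): all outstanding hedged futures are drained 
before the dead/ignored bookkeeping is reset, so a node is never retried while 
a request to it is still in flight.
   
   ```java
   import java.util.ArrayList;
   import java.util.HashSet;
   import java.util.List;
   import java.util.Set;
   import java.util.concurrent.Future;
   
   class HedgedReadBookkeeping {
     private final Set<String> deadNodes = new HashSet<>();
     private final Set<String> ignoredNodes = new HashSet<>();
     private final List<Future<byte[]>> futures = new ArrayList<>();
   
     /** Mirrors the documented precondition: reset the bookkeeping only
      *  when no request to a datanode is still outstanding. */
     void refetchLocations() {
       if (!futures.isEmpty()) {
         throw new IllegalStateException(
             "refetchLocations called with active hedged requests");
       }
       deadNodes.clear();
       ignoredNodes.clear(); // the behavior this PR adds
     }
   
     /** Callers drain in-flight hedged reads first, then reset. */
     void onAllNodesExhausted() {
       for (Future<byte[]> f : futures) {
         f.cancel(true); // stop any in-flight hedged read
       }
       futures.clear();
       refetchLocations();
     }
   }
   ```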



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#issuecomment-1432335476

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  19m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  22m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  22m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 47s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 105 unchanged - 0 fixed = 107 total (was 
105)  |
   | +1 :green_heart: |  mvnsite  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 31s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 213m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 473m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5397 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux cc608c0a57d1 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5263190be997752e37bc90e54358639fbd07

[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689430#comment-17689430
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-

bbeaudreault commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107912320


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   Sounds good, thanks for looking!
   
   @ayushtkn if you're ok with that I can just revert my last commit 
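
   For context, a minimal usage sketch of the overload under review (only the 
getClass(String, Configuration, boolean) signature comes from the diff; the 
class and alias names below are invented for illustration):

       // Hypothetical example; NewRecord / com.example.OldRecord are made up.
       Configuration conf = new Configuration();

       // Register an alias so the old, since-moved name resolves to the
       // relocated class.
       WritableName.addName(NewRecord.class, "com.example.OldRecord");

       // requireWritable = false resolves the alias even though NewRecord is
       // handled by an io.serializations serializer rather than Writable.
       Class<?> keyClass =
           WritableName.getClass("com.example.OldRecord", conf, false);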





> Enhance WritableName to be able to return aliases for classes that use 
> serializers
> --
>
> Key: HADOOP-18215
> URL: https://issues.apache.org/jira/browse/HADOOP-18215
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> WritableName allows users to shim in aliases for writables, in the case 
> where a SequenceFile was written with a Writable class that has since been 
> renamed or moved to another package. However, this requires that the aliased 
> class extend Writable. 
> Separately, it's possible to configure jobs with keys and values which don't 
> actually extend Writable. Instead they are meant to be 
> serialized/deserialized using the serialization classes defined in the 
> {{io.serializations}} config.
> Unfortunately, the current implementation does not support these key/value 
> classes. All we need to do to support this is remove the 
> {{.asSubclass(Writable.class)}} as is already the case for the default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bbeaudreault commented on a diff in pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread via GitHub


bbeaudreault commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107912320


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class<?> 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class<?> getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class<?> getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class<?> getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   Sounds good, thanks for looking!
   
   @ayushtkn if you're ok with that I can just revert my last commit 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #5381: HDFS-16914. Add some logs for updateBlockForPipeline RPC.

2023-02-15 Thread via GitHub


tomscut commented on PR #5381:
URL: https://github.com/apache/hadoop/pull/5381#issuecomment-1432330074

   Thanks @hfutatzhanghb for your contribution! And thanks @slfan1989 for your 
review!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut merged pull request #5381: HDFS-16914. Add some logs for updateBlockForPipeline RPC.

2023-02-15 Thread via GitHub


tomscut merged PR #5381:
URL: https://github.com/apache/hadoop/pull/5381


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#issuecomment-1432323560

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  23m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 35s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 6 new + 105 unchanged - 0 fixed = 111 total (was 
105)  |
   | +1 :green_heart: |  mvnsite  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 19s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 208m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 462m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5397/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5397 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 454c2cca2afd 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   |

[GitHub] [hadoop] xinglin commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


xinglin commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107896933


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1936,4 +1936,17 @@ public static boolean isParentEntry(final String path, 
final String parent) {
 return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
 || parent.equals(Path.SEPARATOR);
   }
+
+  /**
+   * Calculate the transfer rate in megabytes/second.
+   * @param bytes bytes
+   * @param durationMS duration in milliseconds
+   * @return the number of megabytes/second of the transfer rate
+  */
+  public static long transferRateMBs(long bytes, long durationMS) {
+if (durationMS == 0) {

Review Comment:
   can we specify our function as: "we expect both inputs to be positive. 
Otherwise, this function will return -1". 
   
   Then returning -1 is a clear signal we don't know how to handle such inputs. 
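
   A sketch of that stricter contract (an assumption about the shape, not the 
merged code; the formula mirrors the MB/s conversion the PR describes):

       // Both inputs must be positive; -1 signals unusable inputs.
       public static long transferRateMBs(long bytes, long durationMS) {
         if (bytes <= 0 || durationMS <= 0) {
           return -1;
         }
         // bytes -> MB (divide by 2^20), per-millisecond -> per-second (x1000).
         return (bytes * 1000) / (durationMS * 1024 * 1024);
       }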



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689418#comment-17689418
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1432303988

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   0m 34s | 
[/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |  28m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 17s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  |   0m 34s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 248m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 96cd292cf557 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 98c89bf744484b8e0eb14ff9f9250c686a02973a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64

[GitHub] [hadoop] hadoop-yetus commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-02-15 Thread via GitHub


hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1432303988

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   0m 34s | 
[/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |  28m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 17s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  |   0m 34s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 248m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 96cd292cf557 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 98c89bf744484b8e0eb14ff9f9250c686a02973a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/16/testReport/ |
   | Max. process+thread count | 2649 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-co

[GitHub] [hadoop] rdingankar commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


rdingankar commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107892591


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1936,4 +1936,17 @@ public static boolean isParentEntry(final String path, 
final String parent) {
 return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
 || parent.equals(Path.SEPARATOR);
   }
+
+  /**
+   * Calculate the transfer rate in megabytes/second.
+   * @param bytes bytes
+   * @param durationMS duration in milliseconds
+   * @return the number of megabytes/second of the transfer rate
+  */
+  public static long transferRateMBs(long bytes, long durationMS) {
+if (durationMS == 0) {

Review Comment:
   I don't feel we should handle other cases. This is a utils method, and any 
unexpected data should be left for the client to interpret. For some clients 
the negative values might even make sense.
   The idea behind handling durationMS = 0 is to guard against divide-by-zero 
in cases where no data transfer happened.
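
   A sketch of that minimal-guard variant (the quoted diff is truncated before 
the return, so the fallback value and formula here are assumptions):

       public static long transferRateMBs(long bytes, long durationMS) {
         // Only guard divide-by-zero; other unexpected inputs (e.g. negative
         // values) are passed through for the caller to interpret.
         if (durationMS == 0) {
           return 0;  // assumed fallback when no transfer time was measured
         }
         return (bytes * 1000) / (durationMS * 1024 * 1024);
       }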



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] rdingankar commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


rdingankar commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107889758


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java:
##
@@ -61,6 +61,8 @@ public class DataNodeMetrics {
   @Metric MutableCounterLong bytesRead;
   @Metric("Milliseconds spent reading")
   MutableCounterLong totalReadTime;
+  @Metric MutableRate bytesReadTransferRate;

Review Comment:
   updated



##
hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md:
##
@@ -370,6 +370,7 @@ Each metrics record contains tags such as SessionId and 
Hostname as additional i
 |: |: |
 | `BytesWritten` | Total number of bytes written to DataNode |
 | `BytesRead` | Total number of bytes read from DataNode |
+| `BytesReadTransferRate`*num*`s(50/75/90/95/99)thPercentileRate` | The 
50/75/90/95/99th percentile of the transfer rate of bytes read from the 
DataNode. The transfer rate is measured in megabytes per second. |

Review Comment:
   updated



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689411#comment-17689411
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-

omalley commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107882426


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class<?> 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class<?> getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class<?> getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class<?> getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   I'd propose that we unconditionally remove the check for Writable in 
getClass. My thought is:
   * Users can always enforce the constraint later if they want to.
   * All uses of the method with Hadoop's code base don't want to limit the 
output.
   * The check isn't consistent. (It is only applied for aliases, not natural 
class names.)
   * Removing the check won't break any callers since they couldn't get 
non-Writables before.
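
   A sketch of what that unconditional removal might look like, based on the 
current shape of WritableName#getClass (NAME_TO_CLASS is assumed to be the 
class's internal alias map; this illustrates the proposal, not the merged 
change):

       public static synchronized Class<?> getClass(String name,
           Configuration conf) throws IOException {
         Class<?> writableClass = NAME_TO_CLASS.get(name);
         if (writableClass != null) {
           // Was: writableClass.asSubclass(Writable.class) -- the only place
           // the Writable constraint was enforced.
           return writableClass;
         }
         try {
           return conf.getClassByName(name);
         } catch (ClassNotFoundException e) {
           throw new IOException("WritableName can't load class: " + name, e);
         }
       }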





> Enhance WritableName to be able to return aliases for classes that use 
> serializers
> --
>
> Key: HADOOP-18215
> URL: https://issues.apache.org/jira/browse/HADOOP-18215
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> WritableName allows users to shim in aliases for writables, in the case 
> where a SequenceFile was written with a Writable class that has since been 
> renamed or moved to another package. However, this requires that the aliased 
> class extend Writable. 
> Separately, it's possible to configure jobs with keys and values which don't 
> actually extend Writable. Instead they are meant to be 
> serialized/deserialized using the serialization classes defined in the 
> {{io.serializations}} config.
> Unfortunately, the current implementation does not support these key/value 
> classes. All we need to do to support this is remove the 
> {{.asSubclass(Writable.class)}} as is already the case for the default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] omalley commented on a diff in pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2023-02-15 Thread via GitHub


omalley commented on code in PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#discussion_r1107882426


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableName.java:
##
@@ -79,20 +79,42 @@ public static synchronized String getName(Class<?> 
writableClass) {
 return writableClass.getName();
   }
 
+  /**
+   * Return the class for a name. Requires the class for name to extend 
Writable.
+   * See {@link #getClass(String, Configuration, boolean)} if class doesn't 
extend Writable.
+   * Default is {@link Class#forName(String)}.
+   *
+   * @param name input name.
+   * @param conf input configuration.
+   * @return class for a name.
+   * @throws IOException raised on errors performing I/O.
+   */
+  public static synchronized Class<?> getClass(String name, Configuration conf)
+  throws IOException {
+return getClass(name, conf, true);
+  }
+
   /**
* Return the class for a name.
* Default is {@link Class#forName(String)}.
*
* @param name input name.
* @param conf input configuration.
+   * @param requireWritable if true, require the class for name to extend 
Writable
* @return class for a name.
* @throws IOException raised on errors performing I/O.
*/
-  public static synchronized Class<?> getClass(String name, Configuration conf
-) throws IOException {
+  public static synchronized Class<?> getClass(String name, Configuration conf,
+  boolean requireWritable) throws IOException {

Review Comment:
   I'd propose that we unconditionally remove the check for Writable in 
getClass. My thought is:
   * Users can always enforce the constraint later if they want to.
   * All uses of the method with Hadoop's code base don't want to limit the 
output.
   * The check isn't consistent. (It is only applied for aliases, not natural 
class names.)
   * Removing the check won't break any callers since they couldn't get 
non-Writables before.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


xinglin commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107877375


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java:
##
@@ -61,6 +61,8 @@ public class DataNodeMetrics {
   @Metric MutableCounterLong bytesRead;
   @Metric("Milliseconds spent reading")
   MutableCounterLong totalReadTime;
+  @Metric MutableRate bytesReadTransferRate;

Review Comment:
   nit: rename to readTransferRateMBs?
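
   For reference, a sketch of how the renamed metric might be wired up, 
following the MutableQuantiles pattern already used in DataNodeMetrics (the 
names and intervals below are assumptions, not the merged code):

       @Metric MutableRate readTransferRateMBs;
       private MutableQuantiles[] readTransferRateQuantiles;

       // In the DataNodeMetrics constructor, one estimator per interval:
       readTransferRateQuantiles = new MutableQuantiles[intervals.length];
       for (int i = 0; i < intervals.length; i++) {
         readTransferRateQuantiles[i] = registry.newQuantiles(
             "readTransferRate" + intervals[i] + "s",
             "Read transfer rate in MB/s", "ops", "rate", intervals[i]);
       }

       // Called from the read path with the rate from DFSUtil#transferRateMBs:
       public void addReadTransferRate(long rateMBs) {
         readTransferRateMBs.add(rateMBs);
         for (MutableQuantiles q : readTransferRateQuantiles) {
           q.add(rateMBs);
         }
       }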



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


xinglin commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107877662


##
hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md:
##
@@ -370,6 +370,7 @@ Each metrics record contains tags such as SessionId and 
Hostname as additional i
 |: |: |
 | `BytesWritten` | Total number of bytes written to DataNode |
 | `BytesRead` | Total number of bytes read from DataNode |
+| `BytesReadTransferRate`*num*`s(50/75/90/95/99)thPercentileRate` | The 
50/75/90/95/99th percentile of the transfer rate of bytes read from the 
DataNode. The transfer rate is measured in megabytes per second. |

Review Comment:
   nit: rename to ReadTransferRateMBs?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-15 Thread via GitHub


xinglin commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1107876562


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1936,4 +1936,17 @@ public static boolean isParentEntry(final String path, 
final String parent) {
 return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
 || parent.equals(Path.SEPARATOR);
   }
+
+  /**
+   * Calculate the transfer rate in megabytes/second.
+   * @param bytes bytes
+   * @param durationMS duration in milliseconds
+   * @return the number of megabytes/second of the transfer rate
+  */
+  public static long transferRateMBs(long bytes, long durationMS) {
+if (durationMS == 0) {

Review Comment:
   if it is <= 0, just return -1? Let's add a check for bytes as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5381: HDFS-16914. Add some logs for updateBlockForPipeline RPC.

2023-02-15 Thread via GitHub


slfan1989 commented on code in PR #5381:
URL: https://github.com/apache/hadoop/pull/5381#discussion_r1107866913


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -5943,6 +5943,8 @@ LocatedBlock bumpBlockGenerationStamp(ExtendedBlock block,
 }
 // Ensure we record the new generation stamp
 getEditLog().logSync();
+LOG.info("bumpBlockGenerationStamp({}, client={}) success",
+locatedBlock.getBlock(), clientName);

Review Comment:
   @hfutatzhanghb @tomscut Thanks for the information!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5405: YARN-11439. Fix Typo of hadoop-yarn-ui README.md.

2023-02-15 Thread via GitHub


slfan1989 commented on PR #5405:
URL: https://github.com/apache/hadoop/pull/5405#issuecomment-1432246379

   @tomscut Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5326: YARN-11425. [Federation] Router Supports SubClusterCleaner.

2023-02-15 Thread via GitHub


slfan1989 commented on PR #5326:
URL: https://github.com/apache/hadoop/pull/5326#issuecomment-1432175499

   @goiri Thank you very much for your help in reviewing the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5244: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2023-02-15 Thread via GitHub


slfan1989 commented on PR #5244:
URL: https://github.com/apache/hadoop/pull/5244#issuecomment-1432175334

   @goiri Thank you very much for your help in reviewing the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


