[jira] [Created] (HADOOP-18298) Hadoop AWS | Staging committer Multipartupload not implemented properly
Ayush Goyal created HADOOP-18298:
Summary: Hadoop AWS | Staging committer multipart upload not implemented properly
Key: HADOOP-18298
URL: https://issues.apache.org/jira/browse/HADOOP-18298
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3
Affects Versions: 3.3.1
Reporter: Ayush Goyal

In the Hadoop AWS staging committer (org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter), the committer uploads files from the local filesystem to S3 in commitTaskInternal, which calls uploadFileToPendingCommit of CommitOperation to upload each file via multipart upload. A multipart upload consists of three steps:
1) Initialise the multipart upload.
2) Split the file into parts and upload the parts.
3) Merge all the parts and complete the multipart upload.

In the implementation of uploadFileToPendingCommit, the first two steps are present but the third is missing. The parts are uploaded, but because they are never merged, no files exist in the destination directory at the end of the job.

S3 logs before implementing the 3rd step:
{code:java}
2022-05-30T13:49:31:000 [200 OK] s3.NewMultipartUpload localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/part-0-ce0a965f-622a-4950-bb4b-550470883134-c000-b552fb34-6156-4aa8-9085-679ad14fab6e.snappy.parquet?uploads 240b:c1d1:123:664f:c5d2:2:: 8.677ms ↑ 137 B ↓ 724 B
2022-05-30T13:49:31:000 [200 OK] s3.PutObjectPart localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/part-0-ce0a965f-622a-4950-bb4b-550470883134-c000-b552fb34-6156-4aa8-9085-679ad14fab6e.snappy.parquet?uploadId=f3beae8e-3001-48be-9bc4-306b71940e50=1 240b:c1d1:123:664f:c5d2:2:: 443.156ms ↑ 51 KiB ↓ 325 B
2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F_SUCCESS%2F=false 240b:c1d1:123:664f:c5d2:2:: 3.414ms ↑ 137 B ↓ 646 B
2022-05-30T13:49:32:000 [200 OK] s3.PutObject localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/_SUCCESS 240b:c1d1:123:664f:c5d2:2:: 52.734ms ↑ 8.7 KiB ↓ 380 B
2022-05-30T13:49:32:000 [200 OK] s3.DeleteMultipleObjects localhost:9000/minio-feature-testing/?delete 240b:c1d1:123:664f:c5d2:2:: 73.954ms ↑ 350 B ↓ 432 B
2022-05-30T13:49:32:000 [404 Not Found] s3.HeadObject localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/_temporary 240b:c1d1:123:664f:c5d2:2:: 2.658ms ↑ 137 B ↓ 291 B
2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F_temporary%2F=false 240b:c1d1:123:664f:c5d2:2:: 4.807ms ↑ 137 B ↓ 648 B
2022-05-30T13:49:32:000 [200 OK] s3.ListMultipartUploads localhost:9000/minio-feature-testing/?uploads=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F 240b:c0e0:102:553e:b4c2:2:: 1.081ms ↑ 137 B ↓ 776 B
2022-05-30T13:49:32:000 [404 Not Found] s3.HeadObject localhost:9000/minio-feature-testing/spark-job/processed/output-parquet-staging-7/.spark-staging-ce0a965f-622a-4950-bb4b-550470883134 240b:c1d1:123:664f:c5d2:2:: 5.68ms ↑ 137 B ↓ 291 B
2022-05-30T13:49:32:000 [200 OK] s3.ListObjectsV2 localhost:9000/minio-feature-testing/?list-type=2=%2F=2=spark-job%2Fprocessed%2Foutput-parquet-staging-7%2F.spark-staging-ce0a965f-622a-4950-bb4b-550470883134%2F=false 240b:c1d1:123:664f:c5d2:2:: 2.452ms ↑ 137 B ↓ 689 B
{code}
Here, after s3.PutObjectPart there is no CompleteMultipartUpload call for the 3rd step.
S3 logs after implementing the 3rd step:
{code:java}
2022-06-17T10:56:12:000 [200 OK] s3.NewMultipartUpload localhost:9000/minio-feature-testing/spark-job/pm-processed/output-parquet-staging-39/day%3D23/hour%3D16/quarter%3D0/part-4-d0b529ca-112f-43f2-a7dd-44de4db6aa7f-dffa7213-d492-48f9-9e6a-fb08bc81ceeb.c000.snappy.parquet?uploads 240b:c1d1:123:664f:c5d2:2:: 9.116ms ↑ 137 B ↓ 750 B
2022-06-17T10:56:12:000 [200 OK] s3.NewMultipartUpload localhost:9000/minio-feature-testing/spark-job/pm-processed/output-parquet-staging-39/day%3D23/hour%3D15/quarter%3D45/part-4-d0b529ca-112f-43f2-a7dd-44de4db6aa7f-dffa7213-d492-48f9-9e6a-fb08bc81ceeb.c000.snappy.parquet?uploads 240b:c1d1:123:664f:c5d2:2:: 9.416ms ↑ 137 B ↓ 751 B
2022-06-17T10:56:12:000 [200 OK] s3.NewMultipartUpload localhost:9000/minio-feature-testing/spark-job/pm-processed/output-parquet-staging-39/day%3D23/hour%3D16/quarter%3D45/part-4-d0b529ca-112f-43f2-a7dd-44de4db6aa7f-dffa7213-d492-48f9-9e6a-fb08bc81ceeb.c000.snappy.parquet?uploads 240b:c1d1:123:664f:c5d2:2:: 8.506ms ↑ 137 B ↓ 751 B
{code}
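The three multipart-upload steps described in the report can be sketched with a minimal in-memory model. This is illustrative only: it is not Hadoop or AWS SDK code, and all names here (FakeS3, upload_file, etc.) are hypothetical. It shows why skipping step 3 leaves nothing at the destination — parts stay staged and invisible until the upload is completed.

```python
class FakeS3:
    """Toy in-memory model of S3 multipart-upload semantics (not the real API)."""

    def __init__(self):
        self.objects = {}    # finalized objects visible to readers
        self._uploads = {}   # upload_id -> {part_number: bytes}
        self._next_id = 0

    def initiate_multipart_upload(self, key):
        # Step 1: initialise the upload; no object is visible yet.
        self._next_id += 1
        upload_id = f"upload-{self._next_id}"
        self._uploads[upload_id] = {}
        return upload_id

    def upload_part(self, upload_id, part_number, data):
        # Step 2: upload one part; parts are staged, not visible.
        self._uploads[upload_id][part_number] = data

    def complete_multipart_upload(self, key, upload_id):
        # Step 3: merge the parts in order and publish the final object.
        parts = self._uploads.pop(upload_id)
        self.objects[key] = b"".join(parts[n] for n in sorted(parts))


def upload_file(s3, key, data, part_size, complete=True):
    """Split data into parts and upload them; optionally skip step 3."""
    upload_id = s3.initiate_multipart_upload(key)
    for i, off in enumerate(range(0, len(data), part_size), start=1):
        s3.upload_part(upload_id, i, data[off:off + part_size])
    if complete:
        s3.complete_multipart_upload(key, upload_id)


# Without step 3 (the reported bug): parts uploaded, no object at destination.
s3 = FakeS3()
upload_file(s3, "out/part-0000.parquet", b"x" * 10, part_size=4, complete=False)
missing = "out/part-0000.parquet" not in s3.objects

# With step 3: the merged object appears at the destination.
s3 = FakeS3()
upload_file(s3, "out/part-0000.parquet", b"x" * 10, part_size=4, complete=True)
present = s3.objects.get("out/part-0000.parquet") == b"x" * 10
```

In the model, `missing` is true when step 3 is skipped and `present` is true when it runs, mirroring the before/after logs above.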
[jira] [Closed] (HADOOP-12446) Undeprecate createNonRecursive()
[ https://issues.apache.org/jira/browse/HADOOP-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Kyle Purtell closed HADOOP-12446.

> Undeprecate createNonRecursive()
>
> Key: HADOOP-12446
> URL: https://issues.apache.org/jira/browse/HADOOP-12446
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 2.4.0
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Major
> Labels: hbase
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt, hdfs-6264-v3.txt
>
> FileSystem#createNonRecursive() is deprecated. However, there is no DistributedFileSystem#create() implementation that throws an exception if the parent directory doesn't exist, which limits clients' migration away from the deprecated method. For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive(). A variant of the create() method should be added that throws an exception if the parent directory doesn't exist.

-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
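The semantics the issue asks for — a create call that fails when the parent directory is absent instead of silently creating it — can be illustrated outside Hadoop. This is a minimal sketch with a hypothetical helper name; it is not the DistributedFileSystem API, just the contract it describes.

```python
import os
import tempfile


def create_non_recursive(path):
    """Create an empty file, failing if the parent directory does not exist.

    This mirrors the FileSystem#createNonRecursive contract: unlike a
    recursive create, it never creates missing parent directories.
    """
    parent = os.path.dirname(path)
    if not os.path.isdir(parent):
        raise FileNotFoundError(f"parent directory does not exist: {parent}")
    # Mode 'x' also fails if the file itself already exists.
    with open(path, "x"):
        pass


root = tempfile.mkdtemp()

# Parent exists: the file is created.
create_non_recursive(os.path.join(root, "f1"))

# Parent missing: the call is rejected instead of creating "no_dir/".
try:
    create_non_recursive(os.path.join(root, "no_dir", "f2"))
    failed = False
except FileNotFoundError:
    failed = True
```

This fail-fast behavior is what HBase's IO fencing relies on: a fenced-out process whose working directory was removed cannot quietly recreate it.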
[GitHub] [hadoop] slfan1989 commented on pull request #4435: YARN-11178. Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread
slfan1989 commented on PR #4435: URL: https://github.com/apache/hadoop/pull/4435#issuecomment-1158503105

@LennonChin LGTM. Thanks for your contribution; it looks like it's working well. Let's wait for suggestions from the other reviewers.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] LennonChin commented on pull request #4435: YARN-11178. Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread
LennonChin commented on PR #4435: URL: https://github.com/apache/hadoop/pull/4435#issuecomment-1158471384

> > @slfan1989 @dineshchitlangia @brumi1024 @9uapaw @ashutoshcipher Could you please help review the code?
>
> I understand your change, and it seems feasible; can you share CPU data from before and after the PR change?

Hello, I added some screenshots and CPU profiles. All screenshot images and CPU profile report HTML files are attached to the issue: [YARN-11178](https://issues.apache.org/jira/browse/YARN-11178)

Before the optimization, the **ACTIVE** ResourceManager process continuously occupied 100% of a CPU core:
![YARN-11178.CPU idling busy 100% before optimized](https://issues.apache.org/jira/secure/attachment/13045187/YARN-11178.CPU%20idling%20busy%20100%25%20before%20optimized.png)

After the optimization, CPU usage decreased to normal:
![YARN-11178.CPU normal after optimized](https://issues.apache.org/jira/secure/attachment/13045189/YARN-11178.CPU%20normal%20after%20optimized.png)

CPU profile before the optimization:
![YARN-11178.CPU profile for idling busy 100% before optimized](https://issues.apache.org/jira/secure/attachment/13045191/YARN-11178.CPU%20profile%20for%20idling%20busy%20100%25%20before%20optimized.png)

CPU profile after the optimization:
![YARN-11178.CPU profile for normal after optimized](https://issues.apache.org/jira/secure/attachment/13045192/YARN-11178.CPU%20profile%20for%20normal%20after%20optimized.png)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
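The general fix for CPU busy idling in a tracker thread is to replace a spin loop with a blocking wait on a queue, so the thread sleeps until work arrives. The sketch below shows that pattern in miniature; it is a generic illustration, not the actual DelegationTokenRenewerPoolTracker patch, and the names are hypothetical.

```python
import queue
import threading
import time


def tracker_loop(q, results, stop):
    """Consume events with a blocking take instead of a spin loop.

    q.get(timeout=...) parks the thread inside the queue's condition
    variable, so it burns no CPU while idle; a bare `while True: poll()`
    loop would instead keep one core at 100%.
    """
    while not stop.is_set():
        try:
            event = q.get(timeout=0.1)  # blocks; no busy idling
        except queue.Empty:
            continue                     # periodic wake-up to check stop flag
        results.append(event)


q, results, stop = queue.Queue(), [], threading.Event()
t = threading.Thread(target=tracker_loop, args=(q, results, stop))
t.start()

q.put("renewed-token-1")
q.put("renewed-token-2")
time.sleep(0.5)   # give the consumer time to drain the queue

stop.set()
t.join()
```

In Java the equivalent is `BlockingQueue#take()` (or `poll(timeout, unit)`) in place of repeatedly checking a collection in a tight loop, which matches the kind of CPU drop shown in the profiles above.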
[GitHub] [hadoop] hadoop-yetus commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation
hadoop-yetus commented on PR #4127: URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1158450996 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 40m 48s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 57s | | trunk passed | | +1 :green_heart: | compile | 23m 4s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 33s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 7m 50s | | trunk passed | | -1 :x: | javadoc | 1m 45s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/15/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 6m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 12m 26s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 10s | | the patch passed | | +1 :green_heart: | compile | 22m 13s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 13s | | the patch passed | | +1 :green_heart: | compile | 20m 29s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 29s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/15/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 20s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/15/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 339 unchanged - 1 fixed = 342 total (was 340) | | +1 :green_heart: | mvnsite | 7m 40s | | the patch passed | | -1 :x: | javadoc | 1m 45s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/15/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 6m 44s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 12m 56s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 33s | | hadoop-common in the patch passed. 
| | +1 :green_heart: | unit | 3m 16s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 413m 33s | | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 2m 11s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. | | +1 :green_heart: | asflicense | 2m 0s | | The patch does not generate ASF License warnings. | | | | 724m 7s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782233=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782233 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 17/Jun/22 03:08 Start Date: 17/Jun/22 03:08 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158442052 :confetti_ball: **+1 overall**
[GitHub] [hadoop] hadoop-yetus commented on pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
hadoop-yetus commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158442052 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 10s | | trunk passed | | +1 :green_heart: | compile | 24m 46s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 36s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 1s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 26s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 24m 10s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 10s | | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 2878 unchanged - 8 fixed = 2878 total (was 2886) | | +1 :green_heart: | compile | 21m 41s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 41s | | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 2672 unchanged - 8 fixed = 2672 total (was 2680) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 59s | | the patch passed | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 22s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 0s | | hadoop-kms in the patch passed. | | +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. 
| | | | 202m 5s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4433 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 6cde3ec3640a 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7fb5e413d9e04954dce6859e7dac5bdd66972386 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/6/testReport/ | | Max. process+thread count | 593 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/6/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0
[GitHub] [hadoop] slfan1989 commented on pull request #4422: HADOOP-18284. Remove Unnecessary semicolon ';'
slfan1989 commented on PR #4422: URL: https://github.com/apache/hadoop/pull/4422#issuecomment-1158438583 @ayushtkn please help review this PR, thank you very much! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work logged] (HADOOP-18284) Remove Unnecessary semicolon ';'
[ https://issues.apache.org/jira/browse/HADOOP-18284?focusedWorklogId=782230=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782230 ] ASF GitHub Bot logged work on HADOOP-18284: --- Author: ASF GitHub Bot Created on: 17/Jun/22 03:02 Start Date: 17/Jun/22 03:02 Worklog Time Spent: 10m Work Description: slfan1989 commented on PR #4422: URL: https://github.com/apache/hadoop/pull/4422#issuecomment-1158438583 @ayushtkn please help review this PR, thank you very much!

Issue Time Tracking --- Worklog Id: (was: 782230) Time Spent: 1.5h (was: 1h 20m)

> Remove Unnecessary semicolon ';'
> -
> Key: HADOOP-18284
> URL: https://issues.apache.org/jira/browse/HADOOP-18284
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.4.0
> Reporter: fanshilun
> Assignee: fanshilun
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> While reading the code, I found a very tiny optimization point: part of the code ends statements with two semicolons. Because the change is simple, I fixed all occurrences in one JIRA.
> {code:java}
> private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
> {code}

-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
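Cleanups like the double-semicolon fix above are easy to locate mechanically. Below is a small illustrative scanner (hypothetical helper names, not part of the Hadoop build) that flags lines ending in two semicolons:

```python
import re

# Matches a line whose trailing statement ends with two semicolons,
# optionally separated by whitespace, e.g. "...();;" or "...(); ;".
DOUBLE_SEMI = re.compile(r";\s*;\s*$")


def find_double_semicolons(source):
    """Return 1-based line numbers of lines that end with two semicolons."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if DOUBLE_SEMI.search(line)
    ]


java = """\
class C {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
  private int n = 0;
}"""

hits = find_double_semicolons(java)
```

Running this over the example from the JIRA description flags only line 2, the `ReentrantReadWriteLock` declaration; in practice a checkstyle rule would serve the same purpose during CI.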
[GitHub] [hadoop] lfxy closed pull request #4391: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks
lfxy closed pull request #4391: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks URL: https://github.com/apache/hadoop/pull/4391 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] lfxy commented on pull request #4398: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks
lfxy commented on PR #4398: URL: https://github.com/apache/hadoop/pull/4398#issuecomment-1158429085 @hi-adachi OK, I see, thank you very much! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work logged] (HADOOP-18106) Handle memory fragmentation in S3 Vectored IO implementation.
[ https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=782221=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782221 ] ASF GitHub Bot logged work on HADOOP-18106: --- Author: ASF GitHub Bot Created on: 17/Jun/22 02:03 Start Date: 17/Jun/22 02:03 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4445: URL: https://github.com/apache/hadoop/pull/4445#issuecomment-1158395112 :broken_heart: **-1 overall**
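One common mitigation for the memory fragmentation that HADOOP-18106 targets is buffer pooling: reuse a small set of fixed-size buffers for vectored reads instead of allocating a fresh odd-sized buffer per range. The sketch below is a toy pool under that assumption; it is not the actual S3A implementation, and all names are hypothetical.

```python
class BufferPool:
    """Toy fixed-size buffer pool.

    Recycling buffers of a single size avoids the heap/native-memory
    fragmentation caused by allocating many differently sized byte
    buffers, one per read range, in a vectored-IO workload.
    """

    def __init__(self, buffer_size, max_buffers):
        self.buffer_size = buffer_size
        self._free = []       # buffers returned and ready for reuse
        self._created = 0     # total buffers ever allocated
        self._max = max_buffers

    def acquire(self):
        if self._free:
            return self._free.pop()       # reuse: no new allocation
        if self._created >= self._max:
            raise RuntimeError("pool exhausted; caller should wait or retry")
        self._created += 1
        return bytearray(self.buffer_size)

    def release(self, buf):
        self._free.append(buf)


pool = BufferPool(buffer_size=1 << 20, max_buffers=4)

a = pool.acquire()    # first acquire allocates one 1 MiB buffer
pool.release(a)       # return it to the pool after the read completes
b = pool.acquire()    # second acquire reuses it instead of allocating

reused = a is b
created = pool._created
```

Because `b` is the same object as `a`, the pool performed exactly one allocation for two reads; a production pool would add thread safety and direct-buffer handling on top of this idea.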
[GitHub] [hadoop] hadoop-yetus commented on pull request #4445: HADOOP-18106: Handle memory fragmentation in S3A Vectored IO.
hadoop-yetus commented on PR #4445: URL: https://github.com/apache/hadoop/pull/4445#issuecomment-1158395112 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ feature-vectored-io Compile Tests _ | | +0 :ok: | mvndep | 14m 32s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 19s | | feature-vectored-io passed | | +1 :green_heart: | compile | 25m 9s | | feature-vectored-io passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 34s | | feature-vectored-io passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 28s | | feature-vectored-io passed | | +1 :green_heart: | mvnsite | 4m 10s | | feature-vectored-io passed | | +1 :green_heart: | javadoc | 3m 20s | | feature-vectored-io passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 58s | | feature-vectored-io passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 43s | | feature-vectored-io passed | | +1 :green_heart: | shadedclient | 24m 23s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 24m 50s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 53s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 27s | | the patch passed | | +1 :green_heart: | compile | 24m 17s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 17s | | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 2895 unchanged - 3 fixed = 2895 total (was 2898) | | +1 :green_heart: | compile | 21m 40s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 40s | | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 2691 unchanged - 3 fixed = 2691 total (was 2694) | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4445/2/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 29s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4445/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 71 unchanged - 0 fixed = 73 total (was 71) | | +1 :green_heart: | mvnsite | 4m 8s | | the patch passed | | +1 :green_heart: | javadoc | 3m 11s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 59s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 42s | | hadoop-common in the patch passed. 
| | +1 :green_heart: | unit | 3m 19s | | hadoop-aws in the patch passed. | | +1 :green_heart: | unit | 0m 54s | | hadoop-benchmark in the patch passed. | | +1 :green_heart: | asflicense | 1m 18s | | The patch does not generate ASF License warnings. | | | | 259m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4445/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4445 | | Optional
[GitHub] [hadoop] slfan1989 commented on pull request #4440: MAPREDUCE-7389. Fix typo in description of property
slfan1989 commented on PR #4440: URL: https://github.com/apache/hadoop/pull/4440#issuecomment-1158394168 @ayushtkn Please help review this pr, thank you very much! ayushtkn is a very good mentor. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18277) Remove org.apache.hadoop.test#Whitebox Deprecated Annotation
[ https://issues.apache.org/jira/browse/HADOOP-18277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555349#comment-17555349 ] fanshilun commented on HADOOP-18277: [~weichiu] Thank you very much for helping to review the code, I hope to continue to refactor the junit test in hadoop-common, the new junit test will completely remove the WhiteBox. cc:[~aajisaka] > Remove org.apache.hadoop.test#Whitebox Deprecated Annotation > > > Key: HADOOP-18277 > URL: https://issues.apache.org/jira/browse/HADOOP-18277 > Project: Hadoop Common > Issue Type: Improvement >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > > org.apache.hadoop.test#Whitebox is marked as deprecated, I personally feel > that this is unnecessary, which leads to a large number of junit test code > appearing deprecated, and a large number of Warnings appear in the > compilation, I checked the code and think the deprecated mark is unreasonable. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on pull request #4354: YARN-6539. Create SecureLogin inside Router.
slfan1989 commented on PR #4354: URL: https://github.com/apache/hadoop/pull/4354#issuecomment-1158381593 LGTM +1
[GitHub] [hadoop] slfan1989 commented on pull request #4437: YARN-11179. Show more detailed info when container token is expired
slfan1989 commented on PR #4437: URL: https://github.com/apache/hadoop/pull/4437#issuecomment-1158380856 @zuston The plan seems reasonable. Could we also put the user information contained in the token into the messageBuilder?
[GitHub] [hadoop] slfan1989 commented on pull request #4435: YARN-11178. Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread
slfan1989 commented on PR #4435: URL: https://github.com/apache/hadoop/pull/4435#issuecomment-1158379831 > @slfan1989 @dineshchitlangia @brumi1024 @9uapaw @ashutoshcipher Could you please to help review the code? I understand your change and it seems feasible. Could you share CPU usage data from before and after this change?
[jira] [Work logged] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1
[ https://issues.apache.org/jira/browse/HADOOP-18297?focusedWorklogId=782208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782208 ] ASF GitHub Bot logged work on HADOOP-18297: --- Author: ASF GitHub Bot Created on: 17/Jun/22 01:33 Start Date: 17/Jun/22 01:33 Worklog Time Spent: 10m Work Description: ashutoshcipher opened a new pull request, #4449: URL: https://github.com/apache/hadoop/pull/4449 ### Description of PR Upgrade dependency-check-maven to 7.1.1 The OWASP dependency-check-maven Plugin version has corrected various false positives in 7.1.1. We can upgrade to it. https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1 * JIRA: HADOOP-18297 - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? Issue Time Tracking --- Worklog Id: (was: 782208) Remaining Estimate: 0h Time Spent: 10m > Upgrade dependency-check-maven to 7.1.1 > --- > > Key: HADOOP-18297 > URL: https://issues.apache.org/jira/browse/HADOOP-18297 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.3.3 >Reporter: Ashutosh Gupta >Assignee: Ashutosh Gupta >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > The OWASP dependency-check-maven Plugin version has corrected various false > positives in 7.1.1. We can upgrade to it. > https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1
[jira] [Updated] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1
[ https://issues.apache.org/jira/browse/HADOOP-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18297: Labels: pull-request-available (was: ) > Upgrade dependency-check-maven to 7.1.1 > --- > > Key: HADOOP-18297 > URL: https://issues.apache.org/jira/browse/HADOOP-18297 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.3.3 >Reporter: Ashutosh Gupta >Assignee: Ashutosh Gupta >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The OWASP dependency-check-maven Plugin version has corrected various false > positives in 7.1.1. We can upgrade to it. > https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #4449: HADOOP-18297. Upgrade dependency-check-maven to 7.1.1
ashutoshcipher opened a new pull request, #4449: URL: https://github.com/apache/hadoop/pull/4449 ### Description of PR Upgrade dependency-check-maven to 7.1.1 The OWASP dependency-check-maven Plugin version has corrected various false positives in 7.1.1. We can upgrade to it. https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1 * JIRA: HADOOP-18297 - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
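For reference, a version bump like the one described above is typically a one-line change to the plugin declaration. A minimal sketch — the exact coordinates are standard for the OWASP plugin, but Hadoop's root pom may manage the version through a property instead:

```xml
<!-- OWASP dependency-check plugin, bumped to pick up the 7.1.1 false-positive fixes -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>7.1.1</version>
</plugin>
```

The scan can then be run on demand with `mvn dependency-check:check`.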
[GitHub] [hadoop] slfan1989 commented on pull request #4435: YARN-11178. Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread
slfan1989 commented on PR #4435: URL: https://github.com/apache/hadoop/pull/4435#issuecomment-1158378152 > @slfan1989 @dineshchitlangia @brumi1024 @9uapaw @ashutoshcipher Could you please to help review the code? I have understood your changes, but could the token also include some user information?
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782206 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 17/Jun/22 01:31 Start Date: 17/Jun/22 01:31 Worklog Time Spent: 10m Work Description: slfan1989 commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158377242 @jojochuang Thank you very much for your help reviewing the code! Issue Time Tracking --- Worklog Id: (was: 782206) Time Spent: 2.5h (was: 2h 20m) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2.5h > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[jira] [Updated] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1
[ https://issues.apache.org/jira/browse/HADOOP-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Gupta updated HADOOP-18297: Description: The OWASP dependency-check-maven Plugin version has corrected various false positives in 7.1.1. We can upgrade to it. https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1 was:The OWASP dependency-check-maven Plugin version has corrected various false positives in 7.1.1. We can upgrade to it. > Upgrade dependency-check-maven to 7.1.1 > --- > > Key: HADOOP-18297 > URL: https://issues.apache.org/jira/browse/HADOOP-18297 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.3.3 >Reporter: Ashutosh Gupta >Assignee: Ashutosh Gupta >Priority: Minor > > The OWASP dependency-check-maven Plugin version has corrected various false > positives in 7.1.1. We can upgrade to it. > https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1
[GitHub] [hadoop] slfan1989 commented on pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
slfan1989 commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158377242 @jojochuang Thank you very much for your help reviewing the code!
[jira] [Created] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1
Ashutosh Gupta created HADOOP-18297: --- Summary: Upgrade dependency-check-maven to 7.1.1 Key: HADOOP-18297 URL: https://issues.apache.org/jira/browse/HADOOP-18297 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.3.3 Reporter: Ashutosh Gupta Assignee: Ashutosh Gupta The OWASP dependency-check-maven Plugin version has corrected various false positives in 7.1.1. We can upgrade to it.
[jira] [Commented] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555341#comment-17555341 ] fanshilun commented on HADOOP-18289: [~weichiu] Thank you very much for your help reviewing the code! > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[jira] [Resolved] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HADOOP-18289. -- Fix Version/s: 3.4.0 Resolution: Fixed Done. Thanks [~slfan1989] > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782201 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 17/Jun/22 01:13 Start Date: 17/Jun/22 01:13 Worklog Time Spent: 10m Work Description: jojochuang merged PR #4433: URL: https://github.com/apache/hadoop/pull/4433 Issue Time Tracking --- Worklog Id: (was: 782201) Time Spent: 2h 10m (was: 2h) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782202&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782202 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 17/Jun/22 01:13 Start Date: 17/Jun/22 01:13 Worklog Time Spent: 10m Work Description: jojochuang commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158366547 +1 Issue Time Tracking --- Worklog Id: (was: 782202) Time Spent: 2h 20m (was: 2h 10m) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[GitHub] [hadoop] virajjasani opened a new pull request, #4448: HDFS-16634. Dynamically adjust slow peer report size on JMX metrics
virajjasani opened a new pull request, #4448: URL: https://github.com/apache/hadoop/pull/4448 ### Description of PR On a busy cluster, it sometimes takes a while for a deleted node's "slow node report" to be removed from the slow peer JSON report in the Namenode JMX metrics. In the meantime, users should be able to browse more entries in the report by adjusting, i.e. reconfiguring, "dfs.datanode.max.nodes.to.report", so that the list size can be changed without having to bounce the active Namenode just for this purpose. ### How was this patch tested? Dev cluster and using UT. ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
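The reconfiguration flow described above would look roughly like this from the command line. This is a sketch only: the host and port are placeholders, and it assumes the property is registered as reconfigurable on the Namenode, as the PR proposes:

```
# Edit dfs.datanode.max.nodes.to.report in hdfs-site.xml, then apply it live:
hdfs dfsadmin -reconfig namenode nn-host:8020 start
# Check whether the reconfiguration has completed:
hdfs dfsadmin -reconfig namenode nn-host:8020 status
```

This is the standard dfsadmin reconfiguration mechanism, which rereads the config file and applies only the properties the daemon has marked reconfigurable.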
[GitHub] [hadoop] jojochuang commented on pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
jojochuang commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158366547 +1
[GitHub] [hadoop] jojochuang merged pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
jojochuang merged PR #4433: URL: https://github.com/apache/hadoop/pull/4433
[GitHub] [hadoop] hadoop-yetus commented on pull request #4311: HDFS-13522: IPC changes to support observer reads through routers.
hadoop-yetus commented on PR #4311: URL: https://github.com/apache/hadoop/pull/4311#issuecomment-1158361679 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 44m 39s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 44s | | trunk passed | | +1 :green_heart: | compile | 22m 49s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 27s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 7m 44s | | trunk passed | | -1 :x: | javadoc | 1m 45s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/8/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 6m 50s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 12m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 27s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 10s | | the patch passed | | +1 :green_heart: | compile | 22m 3s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 3s | | the patch passed | | +1 :green_heart: | compile | 20m 26s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 26s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/8/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 13s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/8/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 198 unchanged - 1 fixed = 201 total (was 199) | | +1 :green_heart: | mvnsite | 7m 42s | | the patch passed | | -1 :x: | javadoc | 1m 45s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/8/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 6m 49s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 12m 54s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 52s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 42s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 14s | | hadoop-hdfs-client in the patch passed. 
| | -1 :x: | unit | 256m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 23m 12s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 2m 0s | | The patch does not generate ASF License warnings. | | | | 590m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.cli.TestHDFSCLI | |
[GitHub] [hadoop] hi-adachi commented on pull request #4398: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks
hi-adachi commented on PR #4398: URL: https://github.com/apache/hadoop/pull/4398#issuecomment-1158359483 @lfxy The PR was merged, this is just FYI, the contribution guide says as follows. Thank you for your contribution. > https://cwiki.apache.org/confluence/display/hadoop/how+to+contribute > Once a "+1" comment is received from the automated patch testing system and a code reviewer has set the Reviewed flag on the issue's Jira, a committer should then evaluate it within a few days and either: commit it; or reject it with an explanation. > > Please be patient. Committers are busy people too. If no one responds to your patch after a few days, please make friendly reminders. Please incorporate other's suggestions into your patch if you think they're reasonable. Finally, remember that even a patch that is not committed is useful to the community.
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782189&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782189 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 16/Jun/22 23:43 Start Date: 16/Jun/22 23:43 Worklog Time Spent: 10m Work Description: slfan1989 commented on code in PR #4433: URL: https://github.com/apache/hadoop/pull/4433#discussion_r899644835 ## hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java: ## @@ -40,6 +41,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; +import org.mockito.internal.util.reflection.FieldReader; Review Comment: Thanks for your help reviewing the code, I will fix it. Issue Time Tracking --- Worklog Id: (was: 782189) Time Spent: 2h (was: 1h 50m) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
slfan1989 commented on code in PR #4433: URL: https://github.com/apache/hadoop/pull/4433#discussion_r899644835 ## hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java: ## @@ -40,6 +41,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; +import org.mockito.internal.util.reflection.FieldReader; Review Comment: Thanks for your help reviewing the code, I will fix it.
[GitHub] [hadoop] slfan1989 commented on pull request #4426: YARN-10883. [Router] Router Audit Log Add Client IP Address.
slfan1989 commented on PR #4426: URL: https://github.com/apache/hadoop/pull/4426#issuecomment-1158271661 @goiri Please help review the code again. Thank you very much!
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782185&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782185 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 16/Jun/22 23:12 Start Date: 16/Jun/22 23:12 Worklog Time Spent: 10m Work Description: jojochuang commented on code in PR #4433: URL: https://github.com/apache/hadoop/pull/4433#discussion_r899633528 ## hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java: ## @@ -40,6 +41,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; +import org.mockito.internal.util.reflection.FieldReader; Review Comment: ```suggestion ``` unused Issue Time Tracking --- Worklog Id: (was: 782185) Time Spent: 1h 50m (was: 1h 40m) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[GitHub] [hadoop] jojochuang commented on a diff in pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
jojochuang commented on code in PR #4433: URL: https://github.com/apache/hadoop/pull/4433#discussion_r899633528 ## hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java: ## @@ -40,6 +41,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; +import org.mockito.internal.util.reflection.FieldReader; Review Comment: ```suggestion ``` unused
[jira] [Work logged] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
[ https://issues.apache.org/jira/browse/HADOOP-18294?focusedWorklogId=782184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782184 ] ASF GitHub Bot logged work on HADOOP-18294: --- Author: ASF GitHub Bot Created on: 16/Jun/22 23:09 Start Date: 16/Jun/22 23:09 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4446: URL: https://github.com/apache/hadoop/pull/4446#issuecomment-1158246105 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 36s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 37s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 23s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 45s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 27s | | hadoop-maven-plugins in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 98m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4446 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux ba6b479dd289 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 609bb2196a43c061209fa4a7bdf3b8e01eb8ba67 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/testReport/ | | Max. process+thread count | 524 (vs. ulimit of 5500) | | modules | C: hadoop-maven-plugins U: hadoop-maven-plugins | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4446: HADOOP-18294.Ensure build folder exists before writing checksum file.…
hadoop-yetus commented on PR #4446: URL: https://github.com/apache/hadoop/pull/4446#issuecomment-1158246105 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 36s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 37s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 23s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 45s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 27s | | hadoop-maven-plugins in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 98m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4446 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux ba6b479dd289 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 609bb2196a43c061209fa4a7bdf3b8e01eb8ba67 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/testReport/ | | Max. process+thread count | 524 (vs. ulimit of 5500) | | modules | C: hadoop-maven-plugins U: hadoop-maven-plugins | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] jojochuang commented on a diff in pull request #4252: HDFS-16566 Erasure Coding: Recovery may causes excess replicas when busy DN exsits
jojochuang commented on code in PR #4252: URL: https://github.com/apache/hadoop/pull/4252#discussion_r899627843 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java: ## @@ -1040,11 +1040,16 @@ public static BlockECReconstructionInfo convertBlockECReconstructionInfo( byte[] liveBlkIndices = blockEcReconstructionInfoProto.getLiveBlockIndices() .toByteArray(); +byte[] excludeReconstructedIndices = Review Comment: Please check and make sure ExcludeReconstructedIndices is filled. ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto: ## @@ -108,6 +108,7 @@ message BlockECReconstructionInfoProto { required StorageTypesProto targetStorageTypes = 5; required bytes liveBlockIndices = 6; required ErasureCodingPolicyProto ecPolicy = 7; + required bytes excludeReconstructedIndices = 8; Review Comment: ```suggestion optional bytes excludeReconstructedIndices = 8; ``` Make it optional to ensure backward compatibility. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
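[Editor's note on the review comment above: in proto2, a new `required` field breaks wire compatibility, because a message serialized by an older client omits the field and then fails to parse on a newer peer; an `optional` field simply reads as unset. A minimal sketch of the suggested shape of the message (proto2 syntax; fields 1-4 elided here, only the fields shown in the diff are from the source):]

```proto
syntax = "proto2";

message BlockECReconstructionInfoProto {
  // ... fields 1-4 as defined in erasurecoding.proto ...
  required StorageTypesProto targetStorageTypes = 5;
  required bytes liveBlockIndices = 6;
  required ErasureCodingPolicyProto ecPolicy = 7;
  // Declared optional rather than required: an older NameNode that
  // omits field 8 still produces a parseable message, so mixed-version
  // (rolling-upgrade) clusters keep working. Readers should check
  // hasExcludeReconstructedIndices() before using the value.
  optional bytes excludeReconstructedIndices = 8;
}
```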
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782182 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 16/Jun/22 22:54 Start Date: 16/Jun/22 22:54 Worklog Time Spent: 10m Work Description: slfan1989 commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158239124 @jojochuang Please help me to review the code again, thank you very much! Issue Time Tracking --- Worklog Id: (was: 782182) Time Spent: 1h 40m (was: 1.5h) > Remove WhiteBox in hadoop-kms module. > - > > Key: HADOOP-18289 > URL: https://issues.apache.org/jira/browse/HADOOP-18289 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > WhiteBox is deprecated, try to remove this method in hadoop-kms.
[GitHub] [hadoop] slfan1989 commented on pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
slfan1989 commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158239124 @jojochuang Please help me to review the code again, thank you very much!
[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs
[ https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=782178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782178 ] ASF GitHub Bot logged work on HADOOP-18258: --- Author: ASF GitHub Bot Created on: 16/Jun/22 22:15 Start Date: 16/Jun/22 22:15 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4383: URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1158185507 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 33s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 2s | | trunk passed | | +1 :green_heart: | compile | 23m 0s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 35s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 46s | | trunk passed | | +1 :green_heart: | javadoc | 3m 1s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 43s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 8s | | branch has no errors when building and testing our client artifacts. 
| | -0 :warning: | patch | 22m 41s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 40s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | compile | 22m 7s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 7s | | the patch passed | | -1 :x: | compile | 19m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -1 :x: | javac | 19m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 15s | | the patch passed | | -1 :x: | mvnsite | 1m 22s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shellcheck | 0m 8s | | No new issues. 
| | +1 :green_heart: | javadoc | 2m 54s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 43s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 1m 27s | [/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shadedclient | 23m 29s | | patch has no errors when building and testing our client artifacts. | _ Other
[GitHub] [hadoop] hadoop-yetus commented on pull request #4383: HADOOP-18258. Merging of S3A Audit Logs
hadoop-yetus commented on PR #4383: URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1158185507 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 33s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 2s | | trunk passed | | +1 :green_heart: | compile | 23m 0s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 35s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 46s | | trunk passed | | +1 :green_heart: | javadoc | 3m 1s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 43s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 8s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 22m 41s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 40s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | compile | 22m 7s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 7s | | the patch passed | | -1 :x: | compile | 19m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -1 :x: | javac | 19m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 15s | | the patch passed | | -1 :x: | mvnsite | 1m 22s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shellcheck | 0m 8s | | No new issues. 
| | +1 :green_heart: | javadoc | 2m 54s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 43s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 1m 27s | [/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shadedclient | 23m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 10s | | hadoop-common in the patch passed. | | -1 :x: | unit | 1m 28s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | asflicense | 1m 36s | | The patch does not generate ASF License
[jira] [Commented] (HADOOP-18295) Add S3A configuration property for `no_proxy` hosts
[ https://issues.apache.org/jira/browse/HADOOP-18295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555317#comment-17555317 ] Mukund Thakur commented on HADOOP-18295: I don't understand the use case for this. Could you please explain more. > Add S3A configuration property for `no_proxy` hosts > --- > > Key: HADOOP-18295 > URL: https://issues.apache.org/jira/browse/HADOOP-18295 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Sam Kramer >Priority: Minor > > Seeing as there are configuration options for proxy host, port, username, and > password, there should also be an option to be able to provide to the S3 > client a list of hosts to not use the proxy for (i.e. `no_proxy`) > > I'm happy to contribute the code, but figured I'd file a ticket first to see > if this the hadoop community would be open to this idea or have any desire > for this feature.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4447: HDFS-16591. Setup JaasConfiguration in ZKCuratorManager when SASL is …
hadoop-yetus commented on PR #4447: URL: https://github.com/apache/hadoop/pull/4447#issuecomment-1158170997 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 41s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 49s | | trunk passed | | +1 :green_heart: | compile | 22m 56s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 37s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 50s | | trunk passed | | +1 :green_heart: | javadoc | 4m 21s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 3m 48s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 49s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 7s | | the patch passed | | +1 :green_heart: | compile | 21m 58s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 21m 58s | | the patch passed | | +1 :green_heart: | compile | 20m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 49s | [/results-checkstyle-hadoop-common-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4447/1/artifact/out/results-checkstyle-hadoop-common-project.txt) | hadoop-common-project: The patch generated 5 new + 203 unchanged - 2 fixed = 208 total (was 205) | | +1 :green_heart: | mvnsite | 4m 31s | | the patch passed | | +1 :green_heart: | javadoc | 4m 8s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 3m 43s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 1m 39s | [/new-spotbugs-hadoop-common-project_hadoop-auth.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4447/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-auth.html) | hadoop-common-project/hadoop-auth generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 21m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 0s | | hadoop-auth in the patch passed. | | +1 :green_heart: | unit | 18m 28s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 57s | | hadoop-registry in the patch passed. | | +1 :green_heart: | asflicense | 1m 34s | | The patch does not generate ASF License warnings. 
| | | | 247m 46s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-auth | | | Write to static field org.apache.hadoop.security.authentication.util.JaasConfiguration.entry from instance method new org.apache.hadoop.security.authentication.util.JaasConfiguration(String, String, String) At JaasConfiguration.java:from instance method new org.apache.hadoop.security.authentication.util.JaasConfiguration(String, String, String) At JaasConfiguration.java:[line 57] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4447/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4447 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle
[GitHub] [hadoop] hadoop-yetus commented on pull request #4410: HDFS-16064. Determine when to invalidate corrupt replicas based on number of usable replicas
hadoop-yetus commented on PR #4410: URL: https://github.com/apache/hadoop/pull/4410#issuecomment-1158158938 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 25s | | trunk passed | | +1 :green_heart: | compile | 1m 39s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 21s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | -1 :x: | javadoc | 1m 20s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 30s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 1m 30s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 2s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100) | | +1 :green_heart: | mvnsite | 1m 28s | | the patch passed | | -1 :x: | javadoc | 1m 0s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 35s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 381m 44s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 498m 26s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4410 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux efcbee072994 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9dd26601ec0cb25a1de4f772e6bff084141bbfb5 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
[GitHub] [hadoop] ashutoshcipher commented on pull request #4377: YARN-11115. Add configuration to globally disable AM preemption for capacity scheduler
ashutoshcipher commented on PR #4377: URL: https://github.com/apache/hadoop/pull/4377#issuecomment-1158140767 @aajisaka - Can you please review this PR? Also, can I add a test case as part of a separate JIRA? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] ashutoshcipher commented on pull request #4377: YARN-11115. Add configuration to disable AM preemption for capacity scheduler
ashutoshcipher commented on PR #4377: URL: https://github.com/apache/hadoop/pull/4377#issuecomment-1158139739 I tried running the failed test `TestClientRMTokens` multiple times locally - it works fine, so the test failure can be ignored.
```
[INFO] T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.18 s - in org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 37.110 s
[INFO] Finished at: 2022-06-16T22:17:27+01:00
```
[jira] [Commented] (HADOOP-18296) Memory fragmentation in ChecksumFileSystem Vectored IO implementation.
[ https://issues.apache.org/jira/browse/HADOOP-18296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17555298#comment-17555298 ] Mukund Thakur commented on HADOOP-18296: Fixing this is challenging as checksum matching code is old and complex. Also neither orc nor parquet uses direct buffers as of now, keeping this open just to track a known issue. > Memory fragmentation in ChecksumFileSystem Vectored IO implementation. > -- > > Key: HADOOP-18296 > URL: https://issues.apache.org/jira/browse/HADOOP-18296 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Affects Versions: 3.4.0 >Reporter: Mukund Thakur >Priority: Minor > Labels: fs > > As we have implemented merging of ranges in the ChecksumFSInputChecker > implementation of vectored IO api, it can lead to memory fragmentation. Let > me explain by example. > > Suppose client requests for 3 ranges. > 0-500, 700-1000 and 1200-1500. > Now because of merging, all the above ranges will get merged into one and we > will allocate a big byte buffer of 0-1500 size but return sliced byte buffers > for the desired ranges. > Now once the client is done reading all the ranges, it will only be able to > free the memory for requested ranges and memory of the gaps will never be > released for eg here (500-700 and 1000-1200). > > Note this only happens for direct byte buffers. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18296) Memory fragmentation in ChecksumFileSystem Vectored IO implementation.
[ https://issues.apache.org/jira/browse/HADOOP-18296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukund Thakur updated HADOOP-18296: --- Labels: fs (was: ) > Memory fragmentation in ChecksumFileSystem Vectored IO implementation. > -- > > Key: HADOOP-18296 > URL: https://issues.apache.org/jira/browse/HADOOP-18296 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Affects Versions: 3.4.0 >Reporter: Mukund Thakur >Priority: Minor > Labels: fs > > As we have implemented merging of ranges in the ChecksumFSInputChecker > implementation of vectored IO api, it can lead to memory fragmentation. Let > me explain by example. > > Suppose client requests for 3 ranges. > 0-500, 700-1000 and 1200-1500. > Now because of merging, all the above ranges will get merged into one and we > will allocate a big byte buffer of 0-1500 size but return sliced byte buffers > for the desired ranges. > Now once the client is done reading all the ranges, it will only be able to > free the memory for requested ranges and memory of the gaps will never be > released for eg here (500-700 and 1000-1200). > > Note this only happens for direct byte buffers.
[jira] [Created] (HADOOP-18296) Memory fragmentation in ChecksumFileSystem Vectored IO implementation.
Mukund Thakur created HADOOP-18296: -- Summary: Memory fragmentation in ChecksumFileSystem Vectored IO implementation. Key: HADOOP-18296 URL: https://issues.apache.org/jira/browse/HADOOP-18296 Project: Hadoop Common Issue Type: Sub-task Components: common Affects Versions: 3.4.0 Reporter: Mukund Thakur As we have implemented merging of ranges in the ChecksumFSInputChecker implementation of vectored IO api, it can lead to memory fragmentation. Let me explain by example. Suppose client requests for 3 ranges. 0-500, 700-1000 and 1200-1500. Now because of merging, all the above ranges will get merged into one and we will allocate a big byte buffer of 0-1500 size but return sliced byte buffers for the desired ranges. Now once the client is done reading all the ranges, it will only be able to free the memory for requested ranges and memory of the gaps will never be released for eg here (500-700 and 1000-1200). Note this only happens for direct byte buffers.
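The fragmentation described in HADOOP-18296 above can be sketched in a few lines. This is a simplified model (not the actual ChecksumFSInputChecker code): one large buffer is allocated for the merged range, and callers only ever see slices of it, so the gap bytes stay pinned until every slice is released.

```java
import java.nio.ByteBuffer;

// Simplified sketch of merged-range reads: requested ranges 0-500, 700-1000
// and 1200-1500 are merged into a single 0-1500 allocation, and slices over
// that allocation are handed back for each requested range.
public class MergedRangeDemo {
    public static void main(String[] args) {
        ByteBuffer merged = ByteBuffer.allocateDirect(1500);

        ByteBuffer r1 = sliceRange(merged, 0, 500);
        ByteBuffer r2 = sliceRange(merged, 700, 300);
        ByteBuffer r3 = sliceRange(merged, 1200, 300);

        // 1100 bytes were requested, but 1500 are pinned: the 400 bytes of
        // gaps (500-700 and 1000-1200) cannot be freed independently because
        // each slice shares the parent buffer's backing memory.
        int requested = r1.remaining() + r2.remaining() + r3.remaining();
        System.out.println("requested=" + requested + " allocated=" + merged.capacity());
    }

    // Return a slice of parent covering [offset, offset + length).
    public static ByteBuffer sliceRange(ByteBuffer parent, int offset, int length) {
        ByteBuffer dup = parent.duplicate();
        dup.position(offset);
        dup.limit(offset + length);
        return dup.slice();
    }
}
```

For heap buffers the GC can at least move memory around; for direct buffers the whole native allocation stays reserved until the last slice becomes unreachable, which is why the issue notes this only matters for direct byte buffers.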
[jira] [Work logged] (HADOOP-17833) Improve Magic Committer Performance
[ https://issues.apache.org/jira/browse/HADOOP-17833?focusedWorklogId=782154=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782154 ] ASF GitHub Bot logged work on HADOOP-17833: --- Author: ASF GitHub Bot Created on: 16/Jun/22 20:21 Start Date: 16/Jun/22 20:21 Worklog Time Spent: 10m Work Description: mukund-thakur commented on PR #3289: URL: https://github.com/apache/hadoop/pull/3289#issuecomment-1158095301 LGTM +1 Thanks @steveloughran, there are 3 pending checkstyle issue though. Issue Time Tracking --- Worklog Id: (was: 782154) Time Spent: 12.5h (was: 12h 20m) > Improve Magic Committer Performance > --- > > Key: HADOOP-17833 > URL: https://issues.apache.org/jira/browse/HADOOP-17833 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Time Spent: 12.5h > Remaining Estimate: 0h > > Magic committer tasks can be slow because every file created with > overwrite=false triggers a HEAD (verify there's no file) and a LIST (that > there's no dir). And because of delayed manifestations, it may not behave as > expected. > ParquetOutputFormat is one example of a library which does this. > we could fix parquet to use overwrite=true, but (a) there may be surprises in > other uses (b) it'd still leave the list and (c) do nothing for other formats > call > Proposed: createFile() under a magic path to skip all probes for file/dir at > end of path > Only a single task attempt Will be writing to that directory and it should > know what it is doing. If there is conflicting file names and parts across > tasks that won't even get picked up at this point. Oh and none of the > committers ever check for this: you'll get the last file manifested (s3a) or > renamed (file) > If we skip the checks we will save 2 HTTP requests/file. 
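The HADOOP-17833 proposal above can be illustrated with a small decision sketch. All names here are hypothetical (this is not the actual S3A code): the idea is simply that `createFile()` under a magic path skips the HEAD (does a file exist?) and LIST (does a directory exist?) probes that `overwrite=false` would normally trigger, saving two HTTP requests per file.

```java
// Hypothetical sketch of the probe-skipping decision described in the issue.
public class MagicCreateDemo {
    static final String MAGIC = "/__magic/";

    // How many existence probes a create would issue under this scheme.
    public static int probesNeeded(String path, boolean overwrite) {
        if (path.contains(MAGIC)) {
            // A single task attempt owns this directory; skip all probes.
            return 0;
        }
        // overwrite=true still lists the parent; overwrite=false also HEADs.
        return overwrite ? 1 : 2;
    }

    public static void main(String[] args) {
        System.out.println(probesNeeded("s3a://bucket/out/__magic/job1/part-0", false));
        System.out.println(probesNeeded("s3a://bucket/out/part-0", false));
    }
}
```

As the issue notes, skipping the probes is safe here because none of the committers ever detected conflicting filenames across tasks anyway; the last file manifested wins either way.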
[GitHub] [hadoop] mukund-thakur commented on pull request #3289: HADOOP-17833. Improve Magic Committer performance
mukund-thakur commented on PR #3289: URL: https://github.com/apache/hadoop/pull/3289#issuecomment-1158095301 LGTM +1. Thanks @steveloughran; there are 3 pending checkstyle issues, though.
[jira] [Updated] (HADOOP-18227) Add IOstats and auditing for vectored IO api.
[ https://issues.apache.org/jira/browse/HADOOP-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukund Thakur updated HADOOP-18227: --- Summary: Add IOstats and auditing for vectored IO api. (was: Add IOstats for vectored IO api.) > Add IOstats and auditing for vectored IO api. > - > > Key: HADOOP-18227 > URL: https://issues.apache.org/jira/browse/HADOOP-18227 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major >
[jira] [Resolved] (HADOOP-18105) Implement a variant of ElasticByteBufferPool which uses weak references for garbage collection.
[ https://issues.apache.org/jira/browse/HADOOP-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukund Thakur resolved HADOOP-18105. Resolution: Fixed Merged in feature branch [https://github.com/apache/hadoop/commits/feature-vectored-io] > Implement a variant of ElasticByteBufferPool which uses weak references for > garbage collection. > --- > > Key: HADOOP-18105 > URL: https://issues.apache.org/jira/browse/HADOOP-18105 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, fs >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Time Spent: 4h > Remaining Estimate: 0h > > Currently in hadoop codebase, we have two classes which implements byte > buffers pooling. > One is ElasticByteBufferPool which doesn't use weak references and thus could > cause memory leaks in production environment. > Other is DirectBufferPool which uses weak references but doesn't support > caller's preference for either on-heap or off-heap buffers. > > The idea is to create an improved version of ElasticByteBufferPool by > subclassing it ( as it is marked as public and stable and used widely in hdfs > ) with essential functionalities required for effective buffer pooling. This > is important for the parent Vectored IO work.
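The combination HADOOP-18105 describes — weak references so the GC can reclaim idle buffers, plus a caller preference for on-heap versus direct buffers — can be sketched roughly as below. This is an illustrative toy, not the actual Hadoop implementation; among other simplifications it keeps at most one pooled buffer per capacity and only pools heap buffers.

```java
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

// Toy buffer pool: buffers are held through WeakReferences, so an idle
// buffer can be garbage-collected instead of the pool pinning it forever.
public class WeakPoolDemo {
    // Minimum capacity -> weakly held heap buffer.
    private final TreeMap<Integer, WeakReference<ByteBuffer>> heapPool = new TreeMap<>();

    public synchronized ByteBuffer getBuffer(boolean direct, int size) {
        if (!direct) {
            // Smallest pooled buffer whose capacity is >= size.
            Map.Entry<Integer, WeakReference<ByteBuffer>> e = heapPool.ceilingEntry(size);
            if (e != null) {
                heapPool.remove(e.getKey());
                ByteBuffer cached = e.getValue().get();  // null if already GC'd
                if (cached != null) {
                    cached.clear();
                    return cached;  // capacity may exceed the requested size
                }
            }
        }
        return direct ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
    }

    public synchronized void putBuffer(ByteBuffer buf) {
        if (!buf.isDirect()) {
            heapPool.put(buf.capacity(), new WeakReference<>(buf));
        }
    }

    public static void main(String[] args) {
        WeakPoolDemo pool = new WeakPoolDemo();
        ByteBuffer b = pool.getBuffer(false, 1024);
        pool.putBuffer(b);
        // b is still strongly reachable here, so the weak reference cannot
        // have been cleared and the same buffer is handed back.
        System.out.println(pool.getBuffer(false, 512) == b);
    }
}
```

The key property: a `WeakReference` to an object with no remaining strong references lets the GC reclaim it, which is exactly the leak protection the plain `ElasticByteBufferPool` lacks.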
[jira] [Resolved] (HADOOP-18107) Vectored IO support for large S3 files.
[ https://issues.apache.org/jira/browse/HADOOP-18107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukund Thakur resolved HADOOP-18107. Resolution: Fixed Merged in feature branch [https://github.com/apache/hadoop/commits/feature-vectored-io] > Vectored IO support for large S3 files. > > > Key: HADOOP-18107 > URL: https://issues.apache.org/jira/browse/HADOOP-18107 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Time Spent: 4h > Remaining Estimate: 0h > > This effort would mostly be adding more tests for large files under scale > tests and see if any new issue surfaces.
[jira] [Work started] (HADOOP-18103) High performance vectored read API in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-18103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-18103 started by Mukund Thakur. -- > High performance vectored read API in Hadoop > > > Key: HADOOP-18103 > URL: https://issues.apache.org/jira/browse/HADOOP-18103 > Project: Hadoop Common > Issue Type: New Feature > Components: common, fs, fs/adl, fs/s3 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: perfomance > Attachments: Vectored Read API for Hadoop FS_INCOMPLETE.pdf > > > Add support for multiple ranged vectored read api in PositionedReadable. The > default iterates through the ranges to read each synchronously, but the > intent is that FSDataInputStream subclasses can make more efficient readers > especially object stores implementation.
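The default behaviour described in HADOOP-18103 — iterate over the requested ranges and read each one synchronously — can be sketched as below. Signatures are deliberately simplified (the real Hadoop API works with `FileRange` objects and completes each range asynchronously via futures); the point is only the fallback loop that object-store streams can override with merged or parallel reads.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class VectoredReadDemo {
    // Naive fallback: read each (offset, length) range one at a time.
    public static ByteBuffer[] readVectored(FileChannel ch, List<long[]> ranges)
            throws IOException {
        ByteBuffer[] out = new ByteBuffer[ranges.size()];
        for (int i = 0; i < ranges.size(); i++) {
            long offset = ranges.get(i)[0];
            int length = (int) ranges.get(i)[1];
            ByteBuffer buf = ByteBuffer.allocate(length);
            while (buf.hasRemaining()) {
                // Positional read: does not disturb the channel's position.
                if (ch.read(buf, offset + buf.position()) < 0) {
                    throw new IOException("EOF before range was fully read");
                }
            }
            buf.flip();
            out[i] = buf;
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("vectored", ".bin");
        Files.write(p, "0123456789abcdefghij".getBytes());
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            ByteBuffer[] bufs = readVectored(ch,
                    List.of(new long[]{0, 4}, new long[]{10, 3}));
            System.out.println(new String(bufs[0].array(), 0, bufs[0].limit()));
            System.out.println(new String(bufs[1].array(), 0, bufs[1].limit()));
        } finally {
            Files.delete(p);
        }
    }
}
```

An object-store implementation can do much better than this loop, e.g. by merging nearby ranges into one GET (which is precisely what triggers the fragmentation concern in HADOOP-18296) or issuing the reads in parallel.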
[jira] [Work logged] (HADOOP-18289) Remove WhiteBox in hadoop-kms module.
[ https://issues.apache.org/jira/browse/HADOOP-18289?focusedWorklogId=782149=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782149 ] ASF GitHub Bot logged work on HADOOP-18289: --- Author: ASF GitHub Bot Created on: 16/Jun/22 19:27 Start Date: 16/Jun/22 19:27 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158054642 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 28s | | trunk passed | | +1 :green_heart: | compile | 24m 56s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 1s | | trunk passed | | +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 25s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 24m 13s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 13s | | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 2878 unchanged - 8 fixed = 2878 total (was 2886) | | +1 :green_heart: | compile | 21m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 39s | | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 2672 unchanged - 8 fixed = 2672 total (was 2680) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 51s | [/results-checkstyle-hadoop-common-project_hadoop-kms.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/5/artifact/out/results-checkstyle-hadoop-common-project_hadoop-kms.txt) | hadoop-common-project/hadoop-kms: The patch generated 1 new + 50 unchanged - 0 fixed = 51 total (was 50) | | +1 :green_heart: | mvnsite | 0m 59s | | the patch passed | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 57s | | hadoop-kms in the patch passed. | | +1 :green_heart: | asflicense | 1m 16s | | The patch does not generate ASF License warnings. 
| | | | 203m 5s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4433 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux e7cceed5c9a1 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 07d4eb00159a00ad4607d5ace91fb3cdde523019 | | Default
[GitHub] [hadoop] hadoop-yetus commented on pull request #4433: HADOOP-18289. Remove WhiteBox in hadoop-kms module.
hadoop-yetus commented on PR #4433: URL: https://github.com/apache/hadoop/pull/4433#issuecomment-1158054642 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 28s | | trunk passed | | +1 :green_heart: | compile | 24m 56s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 1s | | trunk passed | | +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 25s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 24m 13s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 13s | | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 2878 unchanged - 8 fixed = 2878 total (was 2886) | | +1 :green_heart: | compile | 21m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 39s | | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 2672 unchanged - 8 fixed = 2672 total (was 2680) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 51s | [/results-checkstyle-hadoop-common-project_hadoop-kms.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/5/artifact/out/results-checkstyle-hadoop-common-project_hadoop-kms.txt) | hadoop-common-project/hadoop-kms: The patch generated 1 new + 50 unchanged - 0 fixed = 51 total (was 50) | | +1 :green_heart: | mvnsite | 0m 59s | | the patch passed | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 57s | | hadoop-kms in the patch passed. | | +1 :green_heart: | asflicense | 1m 16s | | The patch does not generate ASF License warnings. 
| | | | 203m 5s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4433 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux e7cceed5c9a1 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 07d4eb00159a00ad4607d5ace91fb3cdde523019 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4433/5/testReport/ | | Max. process+thread count | 600 (vs. ulimit of 5500) | |
[GitHub] [hadoop] jojochuang merged pull request #4398: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks
jojochuang merged PR #4398: URL: https://github.com/apache/hadoop/pull/4398
[GitHub] [hadoop] hadoop-yetus commented on pull request #4439: YARN-11182. Refactor TestAggregatedLogDeletionService: 2nd phase
hadoop-yetus commented on PR #4439: URL: https://github.com/apache/hadoop/pull/4439#issuecomment-1157965886 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 9 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 33s | | trunk passed | | +1 :green_heart: | compile | 1m 1s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 55s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 58s | | trunk passed | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 51s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 30s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 25m 52s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 42s | | the patch passed | | +1 :green_heart: | compile | 0m 46s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | -1 :x: | javac | 0m 46s | [/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4439/5/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 19 new + 63 unchanged - 0 fixed = 82 total (was 63) | | +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 0m 41s | [/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4439/5/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 19 new + 56 unchanged - 0 fixed = 75 total (was 56) | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4439/5/artifact/out/blanks-eol.txt) | The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 30s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4439/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 71 new + 2 unchanged - 4 fixed = 73 total (was 6) | | +1 :green_heart: | mvnsite | 0m 45s | | the patch passed | | +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 42s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 49s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 43s | | hadoop-yarn-common in the patch passed. | | -1 :x: | asflicense | 0m 42s |
[GitHub] [hadoop] hchaverri opened a new pull request, #4447: HDFS-16591. Setup JaasConfiguration in ZKCuratorManager when SASL is …
hchaverri opened a new pull request, #4447: URL: https://github.com/apache/hadoop/pull/4447 …enabled

### Description of PR
Setting up the JaasConfiguration when creating a new ZKCuratorManager, to allow ZK connections via SASL. Also removing duplicated classes of JaasConfiguration.

### How was this patch tested?
Ran the following unit tests: TestJaasConfiguration, TestZKCuratorManager, TestZKSignerSecretProvider, TestZKDelegationTokenSecretManager, TestMicroZookeeperService. Created a TestDelegationTokenSecretManager to replace the default ZKDelegationTokenSecretManagerImpl and deployed it to an RBF router. Without these changes, the router initialization will fail with the error described on HDFS-16591. Initialization succeeds with this patch.

### For code changes:
- [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] hadoop-yetus commented on pull request #4377: YARN-11115. Add configuration to disable AM preemption for capacity scheduler
hadoop-yetus commented on PR #4377: URL: https://github.com/apache/hadoop/pull/4377#issuecomment-1157923214 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 29s | | trunk passed | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 1m 8s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 11s | | trunk passed | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 27m 27s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 1m 5s | | the patch passed | | +1 :green_heart: | compile | 0m 55s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 55s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 46s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 58s | | the patch passed | | +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 41s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 103m 14s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4377/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. 
| | | | 212m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4377/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1d75cb74617f 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5e06fc7c6cac8c373a274de264196b293147e08f | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4377/2/testReport/ | | Max. process+thread count | 911 (vs. ulimit of 5500) | | modules | C:
[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs
[ https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=782102=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782102 ] ASF GitHub Bot logged work on HADOOP-18258: --- Author: ASF GitHub Bot Created on: 16/Jun/22 16:19 Start Date: 16/Jun/22 16:19 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4383: URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1157867827 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 38s | | trunk passed | | +1 :green_heart: | compile | 24m 57s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 38s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 14s | | trunk passed | | +1 :green_heart: | javadoc | 2m 24s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 20s | | branch has no errors when building and testing our client artifacts. 
| | -0 :warning: | patch | 24m 46s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 27s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | compile | 24m 14s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 14s | | the patch passed | | -1 :x: | compile | 20m 35s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -1 :x: | javac | 20m 35s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 20s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | mvnsite | 1m 2s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. 
| | +1 :green_heart: | shellcheck | 0m 8s | | No new issues. | | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 0m 59s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4383: HADOOP-18258. Merging of S3A Audit Logs
hadoop-yetus commented on PR #4383: URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1157867827 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 38s | | trunk passed | | +1 :green_heart: | compile | 24m 57s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 38s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 14s | | trunk passed | | +1 :green_heart: | javadoc | 2m 24s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 20s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 24m 46s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 27s | [/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | compile | 24m 14s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 14s | | the patch passed | | -1 :x: | compile | 20m 35s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -1 :x: | javac | 20m 35s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 20s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | mvnsite | 1m 2s | [/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shellcheck | 0m 8s | | No new issues. 
| | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 0m 59s | [/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/4/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +1 :green_heart: | shadedclient | 25m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 18s | | hadoop-common in the patch passed. | | -1 :x: | unit | 1m 1s |
[GitHub] [hadoop] slfan1989 commented on pull request #4375: HDFS-16605. Improve Code With Lambda in hadoop-hdfs-rbf moudle.
slfan1989 commented on PR #4375: URL: https://github.com/apache/hadoop/pull/4375#issuecomment-1157862822 @goiri Can you help me merge this pr to trunk branch? Thanks for helping me review the code!
[GitHub] [hadoop] slfan1989 commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.
slfan1989 commented on PR #4421: URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1157860955 @goiri Can you help me merge this pr to trunk branch? thank you very much!
[GitHub] [hadoop] szilard-nemeth commented on pull request #4439: YARN-11182. Refactor TestAggregatedLogDeletionService: 2nd phase
szilard-nemeth commented on PR #4439: URL: https://github.com/apache/hadoop/pull/4439#issuecomment-1157840884 Will wait for Jenkins to check how many Checkstyle and other issues remained after refactoring the builder vs. the actual testcase logic.
[GitHub] [hadoop] simbadzina commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
simbadzina commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157792454 @ZanderXu in https://github.com/apache/hadoop/pull/4127 I have configurations on both the router and client side. Consistency is also guaranteed because the router always does an msync. The reason for the client-side configuration is for latency-sensitive clients that just want one call between the router and the namenodes.
[jira] [Work logged] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
[ https://issues.apache.org/jira/browse/HADOOP-18294?focusedWorklogId=782074=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782074 ] ASF GitHub Bot logged work on HADOOP-18294: --- Author: ASF GitHub Bot Created on: 16/Jun/22 15:18 Start Date: 16/Jun/22 15:18 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4446: URL: https://github.com/apache/hadoop/pull/4446#issuecomment-1157784882 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 11s | | trunk passed | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 37s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 0m 48s | [/new-spotbugs-hadoop-maven-plugins.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/1/artifact/out/new-spotbugs-hadoop-maven-plugins.html) | hadoop-maven-plugins generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 23m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 28s | | hadoop-maven-plugins in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 100m 16s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-maven-plugins | | | Exceptional return value of java.io.File.mkdirs() ignored in org.apache.hadoop.maven.plugin.protoc.ProtocRunner$ChecksumComparator.writeChecksums() At ProtocRunner.java:ignored in org.apache.hadoop.maven.plugin.protoc.ProtocRunner$ChecksumComparator.writeChecksums() At ProtocRunner.java:[line 175] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4446 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 08b594513c8d 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4446: HADOOP-18294.Ensure build folder exists before writing checksum file.…
hadoop-yetus commented on PR #4446: URL: https://github.com/apache/hadoop/pull/4446#issuecomment-1157784882 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 11s | | trunk passed | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 37s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 0m 48s | [/new-spotbugs-hadoop-maven-plugins.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/1/artifact/out/new-spotbugs-hadoop-maven-plugins.html) | hadoop-maven-plugins generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 23m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 28s | | hadoop-maven-plugins in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 100m 16s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-maven-plugins | | | Exceptional return value of java.io.File.mkdirs() ignored in org.apache.hadoop.maven.plugin.protoc.ProtocRunner$ChecksumComparator.writeChecksums() At ProtocRunner.java:ignored in org.apache.hadoop.maven.plugin.protoc.ProtocRunner$ChecksumComparator.writeChecksums() At ProtocRunner.java:[line 175] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4446/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4446 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 08b594513c8d 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 10cefe10221c3fbbd22ccb4a3dc5f4297d26c37f | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results |
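The SpotBugs finding above — the ignored return value of `java.io.File.mkdirs()` in `ProtocRunner$ChecksumComparator.writeChecksums()` — points at a common pitfall: `mkdirs()` returns false both when creation fails and when the directory already exists, so the result has to be combined with an existence check before writing. A minimal sketch of that pattern, with illustrative names rather than the actual ProtocRunner code:

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

/**
 * Sketch of the pattern SpotBugs asks for: ensure the build directory
 * exists (and fail loudly if it cannot be created) before writing the
 * checksum file, instead of ignoring the mkdirs() return value.
 */
public final class ChecksumWriterSketch {
  private ChecksumWriterSketch() {}

  public static void writeChecksums(File checksumFile, String content) throws IOException {
    File dir = checksumFile.getParentFile();
    // mkdirs() returns false on failure AND when the directory already
    // exists, so pair it with isDirectory() before giving up.
    if (dir != null && !dir.mkdirs() && !dir.isDirectory()) {
      throw new IOException("Unable to create build directory " + dir);
    }
    Files.write(checksumFile.toPath(), content.getBytes(StandardCharsets.UTF_8));
  }
}
```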
[GitHub] [hadoop] simbadzina commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
simbadzina commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157784867 > Thanks @zhengchenyu and @simbadzina . > > > I think config in client side may be more flexible. > > This is a very meaningful topic. If only the client controls whether or not to enable ObserverRead will be more difficult for Admin to control, because it is very difficult to upgrade the HDFS client in full. In other words: If RBF controls whether the ObserverRead is enabled, the Admin will be very convenient to control the ObserverRead of the entire cluster, and even dynamically control whether the ObserverRead of a single NS or the entire cluster is enabled. But there may be some special Client that do not want to enable ObserverRead, so RBF should identify those requests and proxy them to the Active Namenode. > > @simbadzina This is why dynamic updates are required, so that when Admin finds that there are some abnormal Observer NameNodes, he/she can quickly disable the ObserverRead of one NS or even all NSs. > > > In our draft design, after apply [HDFS-13522](https://issues.apache.org/jira/browse/HDFS-13522).002.patch, I wanna proxy client's state id. > > Proxying client's state id to the NameNode by RBF will be very complicated. > > * A DFSClient may read or write some paths of different NameServices, and the stateID of different NS may be different. > * The client does not know the Nameservice to which the reading or writing path belong, so it cannot pass the state id to RBF. @ZanderXu in my full PR, https://github.com/apache/hadoop/pull/4127, I do also allow routers to enable and disable observer reads. The difference being that it requires a router restart. Since routers are stateless this is a quick operation. At most one minute. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] ZanderXu commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
ZanderXu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157696846 > We know observer can not guarantee strong consistency, maybe some use have high demand, they wanna disable observe read, though few user have this demand. Only a very small number of users have this high demand, and in most cases the client enables ObserverRead by default. In other words: in most cases there is no need for the client to pass the ObserverRead enable flag to RBF. So only a very small number of requests need to carry a specific flag bit to RBF, so that the RBF can force an msync to ensure consistency before proxying the request. There are several methods for the client side to carry the force-consistency flag to RBF: 1. Carry a special StateID to RBF, such as -100 (client process level) 2. Carry special field attributes to RBF through CallerContext (single RPC level) 3. etc.
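Option 2 above — carrying a per-RPC flag through CallerContext — can be modeled with a small thread-local sketch. This is plain Java illustrating the idea only: the `forceMsync` key and the router-side check are hypothetical, and real Hadoop code would use `org.apache.hadoop.ipc.CallerContext` rather than this stand-in.

```java
import java.util.Optional;

/**
 * Stand-in for attaching a per-call "force msync" marker to a
 * thread-local caller context, the way Hadoop's CallerContext carries
 * per-RPC metadata. Key name and check are illustrative, not Hadoop API.
 */
public final class ForceMsyncFlagSketch {
  private static final ThreadLocal<String> CALLER_CONTEXT = new ThreadLocal<>();
  static final String FORCE_MSYNC_KEY = "forceMsync=true";

  private ForceMsyncFlagSketch() {}

  // Client side: mark the next call as requiring strong consistency.
  public static void markForceMsync() {
    CALLER_CONTEXT.set(FORCE_MSYNC_KEY);
  }

  public static void clear() {
    CALLER_CONTEXT.remove();
  }

  // Router side: if the marker is present, msync against the active
  // NameNode before proxying; otherwise the observer read proceeds.
  public static boolean shouldForceMsync() {
    return Optional.ofNullable(CALLER_CONTEXT.get())
        .map(c -> c.contains(FORCE_MSYNC_KEY))
        .orElse(false);
  }
}
```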
[jira] [Created] (HADOOP-18295) Add S3A configuration property for `no_proxy` hosts
Sam Kramer created HADOOP-18295: --- Summary: Add S3A configuration property for `no_proxy` hosts Key: HADOOP-18295 URL: https://issues.apache.org/jira/browse/HADOOP-18295 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Reporter: Sam Kramer Seeing as there are configuration options for proxy host, port, username, and password, there should also be an option to be able to provide to the S3 client a list of hosts to not use the proxy for (i.e. `no_proxy`) I'm happy to contribute the code, but figured I'd file a ticket first to see if this the hadoop community would be open to this idea or have any desire for this feature. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhengchenyu commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
zhengchenyu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157672472 > Thanks @zhengchenyu and @simbadzina . > > > I think config in client side may be more flexible. > > This is a very meaningful topic. If only the client controls whether or not to enable ObserverRead will be more difficult for Admin to control, because it is very difficult to upgrade the HDFS client in full. In other words: If RBF controls whether the ObserverRead is enabled, the Admin will be very convenient to control the ObserverRead of the entire cluster, and even dynamically control whether the ObserverRead of a single NS or the entire cluster is enabled. But there may be some special Client that do not want to enable ObserverRead, so RBF should identify those requests and proxy them to the Active Namenode. > > @simbadzina This is why dynamic updates are required, so that when Admin finds that there are some abnormal Observer NameNodes, he/she can quickly disable the ObserverRead of one NS or even all NSs. > > > In our draft design, after apply [HDFS-13522](https://issues.apache.org/jira/browse/HDFS-13522).002.patch, I wanna proxy client's state id. > > Proxying client's state id to the NameNode by RBF will be very complicated. > > * A DFSClient may read or write some paths of different NameServices, and the stateID of different NS may be different. > * The client does not know the Nameservice to which the reading or writing path belong, so it cannot pass the state id to RBF. Yes, you are right in some condition. If all client are common user, for hive and mr application, it is right. We know observer can not guarantee strong consistency, maybe some use have high demand, they could wanna disable observe read, though few user have this demand. Maybe we can reserve configuration both on router side and client side. Yes, Proxying client's state id is complicated. I don't know whether it is necessary or not. So just delay it. 
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
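The "reserve configuration both on router side and client side" idea in the comment above amounts to a precedence rule: an admin-controlled router override, when present, wins over the client's own preference. A minimal sketch of that rule, with hypothetical names (this is not actual RBF code or configuration):

```java
// Hypothetical precedence rule for observer-read enablement when both the
// router (admin) and the client can express a preference. Names are
// illustrative only; RBF does not currently expose such a class.
class ObserverReadPolicy {
    enum RouterOverride { FORCE_ON, FORCE_OFF, UNSET }

    static boolean effectiveObserverRead(RouterOverride router, boolean clientWantsIt) {
        switch (router) {
            case FORCE_ON:  return true;          // admin enables cluster-wide
            case FORCE_OFF: return false;         // admin disables, e.g. an abnormal Observer
            default:        return clientWantsIt; // fall back to the client's configuration
        }
    }
}
```

Under such a rule the admin can dynamically disable observer reads for everyone, while a client with strict consistency requirements can still opt out on its own when the admin has not forced a decision.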
[jira] [Updated] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
[ https://issues.apache.org/jira/browse/HADOOP-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18294: Labels: pull-request-available (was: ) > Ensure build folder exists before writing checksum > file.ProtocRunner#writeChecksums > --- > > Key: HADOOP-18294 > URL: https://issues.apache.org/jira/browse/HADOOP-18294 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.3 >Reporter: Ashutosh Gupta >Assignee: Ashutosh Gupta >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Ensure build folder exists before writing checksum > file.ProtocRunner#writeChecksums -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
[ https://issues.apache.org/jira/browse/HADOOP-18294?focusedWorklogId=782048=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782048 ] ASF GitHub Bot logged work on HADOOP-18294: --- Author: ASF GitHub Bot Created on: 16/Jun/22 13:36 Start Date: 16/Jun/22 13:36 Worklog Time Spent: 10m Work Description: ashutoshcipher opened a new pull request, #4446: URL: https://github.com/apache/hadoop/pull/4446 ### Description of PR Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums * JIRA: HADOOP-18294 - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? Issue Time Tracking --- Worklog Id: (was: 782048) Remaining Estimate: 0h Time Spent: 10m > Ensure build folder exists before writing checksum > file.ProtocRunner#writeChecksums > --- > > Key: HADOOP-18294 > URL: https://issues.apache.org/jira/browse/HADOOP-18294 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.3 >Reporter: Ashutosh Gupta >Assignee: Ashutosh Gupta >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Ensure build folder exists before writing checksum > file.ProtocRunner#writeChecksums
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #4446: HADOOP-18294.Ensure build folder exists before writing checksum file.…
ashutoshcipher opened a new pull request, #4446: URL: https://github.com/apache/hadoop/pull/4446 ### Description of PR Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums * JIRA: HADOOP-18294 - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[jira] [Created] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
Ashutosh Gupta created HADOOP-18294: --- Summary: Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums Key: HADOOP-18294 URL: https://issues.apache.org/jira/browse/HADOOP-18294 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.3.3 Reporter: Ashutosh Gupta Assignee: Ashutosh Gupta Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums
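The improvement itself is simple: create the build folder (if absent) before writing the checksum file, so the write cannot fail when the directory was never created. A hedged sketch — the class and method names here are illustrative, not the actual ProtocRunner code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of the fix, under assumed names: ensure the parent
// directory exists before writing the checksum file.
class ChecksumWriter {
    static void writeChecksum(Path checksumFile, String checksum) throws IOException {
        Path parent = checksumFile.getParent();
        if (parent != null) {
            // No-op if the directory already exists; creates all missing
            // intermediate directories otherwise.
            Files.createDirectories(parent);
        }
        Files.write(checksumFile, checksum.getBytes());
    }
}
```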
[GitHub] [hadoop] KevinWikant commented on pull request #4410: HDFS-16064. Determine when to invalidate corrupt replicas based on number of usable replicas
KevinWikant commented on PR #4410: URL: https://github.com/apache/hadoop/pull/4410#issuecomment-1157654335 @ashutoshcipher @aajisaka @ZanderXu really appreciate the reviews on this PR, thank you! @aajisaka I have removed the unused imports, please let me know if you have any other comments/concerns
[jira] [Work logged] (HADOOP-18288) Total requests and total requests per sec served by RPC servers
[ https://issues.apache.org/jira/browse/HADOOP-18288?focusedWorklogId=782037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782037 ] ASF GitHub Bot logged work on HADOOP-18288: --- Author: ASF GitHub Bot Created on: 16/Jun/22 13:03 Start Date: 16/Jun/22 13:03 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4431: URL: https://github.com/apache/hadoop/pull/4431#issuecomment-1157635614 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 27s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 17s | | trunk passed | | +1 :green_heart: | compile | 22m 51s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 36s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 29s | | trunk passed | | -1 :x: | javadoc | 1m 45s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 3m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 28s | | the patch passed | | +1 :green_heart: | compile | 22m 4s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 4s | | the patch passed | | +1 :green_heart: | compile | 20m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 4m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 4m 28s | | the patch passed | | -1 :x: | javadoc | 1m 33s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 3m 38s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 28s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 30s | | hadoop-common in the patch passed. | | -1 :x: | unit | 418m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 58s | | The patch does not generate ASF License warnings. | | | | 665m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[GitHub] [hadoop] hadoop-yetus commented on pull request #4431: HADOOP-18288. Total requests and total requests per sec served by RPC servers
hadoop-yetus commented on PR #4431: URL: https://github.com/apache/hadoop/pull/4431#issuecomment-1157635614 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 27s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 17s | | trunk passed | | +1 :green_heart: | compile | 22m 51s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 36s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 29s | | trunk passed | | -1 :x: | javadoc | 1m 45s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 3m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 32s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 28s | | the patch passed | | +1 :green_heart: | compile | 22m 4s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 4s | | the patch passed | | +1 :green_heart: | compile | 20m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 4m 28s | | the patch passed | | -1 :x: | javadoc | 1m 33s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 3m 38s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 28s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 30s | | hadoop-common in the patch passed. | | -1 :x: | unit | 418m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 58s | | The patch does not generate ASF License warnings. 
| | | | 665m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4431/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4431 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint | | uname | Linux 878492ceb621 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64
[GitHub] [hadoop] tomscut commented on pull request #4321: HDFS-16581. Print node status when executing printTopology
tomscut commented on PR #4321: URL: https://github.com/apache/hadoop/pull/4321#issuecomment-1157563468 Thanks @jianghuazhu for your contribution. Thanks @virajjasani for your review.
[GitHub] [hadoop] tomscut merged pull request #4321: HDFS-16581. Print node status when executing printTopology
tomscut merged PR #4321: URL: https://github.com/apache/hadoop/pull/4321
[GitHub] [hadoop] ZanderXu commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
ZanderXu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157530100 As in my draft PR above, RBF always updates the lastSeenTxid from the Active NameNode and stores it. When an NS enables ObserverRead, RBF sets the stored lastSeenTxid of that NS in the RPC header and carries it to the Observer NameNode; if the NS disables ObserverRead, RBF does not set the state id in the RPC header, so even if the request reaches the Observer, the Observer returns StandbyException.
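The routing rule described in that comment can be sketched as follows; the class and method names are hypothetical, not the actual Router code:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (names assumed, not real RBF classes): the router
// tracks lastSeenTxid per nameservice from the Active NameNode, and only
// places it in the outgoing RPC header when ObserverRead is enabled for
// that NS. An absent state id means an Observer will reject the request
// with StandbyException, effectively forcing it back to the Active.
class ObserverReadStateIds {
    private final Map<String, Long> lastSeenTxidByNs = new ConcurrentHashMap<>();
    private final Map<String, Boolean> observerReadEnabled = new ConcurrentHashMap<>();

    void recordActiveTxid(String ns, long txid) {
        // Keep the highest txid seen; a stale update must not move it backwards.
        lastSeenTxidByNs.merge(ns, txid, Math::max);
    }

    void setObserverRead(String ns, boolean enabled) {
        // The dynamic admin toggle: per-NS enable/disable at runtime.
        observerReadEnabled.put(ns, enabled);
    }

    // State id to place in the RPC header; empty means "do not set one".
    Optional<Long> stateIdForHeader(String ns) {
        if (observerReadEnabled.getOrDefault(ns, false)) {
            return Optional.ofNullable(lastSeenTxidByNs.get(ns));
        }
        return Optional.empty();
    }
}
```

This keeps the enable/disable decision entirely on the router side, which is the point ZanderXu makes: the admin can flip one NS (or all of them) without touching any client.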
[GitHub] [hadoop] ZanderXu commented on pull request #4441: HDFS-13522. IPC changes to support observer reads through routers.
ZanderXu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1157525096 Thanks @zhengchenyu and @simbadzina . > I think config in client side may be more flexible. This is a very meaningful topic. If only the client controls whether ObserverRead is enabled, it will be harder for an Admin to control, because it is very difficult to upgrade every HDFS client. In other words: if RBF controls whether ObserverRead is enabled, the Admin can very conveniently control ObserverRead for the entire cluster, and even dynamically control whether it is enabled for a single NS or for the whole cluster. But there may be some special clients that do not want ObserverRead, so RBF should identify those requests and proxy them to the Active NameNode. @simbadzina This is why dynamic updates are required, so that when the Admin finds some abnormal Observer NameNodes, he/she can quickly disable ObserverRead for one NS or even all NSs. > In our draft design, after apply [HDFS-13522](https://issues.apache.org/jira/browse/HDFS-13522).002.patch, I wanna proxy client's state id. Proxying the client's state id to the NameNode through RBF would be very complicated: - A DFSClient may read or write paths in different NameServices, and the stateID of each NS may differ. - The client does not know which Nameservice the path being read or written belongs to, so it cannot pass the state id to RBF.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation
hadoop-yetus commented on PR #4127: URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1157494963 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 19s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 12s | | trunk passed | | +1 :green_heart: | compile | 24m 51s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 26m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 5m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 6m 47s | | trunk passed | | -1 :x: | javadoc | 1m 31s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/14/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 5m 42s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 11m 51s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 6s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 5s | | the patch passed | | +1 :green_heart: | compile | 24m 3s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 3s | | the patch passed | | +1 :green_heart: | compile | 21m 30s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 30s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/14/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 26s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/14/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 339 unchanged - 1 fixed = 342 total (was 340) | | +1 :green_heart: | mvnsite | 6m 36s | | the patch passed | | -1 :x: | javadoc | 1m 30s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/14/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. | | +1 :green_heart: | javadoc | 5m 44s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 12m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 51s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 10s | | hadoop-common in the patch passed. 
| | +1 :green_heart: | unit | 2m 54s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 362m 44s | | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 34m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 1m 35s | | The patch does not generate ASF License warnings. | | | | 688m 34s | | | | Reason | Tests | |---:|:--| | Failed junit tests |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4367: HDFS-16600. Fix deadlock on DataNode side.
hadoop-yetus commented on PR #4367: URL: https://github.com/apache/hadoop/pull/4367#issuecomment-1157483601 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 39m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 26s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed | | +1 :green_heart: | checkstyle | 1m 8s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 43s | | trunk passed | | +1 :green_heart: | spotbugs | 3m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 52s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 54s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | | the patch passed | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 2s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 258m 9s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. | | | | 400m 10s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4367/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4367 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux c020b276eba7 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f08e25d23aa96705511da6358769b81a4a711080 | | Default Java | Red Hat, Inc.-1.8.0_332-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4367/8/testReport/ | | Max. process+thread count | 3757 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4367/8/console | | versions | git=2.9.5 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update
[ https://issues.apache.org/jira/browse/HADOOP-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18293: Description: Create a new release off the branch-3.3.3 line with a few more changes * wrap up of security changes * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a client...reinstate once the updated jar is tested * try to get an arm build out tool was: Create a new release off the branch-3.3.3 line with a few more changes * wrap up of security changes * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a client...reinstate once the updated jar is tested > Release Hadoop 3.3.4 critical fix update > > > Key: HADOOP-18293 > URL: https://issues.apache.org/jira/browse/HADOOP-18293 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Create a new release off the branch-3.3.3 line with a few more changes > * wrap up of security changes > * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a > client...reinstate once the updated jar is tested > * try to get an arm build out tool
[jira] [Commented] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update
[ https://issues.apache.org/jira/browse/HADOOP-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17554997#comment-17554997 ] Steve Loughran commented on HADOOP-18293: - follow on to HADOOP-18198 > Release Hadoop 3.3.4 critical fix update > > > Key: HADOOP-18293 > URL: https://issues.apache.org/jira/browse/HADOOP-18293 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Create a new release off the branch-3.3.3 line with a few more changes > * wrap up of security changes > * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a > client...reinstate once the updated jar is tested
[jira] [Created] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update
Steve Loughran created HADOOP-18293: --- Summary: Release Hadoop 3.3.4 critical fix update Key: HADOOP-18293 URL: https://issues.apache.org/jira/browse/HADOOP-18293 Project: Hadoop Common Issue Type: Task Components: build Reporter: Steve Loughran Assignee: Steve Loughran Create a new release off the branch-3.3.3 line with a few more changes * wrap up of security changes * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a client...reinstate once the updated jar is tested
[jira] [Commented] (HADOOP-18028) High performance S3A input stream with prefetching & caching
[ https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17554991#comment-17554991 ] Steve Loughran commented on HADOOP-18028: - update on this: I can see that some things are going to take time to complete/stabilize (file caching) but I do want to fork off a new release off branch-3.3 soon. # what is needed to include this as a preview feature where the default settings (caching, buffer sizes etc) are going to be usable but low risk? # what can we/should we do without and just document as "not yet -please help"? # what do we need in terms of docs? # what broader testing has anyone done? > High performance S3A input stream with prefetching & caching > > > Key: HADOOP-18028 > URL: https://issues.apache.org/jira/browse/HADOOP-18028 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Bhalchandra Pandit >Assignee: Bhalchandra Pandit >Priority: Major > Labels: pull-request-available > Time Spent: 13h 50m > Remaining Estimate: 0h > > I work for Pinterest. I developed a technique for vastly improving read > throughput when reading from the S3 file system. It not only helps the > sequential read case (like reading a SequenceFile) but also significantly > improves read throughput of a random access case (like reading Parquet). This > technique has been very useful in significantly improving efficiency of the > data processing jobs at Pinterest. > > I would like to contribute that feature to Apache Hadoop. More details on > this technique are available in this blog I wrote recently: > [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
[GitHub] [hadoop] lfxy commented on pull request #4398: HDFS-16613. EC: Improve performance of decommissioning dn with many ec blocks
lfxy commented on PR #4398:
URL: https://github.com/apache/hadoop/pull/4398#issuecomment-1157448224

@hi-adachi Excuse me, what is the next step? Will this PR be merged into the trunk branch?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4311: HDFS-13522: IPC changes to support observer reads through routers.
hadoop-yetus commented on PR #4311:
URL: https://github.com/apache/hadoop/pull/4311#issuecomment-1157387445

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 39s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 42s | | trunk passed |
| +1 :green_heart: | compile | 26m 34s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 23m 28s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 58s | | trunk passed |
| +1 :green_heart: | mvnsite | 7m 39s | | trunk passed |
| -1 :x: | javadoc | 1m 36s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/7/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 6m 37s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 12m 53s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 4s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 30s | | the patch passed |
| +1 :green_heart: | compile | 24m 50s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 24m 50s | | the patch passed |
| +1 :green_heart: | compile | 22m 57s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 22m 57s | | the patch passed |
| -1 :x: | blanks | 0m 1s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/7/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 4m 18s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/7/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 198 unchanged - 1 fixed = 201 total (was 199) |
| +1 :green_heart: | mvnsite | 7m 20s | | the patch passed |
| -1 :x: | javadoc | 1m 41s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/7/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 6m 28s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 13m 28s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 42s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 25s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 3m 9s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 258m 53s | | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 23m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 2m 0s | | The patch does not generate ASF License warnings. |
| | | 580m 49s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests |
[GitHub] [hadoop] Samrat002 commented on pull request #4400: HDFS-16616. remove use of org.apache.hadoop.util.Sets
Samrat002 commented on PR #4400:
URL: https://github.com/apache/hadoop/pull/4400#issuecomment-1157313860

- "hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1" failed for both `trunk` and the `patch`.
- All tests passed. Please review the PR. Thanks.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4400: HDFS-16616. remove use of org.apache.hadoop.util.Sets
hadoop-yetus commented on PR #4400:
URL: https://github.com/apache/hadoop/pull/4400#issuecomment-1157289843

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 1m 22s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 10 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 8s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 29m 32s | | trunk passed |
| +1 :green_heart: | compile | 7m 29s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 8m 38s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 42s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 0s | | trunk passed |
| -1 :x: | javadoc | 2m 1s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 2m 56s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 57s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 5s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 13s | | the patch passed |
| +1 :green_heart: | compile | 7m 27s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 7m 27s | | the patch passed |
| +1 :green_heart: | compile | 7m 28s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 7m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 35s | | hadoop-hdfs-project: The patch generated 0 new + 385 unchanged - 1 fixed = 385 total (was 386) |
| +1 :green_heart: | mvnsite | 2m 47s | | the patch passed |
| -1 :x: | javadoc | 1m 18s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 2m 42s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 57s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 52s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 456m 39s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 37m 7s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 18s | | The patch does not generate ASF License warnings. |
| | | 660m 25s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4400 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 1a3d66fc3631 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 856d72fd5476cd51074d9ec6298cafa45a79f7f2 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private