[GitHub] [hadoop] iwasakims merged pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
iwasakims merged pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims commented on pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
iwasakims commented on pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139#issuecomment-658571410

Thanks, @aajisaka. I merged this.
[GitHub] [hadoop] aajisaka commented on pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
aajisaka commented on pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139#issuecomment-658557347

+1, JUnit 3 dependency must be removed.

hadoop.mapreduce.lib.input.TestCombineFileInputFormat is fixed by #2136
[GitHub] [hadoop] hadoop-yetus commented on pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
hadoop-yetus commented on pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139#issuecomment-658554971

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 6s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 21m 31s | trunk passed |
| +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 0m 29s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 0m 24s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 31s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 31s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 21s | hadoop-mapreduce-client-jobclient in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 0m 20s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 0m 42s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 39s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | javac | 0m 27s | the patch passed |
| +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 0m 23s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 18s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 25s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 19s | hadoop-mapreduce-client-jobclient in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 0m 17s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 0m 45s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 125m 12s | hadoop-mapreduce-client-jobclient in the patch passed. |
| -1 :x: | asflicense | 0m 37s | The patch generated 1 ASF License warnings. |
| | | | 188m 52s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.mapreduce.lib.input.TestCombineFileInputFormat |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2139 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d1451cfbc80a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 317fe4584a5 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/artifact/out/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/artifact/out/patch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/testReport/ |
| asflicense | https://builds.apache.org/job/hadoop-multibranch/job/PR-2139/1/artifact/out/p
[jira] [Assigned] (HADOOP-16915) ABFS: Test failure ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
[ https://issues.apache.org/jira/browse/HADOOP-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sneha Vijayarajan reassigned HADOOP-16915:
------------------------------------------

    Assignee: Bilahari T H

> ABFS: Test failure ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-16915
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16915
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Bilahari T H
>            Assignee: Bilahari T H
>            Priority: Major
>              Labels: abfsactive
>
> Ref: https://issues.apache.org/jira/browse/HADOOP-16890

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka commented on pull request #2136: MAPREDUCE-7284. TestCombineFileInputFormat#testMissingBlocks fails
aajisaka commented on pull request #2136:
URL: https://github.com/apache/hadoop/pull/2136#issuecomment-658542941

Thank you @iwasakims
[GitHub] [hadoop] aajisaka merged pull request #2136: MAPREDUCE-7284. TestCombineFileInputFormat#testMissingBlocks fails
aajisaka merged pull request #2136:
URL: https://github.com/apache/hadoop/pull/2136
[GitHub] [hadoop] aajisaka commented on pull request #2051: HDFS-15385 Upgrade boost library
aajisaka commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658541750

Thank you @GauthamBanasandra for your work. I really appreciate it.
[GitHub] [hadoop] aajisaka merged pull request #2051: HDFS-15385 Upgrade boost library
aajisaka merged pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ahmed Hussein updated HADOOP-17099:
-----------------------------------

    Attachment: HADOOP-17099.007.patch

> Replace Guava Predicate with Java8+ Predicate
> ---------------------------------------------
>
>                 Key: HADOOP-17099
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17099
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Minor
>         Attachments: HADOOP-17099.004.patch, HADOOP-17099.005.patch, HADOOP-17099.006.patch, HADOOP-17099.007.patch
>
> {{com.google.common.base.Predicate}} can be replaced with {{java.util.function.Predicate}}.
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
>     Occurrences of 'com.google.common.base.Predicate' in project with mask '*.java'
> Found Occurrences (9 usages found)
>     org.apache.hadoop.hdfs.server.blockmanagement (1 usage found)
>         CombinedHostFileManager.java (1 usage found)
>             43 import com.google.common.base.Predicate;
>     org.apache.hadoop.hdfs.server.namenode (1 usage found)
>         NameNodeResourceChecker.java (1 usage found)
>             38 import com.google.common.base.Predicate;
>     org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found)
>         Snapshot.java (1 usage found)
>             41 import com.google.common.base.Predicate;
>     org.apache.hadoop.metrics2.impl (2 usages found)
>         MetricsRecords.java (1 usage found)
>             21 import com.google.common.base.Predicate;
>         TestMetricsSystemImpl.java (1 usage found)
>             41 import com.google.common.base.Predicate;
>     org.apache.hadoop.yarn.logaggregation (1 usage found)
>         AggregatedLogFormat.java (1 usage found)
>             77 import com.google.common.base.Predicate;
>     org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found)
>         LogAggregationFileController.java (1 usage found)
>             22 import com.google.common.base.Predicate;
>     org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage found)
>         LogAggregationIndexedFileController.java (1 usage found)
>             22 import com.google.common.base.Predicate;
>     org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation (1 usage found)
>         AppLogAggregatorImpl.java (1 usage found)
>             75 import com.google.common.base.Predicate;
> {code}
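For readers following the migration, a minimal sketch of what the swap looks like at a call site (the class name and sample values below are illustrative, not taken from the patch). The JDK type is a drop-in replacement for the Guava one; the method name changes from `apply()` to `test()`, and composition helpers come as default methods instead of Guava's `Predicates` utility class:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateExample {
    // Keep only the strings that match the given JDK predicate.
    static List<String> keep(List<String> in, Predicate<String> p) {
        return in.stream().filter(p).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // test() replaces Guava's apply(); and()/or()/negate() are built in.
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        Predicate<String> shortAndNonEmpty = nonEmpty.and(s -> s.length() <= 2);
        System.out.println(keep(List.of("a", "", "b", "long"), shortAndNonEmpty));
        // prints [a, b]
    }
}
```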
[GitHub] [hadoop] iwasakims commented on pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
iwasakims commented on pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139#issuecomment-658510694

The fix worked on my local.

```
[centos@centos7 hadoop-3.4.0-SNAPSHOT]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar sleep -mt 1 -rt 1 -m 1 -r 1
2020-07-15 11:12:13,657 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-07-15 11:12:14,670 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2020-07-15 11:12:15,259 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/centos/.staging/job_1594777964233_0002
2020-07-15 11:12:16,806 INFO mapreduce.JobSubmitter: number of splits:1
2020-07-15 11:12:17,456 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1594777964233_0002
2020-07-15 11:12:17,456 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-07-15 11:12:17,713 INFO conf.Configuration: resource-types.xml not found
2020-07-15 11:12:17,713 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-07-15 11:12:17,790 INFO impl.YarnClientImpl: Submitted application application_1594777964233_0002
2020-07-15 11:12:17,832 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1594777964233_0002/
2020-07-15 11:12:17,833 INFO mapreduce.Job: Running job: job_1594777964233_0002
2020-07-15 11:12:26,114 INFO mapreduce.Job: Job job_1594777964233_0002 running in uber mode : false
2020-07-15 11:12:26,115 INFO mapreduce.Job:  map 0% reduce 0%
2020-07-15 11:12:32,220 INFO mapreduce.Job:  map 100% reduce 0%
2020-07-15 11:12:37,281 INFO mapreduce.Job:  map 100% reduce 100%
2020-07-15 11:12:38,326 INFO mapreduce.Job: Job job_1594777964233_0002 completed successfully
...
```
[GitHub] [hadoop] iwasakims opened a new pull request #2139: MAPREDUCE-7285. Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar.
iwasakims opened a new pull request #2139:
URL: https://github.com/apache/hadoop/pull/2139
[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157802#comment-17157802 ]

Hadoop QA commented on HADOOP-17099:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 18m 46s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 3m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 27s | trunk passed |
| +1 | compile | 17m 45s | trunk passed |
| +1 | checkstyle | 2m 54s | trunk passed |
| +1 | mvnsite | 4m 51s | trunk passed |
| +1 | shadedclient | 23m 33s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 44s | trunk passed |
| 0 | spotbugs | 1m 26s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 0m 30s | branch/hadoop-build-tools no findbugs output file (findbugsXml.xml) |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 28s | the patch passed |
| +1 | compile | 17m 2s | the patch passed |
| +1 | javac | 17m 2s | the patch passed |
| -0 | checkstyle | 2m 48s | root: The patch generated 2 new + 99 unchanged - 6 fixed = 101 total (was 105) |
| +1 | mvnsite | 4m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 15m 36s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 42s | the patch passed |
| 0 | findbugs | 0m 30s | hadoop-build-tools has no data from findbugs |
|| || || || Other Tests ||
| +1 | unit | 0m 29s | hadoop-build-tools in the patch passed. |
| +1 | unit | 9m 26s | hadoop-common in the patch passed. |
| -1 | unit | 108m 20s | hadoop-hdfs in the patch passed. |
| +1 | unit | 4m 12s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 22m 8s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense |
[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157789#comment-17157789 ]

Hadoop QA commented on HADOOP-17099:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 1s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 50s | trunk passed |
| +1 | compile | 21m 23s | trunk passed |
| +1 | checkstyle | 3m 15s | trunk passed |
| +1 | mvnsite | 5m 28s | trunk passed |
| +1 | shadedclient | 26m 0s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 4m 4s | trunk passed |
| 0 | spotbugs | 1m 43s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 0m 29s | branch/hadoop-build-tools no findbugs output file (findbugsXml.xml) |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 10s | the patch passed |
| +1 | compile | 21m 13s | the patch passed |
| +1 | javac | 21m 13s | the patch passed |
| -0 | checkstyle | 3m 12s | root: The patch generated 2 new + 99 unchanged - 6 fixed = 101 total (was 105) |
| +1 | mvnsite | 5m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 17m 5s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 58s | the patch passed |
| 0 | findbugs | 0m 31s | hadoop-build-tools has no data from findbugs |
|| || || || Other Tests ||
| +1 | unit | 0m 29s | hadoop-build-tools in the patch passed. |
| -1 | unit | 10m 30s | hadoop-common in the patch passed. |
| -1 | unit | 122m 20s | hadoop-hdfs in the patch passed. |
| +1 | unit | 4m 44s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 23m 30s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 1m 5s
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157781#comment-17157781 ]

Ahmed Hussein commented on HADOOP-17101:
----------------------------------------

Thanks [~jeagles] for the valuable feedback. [^HADOOP-17101.008.patch] should be ready to merge.

I checked all UTs that failed during the submissions of the patches.

|TestHDFSContractMultipartUploader|Broken on Trunk. HDFS-15471.|Broken|
|TestBlockTokenWithDFSStriped|Filed Jira HDFS-15459|Fixed|
|TestCacheDirectives| |Fixed|
|TestCheckpointsWithSnapshots| |Fixed|
|TestDataNodeMXBean| |Fixed|
|TestEditLog| |Fixed|
|TestFSImage| |Fixed|
|TestNameEditsConfigs| |Fixed|
|TestPersistentStoragePolicySatisfier| |Fixed|
|TestRollingUpgrade| |Fixed|
|TestSecondaryNameNodeUpgrade| |Fixed|
|TestStartup| |Fixed|
|TestStorageRestore| |Fixed|
|TestUnderReplicatedBlocks| |Fixed|
|TestBPOfferService| |Flaky|
|TestDFSInotifyEventInputStreamKerberized| |Flaky|
|TestExternalStoragePolicySatisfier|filed HDFS-15456|Flaky|
|TestFileChecksum|for HADOOP-17101, filed HDFS-15461|Flaky|
|TestFileCreation|filed HDFS-15460|Flaky|
|TestFsDatasetImpl|filed HDFS-15457|Flaky|
|TestGetFileChecksum|for HADOOP-17101, Filed HDFS-1546. an old jira exist HDFS-4723|Flaky|
|TestGroupsCaching| |Flaky|
|TestJournalNodeSync| |Flaky|
|TestNameNodeRetryCacheMetrics|filed a jira HDFS-1548|Flaky|
|TestPipelineFailover| |Flaky|
|TestSafeModeWithStripedFileWithRandomECPolicy| |Flaky|
|TestStripedFileAppend| |Flaky|
|TestWebHDFS| |Flaky|

> Replace Guava Function with Java8+ Function
> -------------------------------------------
>
>                 Key: HADOOP-17101
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17101
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>         Attachments: HADOOP-17101.005.patch, HADOOP-17101.006.patch, HADOOP-17101.008.patch
>
> {code:java}
> Targets
>     Occurrences of 'com.google.common.base.Function'
> Found Occurrences (7 usages found)
>     hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff (1 usage found)
>         Apache_Hadoop_HDFS_2.6.0.xml (1 usage found)
>             13603 type="com.google.common.base.Function"
>     org.apache.hadoop.hdfs.server.blockmanagement (1 usage found)
>         HostSet.java (1 usage found)
>             20 import com.google.common.base.Function;
>     org.apache.hadoop.hdfs.server.datanode.checker (1 usage found)
>         AbstractFuture.java (1 usage found)
>             58 * (ListenableFuture, com.google.common.base.Function) Futures.transform}
>     org.apache.hadoop.hdfs.server.namenode.ha (1 usage found)
>         HATestUtil.java (1 usage found)
>             40 import com.google.common.base.Function;
>     org.apache.hadoop.hdfs.server.protocol (1 usage found)
>         RemoteEditLog.java (1 usage found)
>             20 import com.google.common.base.Function;
>     org.apache.hadoop.mapreduce.lib.input (1 usage found)
>         TestFileInputFormat.java (1 usage found)
>             58 import com.google.common.base.Function;
>     org.apache.hadoop.yarn.api.protocolrecords.impl.pb (1 usage found)
>         GetApplicationsRequestPBImpl.java (1 usage found)
>             38 import com.google.common.base.Function;
> {code}
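As with the Predicate sub-task, the Guava-to-JDK Function swap is mechanical. A minimal sketch of the resulting call site (class name and sample values are illustrative, not from the patch): `apply()` keeps its name, and `andThen()`/`compose()` replace Guava's `Functions.compose` helper:

```java
import java.util.function.Function;

public class FunctionExample {
    public static void main(String[] args) {
        // java.util.function.Function replaces com.google.common.base.Function;
        // apply() is unchanged, and chaining is a built-in default method.
        Function<String, Integer> length = String::length;
        Function<String, Integer> doubled = length.andThen(n -> n * 2);
        System.out.println(doubled.apply("hdfs")); // prints 8
    }
}
```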
[GitHub] [hadoop] hadoop-yetus commented on pull request #2051: HDFS-15385 Upgrade boost library
hadoop-yetus commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658465685

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 22m 37s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 9s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 19s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 3s | trunk passed |
| +1 :green_heart: | compile | 19m 22s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 17m 8s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | mvnsite | 17m 36s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 8s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 40s | root in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 5m 40s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -0 :warning: | patch | 20m 50s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 32s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 19m 56s | the patch passed |
| +1 :green_heart: | compile | 19m 56s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| -1 :x: | cc | 19m 56s | root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 26 new + 136 unchanged - 26 fixed = 162 total (was 162) |
| +1 :green_heart: | golang | 19m 56s | the patch passed |
| +1 :green_heart: | javac | 19m 56s | the patch passed |
| +1 :green_heart: | compile | 22m 35s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -1 :x: | cc | 22m 35s | root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 13 new + 149 unchanged - 13 fixed = 162 total (was 162) |
| +1 :green_heart: | golang | 22m 35s | the patch passed |
| +1 :green_heart: | javac | 22m 35s | the patch passed |
| +1 :green_heart: | hadolint | 0m 5s | There were no new hadolint issues. |
| +1 :green_heart: | mvnsite | 22m 59s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 18s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 17m 33s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 32s | root in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 6m 55s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
||| _ Other Tests _ |
| -1 :x: | unit | 578m 11s | root in the patch passed. |
| -1 :x: | asflicense | 1m 51s | The patch generated 1 ASF License warnings. |
| | | | 816m 25s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.applications.distributedshell.TestDistributedShell |
| | hadoop.yarn.server.resourcemanager.placement.TestUserGroupMappingPlacementRule |
| | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| | hadoop.yarn.sls.TestReservationSystemInvariants |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
| | hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
| | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
| | hadoop.mapreduce.lib.input.TestCombineFileInputFormat |

| Subs
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157747#comment-17157747 ] Hadoop QA commented on HADOOP-17101:

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 29m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 29s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 42s{color} | {color:green} root: The patch generated 0 new + 67 unchanged - 3 fixed = 67 total (was 70) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 10s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 4s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157716#comment-17157716 ] Hadoop QA commented on HADOOP-17101:

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 21s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 51s{color} | {color:green} root: The patch generated 0 new + 67 unchanged - 3 fixed = 67 total (was 70) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 39s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}142m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 10s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 18s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17099: --- Attachment: HADOOP-17099.006.patch > Replace Guava Predicate with Java8+ Predicate > - > > Key: HADOOP-17099 > URL: https://issues.apache.org/jira/browse/HADOOP-17099 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Minor > Attachments: HADOOP-17099.004.patch, HADOOP-17099.005.patch, > HADOOP-17099.006.patch > > > {{com.google.common.base.Predicate}} can be replaced with > {{java.util.function.Predicate}}. > The change involving 9 occurrences is straightforward: > {code:java} > Targets > Occurrences of 'com.google.common.base.Predicate' in project with mask > '*.java' > Found Occurrences (9 usages found) > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > CombinedHostFileManager.java (1 usage found) > 43 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode (1 usage found) > NameNodeResourceChecker.java (1 usage found) > 38 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found) > Snapshot.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.metrics2.impl (2 usages found) > MetricsRecords.java (1 usage found) > 21 import com.google.common.base.Predicate; > TestMetricsSystemImpl.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation (1 usage found) > AggregatedLogFormat.java (1 usage found) > 77 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found) > LogAggregationFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage > found) > LogAggregationIndexedFileController.java (1 usage found) > 22 import 
com.google.common.base.Predicate; > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation > (1 usage found) > AppLogAggregatorImpl.java (1 usage found) > 75 import com.google.common.base.Predicate; > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
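The mechanical shape of the replacement described above can be sketched as follows. This is a hedged, self-contained illustration; the names (isPositive, countMatching) are invented for the example and are not taken from the Hadoop files listed in the occurrence report:

```java
import java.util.List;
import java.util.function.Predicate;

// Guava's com.google.common.base.Predicate<T> exposes apply(T); the JDK's
// java.util.function.Predicate<T> exposes test(T) and accepts lambdas, so the
// migration is mostly a change of import plus apply -> test at call sites.
public class PredicateMigration {

    // After the migration, a predicate is just a lambda against the JDK type.
    static final Predicate<Integer> isPositive = n -> n > 0;

    static long countMatching(List<Integer> values, Predicate<Integer> p) {
        // Predicate.test replaces Guava's Predicate.apply.
        return values.stream().filter(p).count();
    }

    public static void main(String[] args) {
        System.out.println(countMatching(List.of(-1, 2, 3), isPositive)); // 2
    }
}
```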
[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157641#comment-17157641 ] Ahmed Hussein commented on HADOOP-17099: Thanks [~jeagles]! I believe that using {{stream.filter}} then {{findFirst}} would imply iterating on the entire collection. Perhaps a {{for-loop}} that returns on the first element that satisfies the {{Predicate}} should do the job. I checked the unit tests:

||Test||Notes||Status||
|TestHDFSContractMultipartUploader|Broken on Trunk. HDFS-15471.|Broken|
|TestCapacityOverTimePolicy| |Flaky|
|TestDecommission| |Flaky|
|TestDecommissionWithStripedBackoffMonitor| |Flaky|
|TestDFSStripedOutputStreamWithRandomECPolicy| |Flaky|
|TestExternalStoragePolicySatisfier|filed HDFS-15456|Flaky|
|TestFileChecksum|for HADOOP-17101, filed HDFS-15461|Flaky|
|TestFixKerberosTicketOrder| |Flaky|
|TestHDFSFileSystemContract| |Flaky|
|TestMaintenanceState| |Flaky|
|TestQuota| |Flaky|
|TestRaceWhenRelogin| |Flaky|
|TestSafeModeWithStripedFile| |Flaky|
|TestBlockTokenWithDFSStriped|Filed Jira HDFS-15459|Flaky|
|TestDFSInotifyEventInputStreamKerberized| |Flaky|
|TestNameNodeRetryCacheMetrics|filed a jira HDFS-1548|Flaky|
|TestDFSUpgradeWithHA| |Flaky|
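For reference, the JDK documents {{Stream.findFirst}} as a short-circuiting terminal operation, so the stream pipeline stops once a match is produced. The sketch below compares the two approaches discussed in the comment; it is illustrative only and uses hypothetical names, not code from MetricsRecords.java:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class FirstMatchDemo {

    // Stream pipeline: filter is lazy and findFirst is a short-circuiting
    // terminal operation, so evaluation stops at the first matching element.
    static <T> Optional<T> viaStream(List<T> items, Predicate<T> p) {
        return items.stream().filter(p).findFirst();
    }

    // Explicit loop that returns on the first element satisfying the Predicate.
    static <T> Optional<T> viaLoop(List<T> items, Predicate<T> p) {
        for (T item : items) {
            if (p.test(item)) {
                return Optional.of(item);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 3, 4, 6);
        System.out.println(viaStream(xs, x -> x % 2 == 0)); // Optional[4]
        System.out.println(viaLoop(xs, x -> x % 2 == 0));   // Optional[4]
    }
}
```

Both variants visit only the prefix of the list up to the first match; the choice between them is mostly a question of style.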
[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
[ https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157637#comment-17157637 ] Ayush Saxena commented on HADOOP-16998: --- Seems this is in trunk as well, [~ste...@apache.org] Should we add 3.4.0 as well in the fix version? > WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException > -- > > Key: HADOOP-16998 > URL: https://issues.apache.org/jira/browse/HADOOP-16998 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Fix For: 3.3.1 > > Attachments: HADOOP-16998.patch > > > During HFile create, at the end when called close() on the OutputStream, > there is some pending data to get flushed. When this flush happens, an > Exception is thrown back from Storage. The Azure-storage SDK layer will throw > back IOE. (Even if it is a StorageException thrown from the Storage, the SDK > converts it to IOE.) But at HBase, we end up getting IllegalArgumentException > which causes the RS to get aborted. If we get back IOE, the flush will get > retried instead of aborting RS. > The reason is this > NativeAzureFsOutputStream uses Azure-storage SDK's BlobOutputStreamInternal. > But the BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream > which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream > calls close on SyncableDataOutputStream and it uses below method from > FilterOutputStream > {code} > public void close() throws IOException { > try (OutputStream ostream = out) { > flush(); > } > } > {code} > Here the flush call caused an IOE to be thrown to here. 
The finally will > issue a close call on ostream (which is an instance of BlobOutputStreamInternal). > When BlobOutputStreamInternal#close() is called, if an exception has already > occurred on that Stream, it will throw back the same > Exception > {code} > public synchronized void close() throws IOException { > try { > // if the user has already closed the stream, this will throw a > STREAM_CLOSED exception > // if an exception was thrown by any thread in the > threadExecutor, realize it now > this.checkStreamState(); > ... > } > private void checkStreamState() throws IOException { > if (this.lastError != null) { > throw this.lastError; > } > } > {code} > So here both the try and finally blocks get Exceptions, and Java uses > Throwable#addSuppressed(). > Within this method, if both Exceptions are the same object, it throws back > IllegalArgumentException > {code} > public final synchronized void addSuppressed(Throwable exception) { > if (exception == this) > throw new > IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception); > > } > {code}
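The interaction described above can be reproduced in isolation. The sketch below is a hypothetical stand-in (FailingStream is invented; it is not the Azure SDK's BlobOutputStreamInternal) whose flush() and close() rethrow the same remembered exception object, closed through try-with-resources the way FilterOutputStream.close() does it:

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {

    // Mimics a stream with a remembered lastError: both flush() and close()
    // rethrow the identical Throwable instance.
    static class FailingStream extends OutputStream {
        final IOException lastError = new IOException("storage error");

        @Override public void write(int b) { /* no-op */ }
        @Override public void flush() throws IOException { throw lastError; }
        @Override public void close() throws IOException { throw lastError; }
    }

    static String closeAndReport(FailingStream out) {
        try {
            // Same shape as FilterOutputStream.close(): flush inside
            // try-with-resources, close happens implicitly afterwards.
            try (OutputStream ostream = out) {
                ostream.flush();
            }
            return "no exception";
        } catch (Throwable t) {
            // try-with-resources calls primary.addSuppressed(closeException);
            // since both are the same object, Throwable.addSuppressed throws
            // IllegalArgumentException instead of surfacing the IOException.
            return t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(closeAndReport(new FailingStream()));
    }
}
```

The caller thus sees IllegalArgumentException rather than an IOException, which matches the behavior reported against the RegionServer abort path.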
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17099: --- Attachment: HADOOP-17099.005.patch
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157624#comment-17157624 ] Jim Brennan commented on HADOOP-17127: -- [~xkrogen] I have verified that the trunk patch applies for branch-3.3, and I have added patches for branch-3.2, branch-3.1, and branch-2.10. Can we get this pulled back to those branches? > Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime > -- > > Key: HADOOP-17127 > URL: https://issues.apache.org/jira/browse/HADOOP-17127 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Fix For: 3.4.0 > > Attachments: HADOOP-17127-branch-2.10.001.patch, > HADOOP-17127-branch-3.1.001.patch, HADOOP-17127-branch-3.2.001.patch, > HADOOP-17127.001.patch, HADOOP-17127.002.patch > > > While making an internal change to use {{TimeUnit.MICROSECONDS}} instead of > {{TimeUnit.MILLISECONDS}} for rpc details, we found that we also had to > modify this code in DecayRpcScheduler.addResponseTime() to initialize > {{queueTime}} and {{processingTime}} with the correct units. > {noformat} > long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS); > long processingTime = details.get(Timing.PROCESSING, > TimeUnit.MILLISECONDS); > {noformat} > If we change these to use {{RpcMetrics.TIMEUNIT}} it is simpler. > We also found one test case in TestRPC that was assuming the units were > milliseconds.
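A minimal sketch of the unit-handling idea behind the patch. {{RpcMetrics.TIMEUNIT}} is the real Hadoop constant; the TIMEUNIT field and the get() helper below are self-contained stand-ins invented for illustration, with times stored internally in nanoseconds:

```java
import java.util.concurrent.TimeUnit;

public class RpcTimingDemo {

    // One shared unit constant, analogous to RpcMetrics.TIMEUNIT, instead of
    // hard-coding TimeUnit.MILLISECONDS at each call site.
    static final TimeUnit TIMEUNIT = TimeUnit.MICROSECONDS;

    // Stand-in for a details.get(timing, unit) accessor: convert the stored
    // nanosecond value to whatever unit the caller asks for.
    static long get(long elapsedNanos, TimeUnit unit) {
        return unit.convert(elapsedNanos, TimeUnit.NANOSECONDS);
    }

    public static void main(String[] args) {
        long queueNanos = 5_000_000L;      // 5 ms spent queued
        long processingNanos = 2_500_000L; // 2.5 ms spent processing

        // Reading in the shared unit keeps all consumers consistent even if
        // that unit is later changed in one place.
        long queueTime = get(queueNanos, TIMEUNIT);           // 5000 us
        long processingTime = get(processingNanos, TIMEUNIT); // 2500 us
        System.out.println(queueTime + " " + processingTime);
    }
}
```

The point of the change is exactly this: consumers read elapsed time in one shared unit constant, so switching the RPC metrics from milliseconds to microseconds touches a single definition.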
[jira] [Updated] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-17127: - Attachment: HADOOP-17127-branch-2.10.001.patch
[jira] [Updated] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-17127: - Attachment: HADOOP-17127-branch-3.1.001.patch
[jira] [Updated] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-17127: - Attachment: HADOOP-17127-branch-3.2.001.patch
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157572#comment-17157572 ] Jim Brennan commented on HADOOP-17127: -- Thanks [~xkrogen]! I am testing out patches for the other branches and I will put them up once they are ready.
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157567#comment-17157567 ] Hudson commented on HADOOP-17127: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18435 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18435/]) HADOOP-17127. Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and (xkrogen: rev 317fe4584a51cfe553e4098d48170cd2898b9732)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcScheduler.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
[jira] [Comment Edited] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157562#comment-17157562 ] Jonathan Turner Eagles edited comment on HADOOP-17099 at 7/14/20, 6:26 PM: --- {code:title= MetricsRecords.java} I'm not that familiar with the full Stream feature set, but could the new helper function getFirstFromIterableOrDefault be eliminated by using the Stream.findFirst API? It seems that it could prevent full evaluation of the list with this short-circuit method. {code} was (Author: jeagles): {noformat:title= MetricsRecords.java} I'm not that familiar with the full Stream feature set, but could the new helper function getFirstFromIterableOrDefault be eliminated by using the Stream.findFirst API? It seems that it could prevent full evaluation of the list with this short-circuit method. {noformat} > Replace Guava Predicate with Java8+ Predicate > - > > Key: HADOOP-17099 > URL: https://issues.apache.org/jira/browse/HADOOP-17099 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Minor > Attachments: HADOOP-17099.004.patch > > > {{com.google.common.base.Predicate}} can be replaced with > {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward: > {code:java} > Targets > Occurrences of 'com.google.common.base.Predicate' in project with mask > '*.java' > Found Occurrences (9 usages found) > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > CombinedHostFileManager.java (1 usage found) > 43 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode (1 usage found) > NameNodeResourceChecker.java (1 usage found) > 38 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found) > Snapshot.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.metrics2.impl (2 usages found) > MetricsRecords.java (1 usage found) > 21 import com.google.common.base.Predicate; > TestMetricsSystemImpl.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation (1 usage found) > AggregatedLogFormat.java (1 usage found) > 77 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found) > LogAggregationFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage > found) > LogAggregationIndexedFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation > (1 usage found) > AppLogAggregatorImpl.java (1 usage found) > 75 import com.google.common.base.Predicate; > {code}
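The mechanical part of the migration listed above is swapping the import and changing Guava's apply(t) to test(t). The illustrative class below (names and data are invented, not taken from the patch) shows why the java.util variant is preferable: it composes with and/or/negate and feeds Stream.filter directly, with no Guava Predicates.* helpers or adapters.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    // java.util.function.Predicate composes via default methods and plugs
    // straight into the Stream API; a Guava Predicate would need an adapter.
    public static List<String> guavaPredicateImports(List<String> lines) {
        Predicate<String> isImport = line -> line.startsWith("import ");
        Predicate<String> isGuava = line -> line.contains("com.google.common.base.Predicate");
        return lines.stream()
                .filter(isImport.and(isGuava)) // composition, no Predicates.and(...) helper
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "import com.google.common.base.Predicate;",
                "import java.util.function.Predicate;",
                "public class CombinedHostFileManager {}");
        System.out.println(guavaPredicateImports(lines));
        // [import com.google.common.base.Predicate;]
    }
}
```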
[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157562#comment-17157562 ] Jonathan Turner Eagles commented on HADOOP-17099: - {noformat:title= MetricsRecords.java} I'm not that familiar with the full Stream feature set, but could the new helper function getFirstFromIterableOrDefault be eliminated by using the Stream.findFirst API? It seems that it could prevent full evaluation of the list with this short-circuit method. {noformat}
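The Stream.findFirst suggestion in the comment above can be sketched as follows. The helper name and the String "records" are stand-ins for the actual MetricsRecords types, so treat this as illustrative only; the AtomicInteger counter demonstrates the point being made, that findFirst is a short-circuiting terminal operation, so upstream stages run only until the first match is produced.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a getFirstFromIterableOrDefault-style helper expressed
// with Stream.findFirst, which stops pulling elements at the first match.
public class FindFirstSketch {
    public static String firstStartingWith(List<String> records, String prefix,
                                           String defaultValue, AtomicInteger evaluated) {
        return records.stream()
                .peek(r -> evaluated.incrementAndGet()) // counts elements actually pulled
                .filter(r -> r.startsWith(prefix))
                .findFirst()
                .orElse(defaultValue);
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("rpc.queueTime", "rpc.processingTime", "jvm.gcTime");
        AtomicInteger evaluated = new AtomicInteger();
        String hit = firstStartingWith(records, "rpc.p", "none", evaluated);
        // Only two elements are evaluated; the third is never touched.
        System.out.println(hit + " after " + evaluated.get() + " evaluations");
    }
}
```

A hand-written loop over the Iterable short-circuits just as well; the stream version mainly saves the helper method, which is the trade-off the comment raises.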
[jira] [Updated] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HADOOP-17127: - Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157561#comment-17157561 ] Erik Krogen commented on HADOOP-17127: -- LGTM, thanks [~Jim_Brennan]! This looks good from a consistency standpoint and has no user-facing impact. I just committed this to {{trunk}}.
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: --- Attachment: (was: HADOOP-17101.007.patch) > Replace Guava Function with Java8+ Function > --- > > Key: HADOOP-17101 > URL: https://issues.apache.org/jira/browse/HADOOP-17101 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17101.005.patch, HADOOP-17101.006.patch, > HADOOP-17101.008.patch > > > {code:java} > Targets > Occurrences of 'com.google.common.base.Function' > Found Occurrences (7 usages found) > hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff (1 usage found) > Apache_Hadoop_HDFS_2.6.0.xml (1 usage found) > 13603 type="com.google.common.base.Function" > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > HostSet.java (1 usage found) > 20 import com.google.common.base.Function; > org.apache.hadoop.hdfs.server.datanode.checker (1 usage found) > AbstractFuture.java (1 usage found) > 58 * (ListenableFuture, com.google.common.base.Function) > Futures.transform} > org.apache.hadoop.hdfs.server.namenode.ha (1 usage found) > HATestUtil.java (1 usage found) > 40 import com.google.common.base.Function; > org.apache.hadoop.hdfs.server.protocol (1 usage found) > RemoteEditLog.java (1 usage found) > 20 import com.google.common.base.Function; > org.apache.hadoop.mapreduce.lib.input (1 usage found) > TestFileInputFormat.java (1 usage found) > 58 import com.google.common.base.Function; > org.apache.hadoop.yarn.api.protocolrecords.impl.pb (1 usage found) > GetApplicationsRequestPBImpl.java (1 usage found) > 38 import com.google.common.base.Function; > {code}
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: --- Attachment: HADOOP-17101.008.patch
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157557#comment-17157557 ] Ahmed Hussein commented on HADOOP-17101: |TestHDFSContractMultipartUploader|Broken on trunk| |TestCacheDirectives|Fixed| |TestCheckpointsWithSnapshots|Fixed| |TestDataNodeMXBean|Fixed| |TestEditLog|Fixed| |TestFSImage|Fixed| |TestNameEditsConfigs|Fixed| |TestPersistentStoragePolicySatisfier|Fixed| |TestRollingUpgrade|Fixed| |TestSecondaryNameNodeUpgrade|Fixed| |TestStartup|Fixed| |TestStorageRestore|Fixed| |TestUnderReplicatedBlocks|Fixed| |TestBlockTokenWithDFSStriped|Fixed| |TestBPOfferService|Flaky| |TestExternalStoragePolicySatisfier|Flaky| |TestFileChecksum|Flaky| |TestFileCreation|Flaky| |TestFsDatasetImpl|Flaky| |TestGetFileChecksum|Flaky| |TestGroupsCaching|Flaky| |TestJournalNodeSync|Flaky| |TestNameNodeRetryCacheMetrics|Flaky| |TestPipelineFailover|Flaky| |TestSafeModeWithStripedFileWithRandomECPolicy|Flaky| |TestWebHDFS|Flaky|
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: --- Attachment: HADOOP-17101.007.patch
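The HADOOP-17101 migration has the same shape as the Predicate one: apply(t) keeps its name in java.util.function.Function, which additionally offers compose/andThen and feeds Stream.map with no adapter. The host/port rendering below is invented, loosely in the spirit of the HostSet.toString() change discussed in this thread, not the real code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative before/after: a com.google.common.base.Function used for
// rendering becomes a java.util.function.Function that composes and streams.
public class FunctionMigration {
    public static String joinPorts(List<Integer> ports) {
        Function<Integer, String> render = p -> "host:" + p;                   // was a Guava Function
        Function<Integer, String> quoted = render.andThen(s -> "<" + s + ">"); // composition via andThen
        return ports.stream().map(quoted).collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(joinPorts(Arrays.asList(8020, 9870))); // <host:8020>, <host:9870>
    }
}
```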
[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
hadoop-yetus commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658307095 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 19s | trunk passed | | +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 26s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 14m 44s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 30s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 28s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 55s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 53s | trunk passed | | -0 :warning: | patch | 1m 12s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 25s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-azure: The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) | | +1 :green_heart: | mvnsite | 0m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 8s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 60m 50s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2123 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 89297a2d6123 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4647a604301 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/7/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/7/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/7/testReport/ | | Max. process+thread count | 421 (vs. ulimit of 5500) | | modules | C: ha
[GitHub] [hadoop] hadoop-yetus commented on pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.
hadoop-yetus commented on pull request #2138: URL: https://github.com/apache/hadoop/pull/2138#issuecomment-658299456 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 21m 45s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 6s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 14s | trunk passed | | +1 :green_heart: | compile | 3m 54s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 3m 31s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 59s | trunk passed | | +1 :green_heart: | mvnsite | 2m 12s | trunk passed | | +1 :green_heart: | shadedclient | 17m 7s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 40s | hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 37s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 23s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 2m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 8s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 53s | the patch passed | | +1 :green_heart: | compile | 3m 45s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 3m 45s | the patch passed | | +1 :green_heart: | compile | 3m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 3m 27s | the patch passed | | -0 :warning: | checkstyle | 0m 51s | hadoop-hdfs-project: The patch generated 2 new + 35 unchanged - 0 fixed = 37 total (was 35) | | +1 :green_heart: | mvnsite | 1m 57s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 43s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 34s | hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 32s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 15s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 5m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 1s | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 94m 41s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | The patch does not generate ASF License warnings. 
| | | | 209m 46s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.TestGetFileChecksum | | | hadoop.hdfs.TestStripedFileAppend | | | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2138/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2138 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux ee8c074113cd 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / bdce75d737b | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Buil
[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
hadoop-yetus commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658296547 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 24s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 30s | trunk passed | | +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 29s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 33s | trunk passed | | +1 :green_heart: | shadedclient | 16m 56s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 56s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 54s | trunk passed | | -0 :warning: | patch | 1m 11s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 30s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 30s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 44s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 22s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 68m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2123 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 6bde8d0399ff 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4647a604301 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/6/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/6/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/6/testReport/ | | Max. process+thread count | 311 (vs. ulimit of 5500) | | modules | C:
[GitHub] [hadoop] bilaharith commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
bilaharith commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658292508 **Driver test results using accounts in Central India** mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify **Account with HNS Support** [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 74 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 **Account without HNS support** [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 248 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157504#comment-17157504 ]

Ahmed Hussein commented on HADOOP-17101: Thanks [~jeagles] for your feedback. I changed the implementation of {{HostSet.toString()}} and uploaded a new patch.

> Replace Guava Function with Java8+ Function
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Ahmed Hussein
> Assignee: Ahmed Hussein
> Priority: Major
> Attachments: HADOOP-17101.005.patch, HADOOP-17101.006.patch
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences (7 usages found)
>     hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff (1 usage found)
>         Apache_Hadoop_HDFS_2.6.0.xml (1 usage found)
>             13603 type="com.google.common.base.Function"
>     org.apache.hadoop.hdfs.server.blockmanagement (1 usage found)
>         HostSet.java (1 usage found)
>             20 import com.google.common.base.Function;
>     org.apache.hadoop.hdfs.server.datanode.checker (1 usage found)
>         AbstractFuture.java (1 usage found)
>             58 * (ListenableFuture, com.google.common.base.Function) Futures.transform}
>     org.apache.hadoop.hdfs.server.namenode.ha (1 usage found)
>         HATestUtil.java (1 usage found)
>             40 import com.google.common.base.Function;
>     org.apache.hadoop.hdfs.server.protocol (1 usage found)
>         RemoteEditLog.java (1 usage found)
>             20 import com.google.common.base.Function;
>     org.apache.hadoop.mapreduce.lib.input (1 usage found)
>         TestFileInputFormat.java (1 usage found)
>             58 import com.google.common.base.Function;
>     org.apache.hadoop.yarn.api.protocolrecords.impl.pb (1 usage found)
>         GetApplicationsRequestPBImpl.java (1 usage found)
>             38 import com.google.common.base.Function;
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
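For readers following the sub-task, the mechanical shape of the migration the occurrence list above points at can be sketched as below. The class and helper names are hypothetical illustrations, not code from the Hadoop tree:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FunctionMigration {
    // Before (Guava): an anonymous class implementing
    //   com.google.common.base.Function<String, Integer> with apply().
    // After (Java 8+): a lambda satisfying java.util.function.Function,
    // which declares the same apply() method, so call sites are unchanged.
    static final Function<String, Integer> LENGTH = s -> s.length();

    // Hypothetical helper showing the migrated interface in use with streams.
    static List<Integer> lengths(List<String> names) {
        return names.stream().map(LENGTH).collect(Collectors.toList());
    }
}
```

Because both interfaces expose `apply`, most call sites compile after only the import swap; anonymous classes can additionally be collapsed to lambdas as above.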
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: Attachment: HADOOP-17101.006.patch
[GitHub] [hadoop] bilaharith commented on a change in pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
bilaharith commented on a change in pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#discussion_r454469979

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java

@@ -77,11 +77,14 @@ private AzureADAuthenticator() {
  * @param clientId the client ID (GUID) of the client web app
  * obtained from Azure Active Directory configuration
  * @param clientSecret the secret key of the client web app
+ * @param tokenFetchRetryPolicy retry policy to be used for token fetch AAD
+ * calls.
  * @return {@link AzureADToken} obtained using the creds
  * @throws IOException throws IOException if there is a failure in connecting to Azure AD
  */
 public static AzureADToken getTokenUsingClientCreds(String authEndpoint,
-    String clientId, String clientSecret)
+    String clientId,
+    String clientSecret, ExponentialRetryPolicy tokenFetchRetryPolicy)

Review comment: Done

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java

@@ -275,21 +287,24 @@ public UnexpectedResponseException(final int httpErrorCode, }

 private static AzureADToken getTokenCall(String authEndpoint, String body,
-    Hashtable headers, String httpMethod) throws IOException {
-  return getTokenCall(authEndpoint, body, headers, httpMethod, false);
+    Hashtable headers, String httpMethod,
+    ExponentialRetryPolicy tokenFetchRetryPolicy) throws IOException {
+  return getTokenCall(authEndpoint, body, headers, httpMethod, false,
+      tokenFetchRetryPolicy);
 }

 private static AzureADToken getTokenCall(String authEndpoint, String body,
-    Hashtable headers, String httpMethod, boolean isMsi)
+    Hashtable headers, String httpMethod, boolean isMsi,
+    ExponentialRetryPolicy tokenFetchRetryPolicy)
     throws IOException {
   AzureADToken token = null;
-  ExponentialRetryPolicy retryPolicy = new ExponentialRetryPolicy(3, 0, 1000, 2);
   int httperror = 0;
   IOException ex = null;
   boolean succeeded = false;
   int retryCount = 0;
+  boolean shouldRetry;
+  LOG.debug("First execution of REST operation getTokenSingleCall");

Review comment: Done

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java (same getTokenCall hunk as above)

Review comment: Done
[GitHub] [hadoop] bilaharith commented on a change in pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
bilaharith commented on a change in pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#discussion_r454469249

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java (same getTokenCall hunk as quoted above)

Review comment: Done
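For context on the diff under review: moving the hard-coded `new ExponentialRetryPolicy(3, 0, 1000, 2)` out of `getTokenCall` and into a `tokenFetchRetryPolicy` parameter lets callers configure token-fetch retries. A minimal sketch of such a policy follows; the class name `RetryPolicySketch` and the reading of the four constructor arguments as (maxRetries, minBackoffMs, maxBackoffMs, multiplier) are assumptions for illustration, not the ABFS implementation:

```java
public class RetryPolicySketch {
    private final int maxRetries;
    private final long minBackoffMs;
    private final long maxBackoffMs;
    private final int multiplier;

    // Hypothetical reading of ExponentialRetryPolicy(3, 0, 1000, 2):
    // up to 3 retries, backoff between 0 ms and 1000 ms, doubling each attempt.
    public RetryPolicySketch(int maxRetries, long minBackoffMs,
                             long maxBackoffMs, int multiplier) {
        this.maxRetries = maxRetries;
        this.minBackoffMs = minBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
        this.multiplier = multiplier;
    }

    // Retry while attempts remain and the status looks transient
    // (no response, throttling, or server-side 5xx).
    public boolean shouldRetry(int retryCount, int httpStatus) {
        boolean transientStatus =
            httpStatus == 0 || httpStatus == 429 || httpStatus >= 500;
        return retryCount < maxRetries && transientStatus;
    }

    // Exponential backoff, capped at maxBackoffMs.
    public long backoffMs(int retryCount) {
        long delay = minBackoffMs + (long) Math.pow(multiplier, retryCount) * 100;
        return Math.min(delay, maxBackoffMs);
    }
}
```

The design point of the PR is exactly this inversion: the token-fetch loop asks an injected policy `shouldRetry(...)` instead of constructing its own, so tests and callers can substitute a tighter or looser policy.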
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157475#comment-17157475 ]

Jim Brennan commented on HADOOP-17127: [~cgregori], [~xkrogen] can you please review?

> Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
> Key: HADOOP-17127
> URL: https://issues.apache.org/jira/browse/HADOOP-17127
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Jim Brennan
> Assignee: Jim Brennan
> Priority: Minor
> Attachments: HADOOP-17127.001.patch, HADOOP-17127.002.patch
>
> While making an internal change to use {{TimeUnit.MICROSECONDS}} instead of {{TimeUnit.MILLISECONDS}} for rpc details, we found that we also had to modify this code in DecayRpcScheduler.addResponseTime() to initialize {{queueTime}} and {{processingTime}} with the correct units.
> {noformat}
> long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
> long processingTime = details.get(Timing.PROCESSING, TimeUnit.MILLISECONDS);
> {noformat}
> If we change these to use {{RpcMetrics.TIMEUNIT}} it is simpler.
> We also found one test case in TestRPC that was assuming the units were milliseconds.
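The simplification the issue describes can be sketched as below. `RpcTimingSketch` and its `get` helper are hypothetical stand-ins for `ProcessingDetails` and `RpcMetrics.TIMEUNIT`; the choice of `MICROSECONDS` is an assumption for illustration. Reading timings through the shared unit constant means changing the metrics unit in one place cannot silently disagree with a hard-coded `TimeUnit.MILLISECONDS` at a call site:

```java
import java.util.concurrent.TimeUnit;

public class RpcTimingSketch {
    // Single source of truth for the metrics unit
    // (stand-in for RpcMetrics.TIMEUNIT).
    public static final TimeUnit TIMEUNIT = TimeUnit.MICROSECONDS;

    // Stand-in for ProcessingDetails.get(): durations are stored in
    // nanoseconds internally and converted on read to the requested unit.
    public static long get(long nanos, TimeUnit unit) {
        return unit.convert(nanos, TimeUnit.NANOSECONDS);
    }
}
```

With this pattern, `details.get(Timing.QUEUE, RpcMetrics.TIMEUNIT)` always agrees with the unit the metrics layer expects, which is the fix the patch makes in `DecayRpcScheduler.addResponseTime()`.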
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157471#comment-17157471 ]

Hadoop QA commented on HADOOP-17127: +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 36s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || trunk Compile Tests || || ||
| +1 | mvninstall | 21m 59s | trunk passed |
| +1 | compile | 18m 10s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 24s | trunk passed |
| +1 | shadedclient | 18m 10s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 56s | trunk passed |
| 0 | spotbugs | 2m 16s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 14s | trunk passed |
|| || Patch Compile Tests || || ||
| +1 | mvninstall | 0m 52s | the patch passed |
| +1 | compile | 17m 23s | the patch passed |
| +1 | javac | 17m 23s | the patch passed |
| +1 | checkstyle | 0m 45s | the patch passed |
| +1 | mvnsite | 1m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 56s | the patch passed |
| +1 | findbugs | 2m 21s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 9m 46s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 114m 52s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17041/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17127 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007628/HADOOP-17127.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a329c7ab1ba3 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / bdce75d737b |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/17041/testReport/ |
| Max. process+thread count | 3170 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https:
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157465#comment-17157465 ] Jonathan Turner Eagles commented on HADOOP-17101: - Thanks for the updated patch, [~ahussein]. {code:title=HostSet.java} String sep = ""; while (iter.hasNext()) { InetSocketAddress addr = iter.next(); sb.append(sep + addr.getAddress().getHostAddress() + ":" + addr.getPort()); sep = ","; } {code} I wasn't clear in my comment from before regarding unrolling this loop, but this is very close. By unrolling this loop, we can add each substring individually and prevent intermediate string creation. Also making the separator and ":" a char (not string) is slightly more efficient in general, but probably won't make a huge difference in this code. {code:title=suggestion} sb.append(sep); sb.append(addr.getAddress().getHostAddress()); // Notice single quote below for slightly more efficient char add sb.append(':'); sb.append(addr.getPort()); {code} Everything else looks great. 
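Assembled into a complete method, the reviewer's suggestion looks roughly like this. `HostSetSketch` is a hypothetical stand-in for `HostSet`, which wraps its address collection differently in the real code; the loop body follows the suggestion: separate `append` calls avoid the intermediate `String` that `sep + host + ":" + port` would allocate, and `':'` as a char skips a one-character `String`:

```java
import java.net.InetSocketAddress;
import java.util.List;

public class HostSetSketch {
    // Render addresses as "HostSet(host1:port1,host2:port2)"
    // using unrolled appends instead of string concatenation.
    public static String toString(List<InetSocketAddress> addrs) {
        StringBuilder sb = new StringBuilder("HostSet(");
        String sep = "";
        for (InetSocketAddress addr : addrs) {
            sb.append(sep);
            sb.append(addr.getAddress().getHostAddress());
            sb.append(':');  // char append: no temporary String created
            sb.append(addr.getPort());
            sep = ",";
        }
        return sb.append(')').toString();
    }
}
```

The concatenated form `sb.append(sep + host + ":" + port)` builds a throwaway `StringBuilder` and `String` per element before appending; the unrolled form writes each piece straight into the one shared buffer.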
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: Attachment: (was: HADOOP-17101.001.patch)
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: Attachment: (was: HADOOP-17101.002.patch)
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: Attachment: (was: HADOOP-17101.003.patch)
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: Attachment: (was: HADOOP-17101.004.patch)
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17099: --- Attachment: (was: HADOOP-17099.001.patch) > Replace Guava Predicate with Java8+ Predicate > - > > Key: HADOOP-17099 > URL: https://issues.apache.org/jira/browse/HADOOP-17099 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Minor > Attachments: HADOOP-17099.004.patch > > > {{com.google.common.base.Predicate}} can be replaced with > {{java.util.function.Predicate}}. > The change involving 9 occurrences is straightforward: > {code:java} > Targets > Occurrences of 'com.google.common.base.Predicate' in project with mask > '*.java' > Found Occurrences (9 usages found) > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > CombinedHostFileManager.java (1 usage found) > 43 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode (1 usage found) > NameNodeResourceChecker.java (1 usage found) > 38 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found) > Snapshot.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.metrics2.impl (2 usages found) > MetricsRecords.java (1 usage found) > 21 import com.google.common.base.Predicate; > TestMetricsSystemImpl.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation (1 usage found) > AggregatedLogFormat.java (1 usage found) > 77 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found) > LogAggregationFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage > found) > LogAggregationIndexedFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation (1 usage found) > AppLogAggregatorImpl.java (1 usage found) > 75 import com.google.common.base.Predicate; > {code}
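As with the Function sub-task above, the change is mechanical; a hedged before/after sketch with hypothetical names, not code from the listed files:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    // Before (Guava): com.google.common.base.Predicate<String> exposes apply().
    // After (Java 8+): java.util.function.Predicate<String> exposes test(),
    // plus default combinators (and, or, negate) that Guava's interface lacks,
    // so call sites need the method rename as well as the import swap.
    static final Predicate<String> NON_EMPTY = s -> !s.isEmpty();

    // Hypothetical helper showing the migrated interface driving a filter.
    static List<String> keep(List<String> in, Predicate<String> p) {
        return in.stream().filter(p).collect(Collectors.toList());
    }
}
```

Unlike the Function migration, this one is not purely an import swap: `apply(x)` becomes `test(x)` at each call site, which is why the nine occurrences each need a small edit.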
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17099: Attachment: (was: HADOOP-17099.003.patch)
[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17099: --- Attachment: (was: HADOOP-17099.002.patch) > Replace Guava Predicate with Java8+ Predicate > - > > Key: HADOOP-17099 > URL: https://issues.apache.org/jira/browse/HADOOP-17099 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Minor > Attachments: HADOOP-17099.004.patch > > > {{com.google.common.base.Predicate}} can be replaced with > {{java.util.function.Predicate}}. > The change involving 9 occurrences is straightforward: > {code:java} > Targets > Occurrences of 'com.google.common.base.Predicate' in project with mask > '*.java' > Found Occurrences (9 usages found) > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > CombinedHostFileManager.java (1 usage found) > 43 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode (1 usage found) > NameNodeResourceChecker.java (1 usage found) > 38 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found) > Snapshot.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.metrics2.impl (2 usages found) > MetricsRecords.java (1 usage found) > 21 import com.google.common.base.Predicate; > TestMetricsSystemImpl.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation (1 usage found) > AggregatedLogFormat.java (1 usage found) > 77 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found) > LogAggregationFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage > found) > LogAggregationIndexedFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation > (1 usage found) > AppLogAggregatorImpl.java (1 usage found) > 75 import com.google.common.base.Predicate; > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
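The replacement is mechanical at most call sites, but note one behavioural detail: Guava's `Predicate` exposes `apply()` while `java.util.function.Predicate` exposes `test()`, so any direct invocation must be updated along with the import. A minimal sketch of the migration (the `filterHosts` helper and host names are hypothetical, not the actual Hadoop call sites):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    // Before: com.google.common.base.Predicate<String>, invoked via apply().
    // After:  java.util.function.Predicate<String>, invoked via test(),
    //         and usable directly as a lambda or in Stream.filter().
    static List<String> filterHosts(List<String> hosts, Predicate<String> included) {
        return hosts.stream().filter(included).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("dn1", "dn2", "excluded-dn3");
        // A Guava Predicate's anonymous-class apply() body becomes a lambda.
        List<String> live = filterHosts(hosts, h -> !h.startsWith("excluded"));
        System.out.println(live); // prints [dn1, dn2]
    }
}
```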
[jira] [Commented] (HADOOP-17022) Tune listFiles() api.
[ https://issues.apache.org/jira/browse/HADOOP-17022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157433#comment-17157433 ] Hudson commented on HADOOP-17022: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18434 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18434/]) HADOOP-17022. Tune S3AFileSystem.listFiles() API. (stevel: rev 4647a60430136aa4abc18d5112b93a8b927dbd1f) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java > Tune listFiles() api. > - > > Key: HADOOP-17022 > URL: https://issues.apache.org/jira/browse/HADOOP-17022 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > > A similar optimisation to the one done for listLocatedStatus() in > https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the listFiles() > and listStatus() APIs as well. > This is going to reduce the number of remote calls in the case of directory > listing. > > CC [~ste...@apache.org] [~shwethags]

[jira] [Resolved] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
[ https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16998. - Fix Version/s: 3.3.1 Resolution: Fixed Fixed in Hadoop 3.3.1 > WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException > -- > > Key: HADOOP-16998 > URL: https://issues.apache.org/jira/browse/HADOOP-16998 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Fix For: 3.3.1 > > Attachments: HADOOP-16998.patch > > > During HFile create, when close() is called on the OutputStream at the end, there is > some pending data still to be flushed. When this flush happens, an > exception is thrown back from storage. The Azure-storage SDK layer will throw > back an IOE. (Even if it is a StorageException thrown from storage, the SDK > converts it to an IOE.) But at HBase, we end up getting an IllegalArgumentException, > which causes the RS to get aborted. If we got back an IOE, the flush would be > retried instead of aborting the RS. > The reason is this: > NativeAzureFsOutputStream uses the Azure-storage SDK's BlobOutputStreamInternal, > but the BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream, > which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream > calls close on SyncableDataOutputStream, and that uses the method below from > FilterOutputStream: > {code} > public void close() throws IOException { > try (OutputStream ostream = out) { > flush(); > } > } > {code} > Here the flush call causes an IOE to be thrown. The try-with-resources will then > issue a close call on ostream (which is an instance of BlobOutputStreamInternal). > When BlobOutputStreamInternal#close() is called, if an > exception has already occurred on that stream, it will throw back the same > exception: > {code} > public synchronized void close() throws IOException { > try { > // if the user has already closed the stream, this will throw a > STREAM_CLOSED exception > // if an exception was thrown by any thread in the > threadExecutor, realize it now > this.checkStreamState(); > ... > } > private void checkStreamState() throws IOException { > if (this.lastError != null) { > throw this.lastError; > } > } > {code} > So here both the try and finally blocks throw exceptions, and Java uses > Throwable#addSuppressed(). > Within this method, if both exceptions are the same object, it throws back an > IllegalArgumentException: > {code} > public final synchronized void addSuppressed(Throwable exception) { > if (exception == this) > throw new > IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception); > > } > {code}
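The failure mode described above is reproducible without Azure at all: any stream whose flush() and close() rethrow the same remembered exception object makes try-with-resources suppress an exception with itself. A minimal stdlib-only sketch (StickyErrorStream is a hypothetical stand-in for BlobOutputStreamInternal's lastError behaviour):

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
    // Mimics BlobOutputStreamInternal: remembers a single error and rethrows
    // the SAME exception object from both flush() and close().
    static class StickyErrorStream extends OutputStream {
        private final IOException lastError = new IOException("upload failed");
        @Override public void write(int b) {}
        @Override public void flush() throws IOException { throw lastError; }
        @Override public void close() throws IOException { throw lastError; }
    }

    public static String run() {
        try {
            // Same shape as FilterOutputStream.close(): flush() inside
            // try-with-resources, close() on exit. The compiler-generated
            // code calls primary.addSuppressed(closeException); since both
            // are the same object, that throws IllegalArgumentException.
            try (OutputStream ostream = new StickyErrorStream()) {
                ostream.flush();
            }
        } catch (IOException expected) {
            return "IOException";              // the behaviour HBase wants
        } catch (IllegalArgumentException e) {
            return "IllegalArgumentException"; // the self-suppression bug
        }
        return "none";
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints IllegalArgumentException
    }
}
```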
[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
hadoop-yetus commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658208537 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 29m 43s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 26m 0s | trunk passed | | +1 :green_heart: | compile | 0m 41s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 36s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 19m 12s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 32s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 25s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 0s | trunk passed | | -0 :warning: | patch | 1m 20s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 35s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 35s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 17m 30s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 5s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 1m 13s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. 
| | | | 104m 45s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.azure.TestClientThrottlingAnalyzer | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2123 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux d04a7a6370a8 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 48f90115b5e | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/5/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/5/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/5/testReport/ | | Max. process+thread count | 332 (vs. ulimit of
[GitHub] [hadoop] hadoop-yetus commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
hadoop-yetus commented on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-658200563 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 21s | trunk passed | | +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 32s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | trunk passed | | +1 :green_heart: | shadedclient | 14m 59s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 29s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 55s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 53s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 27s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 41s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 25s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 55s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 28s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 60m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2073 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b147b6dfa804 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 48f90115b5e | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/6/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/6/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/6/testReport/ | | Max. process+thread count | 414 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/6/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please
[jira] [Updated] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-17127: - Attachment: HADOOP-17127.002.patch > Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime > -- > > Key: HADOOP-17127 > URL: https://issues.apache.org/jira/browse/HADOOP-17127 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: HADOOP-17127.001.patch, HADOOP-17127.002.patch > > > While making an internal change to use {{TimeUnit.MICROSECONDS}} instead of > {{TimeUnit.MILLISECONDS}} for rpc details, we found that we also had to > modify this code in DecayRpcScheduler.addResponseTime() to initialize > {{queueTime}} and {{processingTime}} with the correct units. > {noformat} > long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS); > long processingTime = details.get(Timing.PROCESSING, > TimeUnit.MILLISECONDS); > {noformat} > If we change these to use {{RpcMetrics.TIMEUNIT}} it is simpler. > We also found one test case in TestRPC that was assuming the units were > milliseconds.
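The point of the change is to read timings in whatever unit the metrics layer is configured with, rather than hardcoding MILLISECONDS at each call site. A simplified, stdlib-only sketch of the idea (the Details class is a hypothetical stand-in for Hadoop's ProcessingDetails, not the real API):

```java
import java.util.concurrent.TimeUnit;

public class RpcTimingDemo {
    // Stand-in for RpcMetrics.TIMEUNIT: the single source of truth for units.
    static final TimeUnit TIMEUNIT = TimeUnit.MICROSECONDS;

    // Simplified stand-in for ProcessingDetails: stores nanoseconds
    // internally and converts to the requested unit on read.
    static class Details {
        private final long queueNanos;
        Details(long queueNanos) { this.queueNanos = queueNanos; }
        long get(TimeUnit unit) { return unit.convert(queueNanos, TimeUnit.NANOSECONDS); }
    }

    public static void main(String[] args) {
        Details details = new Details(TimeUnit.MILLISECONDS.toNanos(5));
        // Before: long queueTime = details.get(TimeUnit.MILLISECONDS); // unit drift
        // After:  read in the shared constant, so changing TIMEUNIT in one
        //         place propagates to every consumer consistently.
        long queueTime = details.get(TIMEUNIT);
        System.out.println(queueTime); // prints 5000 (microseconds)
    }
}
```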
[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
[ https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157397#comment-17157397 ] Jim Brennan commented on HADOOP-17127: -- I've submitted patch 002 to fix the checkstyle issue. > Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime > -- > > Key: HADOOP-17127 > URL: https://issues.apache.org/jira/browse/HADOOP-17127
[GitHub] [hadoop] steveloughran merged pull request #2032: HDFS-15371. Nonstandard characters exist in NameNode.java
steveloughran merged pull request #2032: URL: https://github.com/apache/hadoop/pull/2032 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran closed pull request #2002: HADOOP-17020. RawFileSystem could localize default block size to avoi…
steveloughran closed pull request #2002: URL: https://github.com/apache/hadoop/pull/2002
[GitHub] [hadoop] steveloughran closed pull request #1905: Merge pull request #1 from apache/trunk
steveloughran closed pull request #1905: URL: https://github.com/apache/hadoop/pull/1905
[GitHub] [hadoop] steveloughran commented on a change in pull request #1993: HADOOP-17021. Add concat fs command
steveloughran commented on a change in pull request #1993: URL: https://github.com/apache/hadoop/pull/1993#discussion_r454350102 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestFsShellConcat.java ## @@ -0,0 +1,152 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.fs.shell; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsShell; +import org.apache.hadoop.fs.LocalFileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.io.IOUtils; +import org.apache.hadoop.test.GenericTestUtils; +import org.junit.Before; +import org.junit.Test; +import org.mockito.Mockito; + +import java.io.ByteArrayOutputStream; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.IOException; +import java.io.PrintStream; +import java.net.URI; +import java.util.Random; + +import static org.mockito.ArgumentMatchers.any; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +/** + * Test Concat. 
+ */ +public class TestFsShellConcat { + + private static Configuration conf; + private static FsShell shell; + private static LocalFileSystem lfs; + private static Path testRootDir; + private static Path dstPath; + + @Before + public void before() throws IOException { +conf = new Configuration(); +shell = new FsShell(conf); +lfs = FileSystem.getLocal(conf); +testRootDir = lfs.makeQualified(new Path(GenericTestUtils.getTempPath( +"testFsShellCopy"))); + +if (lfs.exists(testRootDir)) { + lfs.delete(testRootDir, true); +} +lfs.mkdirs(testRootDir); +lfs.setWorkingDirectory(testRootDir); +dstPath = new Path(testRootDir, "dstFile"); +lfs.create(dstPath).close(); + +Random random = new Random(); +for (int i = 0; i < 10; i++) { + OutputStream out = lfs.create(new Path(testRootDir, "file-" + i)); + out.write(random.nextInt()); + out.close(); +} + } + + @Test + public void testConcat() throws Exception { +FileSystem mockFs = Mockito.mock(FileSystem.class); +Mockito.doAnswer(invocation -> { + Object[] args = invocation.getArguments(); + Path target = (Path)args[0]; + Path[] src = (Path[]) args[1]; + mockConcat(target, src); + return null; +}).when(mockFs).concat(any(Path.class), any(Path[].class)); +Concat.setTstFs(mockFs); +shellRun(0, "-concat", dstPath.toString(), testRootDir+"/file-*"); + +assertTrue(lfs.exists(dstPath)); Review comment: use the relevant ContractTestUtils assertion here, for better error reporting. ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestFsShellConcat.java ## @@ -0,0 +1,152 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.fs.shell; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsShell; +import org.apache.hadoop.fs.LocalFileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.io.IOUtils; +import org.apache.hadoop.test.GenericTestUtils; +import org.junit.Before; +import org.junit.Test; +import org.mockito.Mockito; + +import java.io.ByteArrayOutputStream; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.IOException; +import j
[jira] [Commented] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy
[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157363#comment-17157363 ] Swaminathan Balachandran commented on HADOOP-17122: --- [~mahadev] Please take a look at this. > Bug in preserving Directory Attributes in DistCp with Atomic Copy > - > > Key: HADOOP-17122 > URL: https://issues.apache.org/jira/browse/HADOOP-17122 > Project: Hadoop Common > Issue Type: Bug >Reporter: Swaminathan Balachandran >Priority: Major > Attachments: HADOOP-17122.001.patch, Screenshot 2020-07-11 at > 10.26.30 AM.png > > > Description: > In the case of an atomic copy, the copied data is committed, and after that the > preserve-directory-attributes step runs. Preserving directory attributes is done over > the work path and not the final path. I have fixed the base directory to point to the > final path.
[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
[ https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157359#comment-17157359 ] Hudson commented on HADOOP-16998: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18432 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18432/]) HADOOP-16998. WASB : NativeAzureFsOutputStream#close() throwing (github: rev 380e0f4506a818d6337271ae6d996927f70b601b) * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SyncableDataOutputStream.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestSyncableDataOutputStream.java > WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException > -- > > Key: HADOOP-16998 > URL: https://issues.apache.org/jira/browse/HADOOP-16998
[GitHub] [hadoop] steveloughran commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
steveloughran commented on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-658169917 merged to trunk; rebuilding branch 3.3 with the patch cherrypicked.. not quite set up to run wasb tests there so just validating that the patch went in OK
[GitHub] [hadoop] jianghuazhu opened a new pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.
jianghuazhu opened a new pull request #2138: URL: https://github.com/apache/hadoop/pull/2138 …ET_SIZE. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] steveloughran merged pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
steveloughran merged pull request #2073: URL: https://github.com/apache/hadoop/pull/2073
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
hadoop-yetus removed a comment on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-647086228
[jira] [Updated] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
[ https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16998:
    Summary: WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
    (was: WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted)

> WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
> --
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/azure
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Priority: Major
> Attachments: HADOOP-16998.patch
>
> During HFile creation, when close() is called on the OutputStream at the end, there is still pending data to flush. When that flush happens, an exception is thrown back from Storage. The Azure-storage SDK layer throws back an IOE (even if the Storage throws a StorageException, the SDK converts it to an IOE). But at HBase we end up getting an IllegalArgumentException, which causes the RS to abort. If we got back an IOE, the flush would be retried instead of aborting the RS.
> The reason is this: NativeAzureFsOutputStream uses the Azure-storage SDK's BlobOutputStreamInternal, but BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream, which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream calls close on SyncableDataOutputStream, which uses this method from FilterOutputStream:
> {code}
> public void close() throws IOException {
>     try (OutputStream ostream = out) {
>         flush();
>     }
> }
> {code}
> Here the flush call throws an IOE. The implicit finally of the try-with-resources then calls close on ostream (an instance of BlobOutputStreamInternal). When BlobOutputStreamInternal#close() is called, if an exception has already occurred on that stream, it throws back the same exception object:
> {code}
> public synchronized void close() throws IOException {
>     try {
>         // if the user has already closed the stream, this will throw a STREAM_CLOSED exception
>         // if an exception was thrown by any thread in the threadExecutor, realize it now
>         this.checkStreamState();
>         ...
>     }
> }
>
> private void checkStreamState() throws IOException {
>     if (this.lastError != null) {
>         throw this.lastError;
>     }
> }
> {code}
> So both the try block and the implicit close throw the same exception object, and Java calls Throwable#addSuppressed(). Within that method, if both exceptions are the same object, it throws IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>     if (exception == this)
>         throw new IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>     ...
> }
> {code}

--
This message was sent by Atlassian Jira (v8.3.4#803005)
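The self-suppression failure described in the issue can be reproduced in isolation with nothing beyond the JDK. In this sketch, `FailingStream` is a made-up stand-in for BlobOutputStreamInternal: it remembers its first error and rethrows the same object from close(), so try-with-resources surfaces an IllegalArgumentException instead of the original IOException:

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {

    // Stand-in for BlobOutputStreamInternal: remembers the first error
    // and rethrows the SAME exception object from both flush() and close().
    static class FailingStream extends OutputStream {
        private IOException lastError;

        @Override public void write(int b) { }

        @Override public void flush() throws IOException {
            lastError = new IOException("upload failed");
            throw lastError;
        }

        @Override public void close() throws IOException {
            if (lastError != null) {
                throw lastError;  // same object thrown again, as in checkStreamState()
            }
        }
    }

    // Mirrors FilterOutputStream#close(): flush() inside try-with-resources.
    // The implicit close() rethrows the identical exception object, so the
    // compiler-generated addSuppressed() call throws IllegalArgumentException.
    static String run() {
        try (OutputStream out = new FailingStream()) {
            out.flush();
            return "no exception";
        } catch (Exception e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Running this prints `IllegalArgumentException` ("Self-suppression not permitted") rather than `IOException`, which is exactly the behavior that aborts the HBase RS.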
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1861: HADOOP-13230. Optionally retain directory markers
hadoop-yetus removed a comment on pull request #1861:
URL: https://github.com/apache/hadoop/pull/1861#issuecomment-628608819

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 33s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 0m 53s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 18m 50s | trunk passed |
| +1 :green_heart: | compile | 17m 4s | trunk passed |
| +1 :green_heart: | checkstyle | 2m 35s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 21s | trunk passed |
| +1 :green_heart: | shadedclient | 19m 28s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 46s | trunk passed |
| +0 :ok: | spotbugs | 1m 13s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 17s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 23s | the patch passed |
| +1 :green_heart: | compile | 16m 26s | the patch passed |
| +1 :green_heart: | javac | 16m 26s | the patch passed |
| -0 :warning: | checkstyle | 2m 40s | root: The patch generated 33 new + 64 unchanged - 1 fixed = 97 total (was 65) |
| +1 :green_heart: | mvnsite | 2m 23s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 14m 7s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 48s | the patch passed |
| -1 :x: | findbugs | 1m 22s | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 16s | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 1m 37s | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 120m 53s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-tools/hadoop-aws |
| | Dead store to leafMarkers in org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, int, boolean, StoreContext, OperationCallbacks) At MarkerTool.java:[line 187] |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/16/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1861 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 4672cf5466b0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 1958cb7c2be |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/16/artifact/out/diff-checkstyle-root.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/16/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/16/testReport/ |
| Max. process+thread count | 3022 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/16/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1861: HADOOP-13230. Optionally retain directory markers
hadoop-yetus removed a comment on pull request #1861:
URL: https://github.com/apache/hadoop/pull/1861#issuecomment-640569965

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 2m 23s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 28s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 17s | trunk passed |
| +1 :green_heart: | compile | 24m 29s | trunk passed |
| +1 :green_heart: | checkstyle | 3m 25s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 18s | trunk passed |
| +1 :green_heart: | shadedclient | 21m 50s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 30s | trunk passed |
| +0 :ok: | spotbugs | 1m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 17s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 25s | the patch passed |
| +1 :green_heart: | compile | 17m 32s | the patch passed |
| +1 :green_heart: | javac | 17m 32s | the patch passed |
| -0 :warning: | checkstyle | 2m 54s | root: The patch generated 33 new + 64 unchanged - 1 fixed = 97 total (was 65) |
| +1 :green_heart: | mvnsite | 2m 10s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 15m 17s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 27s | the patch passed |
| -1 :x: | findbugs | 1m 18s | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 32s | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 1m 22s | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 142m 43s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-tools/hadoop-aws |
| | Dead store to leafMarkers in org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, int, boolean, StoreContext, OperationCallbacks) At MarkerTool.java:[line 187] |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1861 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 190cc53f2ddb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a8610c15c49 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/diff-checkstyle-root.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/testReport/ |
| Max. process+thread count | 3298 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
[jira] [Created] (HADOOP-17128) double buffer memory size hard code
David Wei created HADOOP-17128:
--

Summary: double buffer memory size hard code
Key: HADOOP-17128
URL: https://issues.apache.org/jira/browse/HADOOP-17128
Project: Hadoop Common
Issue Type: Improvement
Components: hdfs-client
Affects Versions: 2.7.7
Environment: D:\hadoop-2.7.0-src\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\qjournal\client\QuorumJournalManager.java
Reporter: David Wei
Fix For: site

{code}
private int outputBufferCapacity = 512 * 1024;
{code}
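The issue is that the 512 KB buffer capacity above is hard-coded rather than configurable. Purely as an illustration of the requested change, a sketch of reading it from configuration instead (the property key `qjournal.output-buffer.capacity` is invented here, and `java.util.Properties` stands in for Hadoop's `Configuration` class):

```java
import java.util.Properties;

public class BufferCapacityConfig {

    // The value currently hard-coded in QuorumJournalManager.
    static final int DEFAULT_OUTPUT_BUFFER_CAPACITY = 512 * 1024;

    // Resolve the buffer capacity from configuration, falling back to the
    // existing default when the (hypothetical) key is not set.
    static int outputBufferCapacity(Properties conf) {
        String v = conf.getProperty("qjournal.output-buffer.capacity");
        return (v == null) ? DEFAULT_OUTPUT_BUFFER_CAPACITY : Integer.parseInt(v);
    }
}
```

In real Hadoop code this would be `conf.getInt(key, DEFAULT_OUTPUT_BUFFER_CAPACITY)` on an `org.apache.hadoop.conf.Configuration` object; the sketch only shows the default-plus-override shape.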
[GitHub] [hadoop] hadoop-yetus commented on pull request #2051: HDFS-15385 Upgrade boost library
hadoop-yetus commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658094213

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/hadoop-multibranch/job/PR-2051/14/console in case of problems.
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2051: HDFS-15385 Upgrade boost library
GauthamBanasandra commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658088447

@aajisaka I've now added `# hadolint ignore=DL3003` to suppress the hadolint warnings.
[GitHub] [hadoop] aajisaka commented on pull request #2051: HDFS-15385 Upgrade boost library
aajisaka commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658068100

Would you add `# hadolint ignore=DL3003` to ignore the following hadolint warnings?
https://builds.apache.org/job/hadoop-multibranch/job/PR-2051/13/artifact/out/diff-patch-hadolint.txt

```
dev-support/docker/Dockerfile:98 DL3003 Use WORKDIR to switch to a directory
dev-support/docker/Dockerfile_aarch64:101 DL3003 Use WORKDIR to switch to a directory
```
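For context, hadolint's inline ignore pragma goes on the line immediately before the `RUN` instruction it silences. A hypothetical sketch of the shape being requested (the URL and paths below are placeholders, not the actual contents of dev-support/docker/Dockerfile):

```dockerfile
# hadolint ignore=DL3003
RUN mkdir -p /opt/boost \
    && curl -L -o /tmp/boost.tar.gz https://example.org/boost.tar.gz \
    && tar -xzf /tmp/boost.tar.gz -C /opt/boost \
    && cd /opt/boost/boost_src \
    && ./bootstrap.sh \
    && ./b2 install
```

DL3003 normally asks for `WORKDIR` instead of `cd`; the pragma suppresses it for this one instruction, which is reasonable when the directory change is transient to a single build step.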
[jira] [Commented] (HADOOP-16679) Switch to okhttp3
[ https://issues.apache.org/jira/browse/HADOOP-16679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157237#comment-17157237 ]

lindongdong commented on HADOOP-16679:
--

okhttp 2.7.5 does not support IPv6 addresses; I think it is necessary to update the version. See https://github.com/square/okhttp/issues/2618

> Switch to okhttp3
> -
>
> Key: HADOOP-16679
> URL: https://issues.apache.org/jira/browse/HADOOP-16679
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, fs/azure
> Reporter: Lukas Majercak
> Assignee: Lukas Majercak
> Priority: Major
>
> Switch from okhttp 2.7.5 to 3.*
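The IPv6 concern is about URL host parsing: per RFC 3986, an IPv6 literal in a URL must be bracketed, and the linked okhttp issue reports okhttp 2.x failing on such hosts. This sketch is not okhttp code; it only shows the bracketed form using the JDK's own parser:

```java
import java.net.URI;

public class Ipv6UrlDemo {

    // Extract the host component of a URL. For IPv6 literals the host
    // must be written in brackets, e.g. http://[2001:db8::1]:8080/path
    static String hostOf(String url) {
        return URI.create(url).getHost();
    }

    public static void main(String[] args) {
        System.out.println(hostOf("http://[2001:db8::1]:8080/webhdfs/v1/"));
    }
}
```

A client library that cannot parse this host form cannot talk to services bound to IPv6 addresses, which is the motivation given for the upgrade.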
[jira] [Commented] (HADOOP-16679) Switch to okhttp3
[ https://issues.apache.org/jira/browse/HADOOP-16679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157233#comment-17157233 ]

lindongdong commented on HADOOP-16679:
--

any update?

> Switch to okhttp3
> -
>
> Key: HADOOP-16679
> URL: https://issues.apache.org/jira/browse/HADOOP-16679
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, fs/azure
> Reporter: Lukas Majercak
> Assignee: Lukas Majercak
> Priority: Major
>
> Switch from okhttp 2.7.5 to 3.*
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
mukund-thakur commented on a change in pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#discussion_r454200493

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java

```diff
@@ -4181,79 +4181,126 @@ public LocatedFileStatus next() throws IOException {
     Path path = qualify(f);
     LOG.debug("listFiles({}, {})", path, recursive);
     try {
-      // if a status was given, that is used, otherwise
-      // call getFileStatus, which triggers an existence check
-      final S3AFileStatus fileStatus = status != null
-          ? status
-          : (S3AFileStatus) getFileStatus(path);
-      if (fileStatus.isFile()) {
+      // if a status was given and it is a file.
+      if (status != null && status.isFile()) {
         // simple case: File
         LOG.debug("Path is a file");
         return new Listing.SingleStatusRemoteIterator(
-            toLocatedFileStatus(fileStatus));
-      } else {
-        // directory: do a bulk operation
-        String key = maybeAddTrailingSlash(pathToKey(path));
-        String delimiter = recursive ? null : "/";
-        LOG.debug("Requesting all entries under {} with delimiter '{}'",
-            key, delimiter);
-        final RemoteIterator cachedFilesIterator;
-        final Set tombstones;
-        boolean allowAuthoritative = allowAuthoritative(f);
-        if (recursive) {
-          final PathMetadata pm = metadataStore.get(path, true);
-          // shouldn't need to check pm.isDeleted() because that will have
-          // been caught by getFileStatus above.
-          MetadataStoreListFilesIterator metadataStoreListFilesIterator =
-              new MetadataStoreListFilesIterator(metadataStore, pm,
-                  allowAuthoritative);
-          tombstones = metadataStoreListFilesIterator.listTombstones();
-          // if all of the below is true
-          // - authoritative access is allowed for this metadatastore for this directory,
-          // - all the directory listings are authoritative on the client
-          // - the caller does not force non-authoritative access
-          // return the listing without any further s3 access
-          if (!forceNonAuthoritativeMS &&
-              allowAuthoritative &&
-              metadataStoreListFilesIterator.isRecursivelyAuthoritative()) {
-            S3AFileStatus[] statuses = S3Guard.iteratorToStatuses(
-                metadataStoreListFilesIterator, tombstones);
-            cachedFilesIterator = listing.createProvidedFileStatusIterator(
-                statuses, ACCEPT_ALL, acceptor);
-            return listing.createLocatedFileStatusIterator(cachedFilesIterator);
-          }
-          cachedFilesIterator = metadataStoreListFilesIterator;
-        } else {
-          DirListingMetadata meta =
-              S3Guard.listChildrenWithTtl(metadataStore, path, ttlTimeProvider,
-                  allowAuthoritative);
-          if (meta != null) {
-            tombstones = meta.listTombstones();
-          } else {
-            tombstones = null;
-          }
-          cachedFilesIterator = listing.createProvidedFileStatusIterator(
-              S3Guard.dirMetaToStatuses(meta), ACCEPT_ALL, acceptor);
-          if (allowAuthoritative && meta != null && meta.isAuthoritative()) {
-            // metadata listing is authoritative, so return it directly
-            return listing.createLocatedFileStatusIterator(cachedFilesIterator);
-          }
+            toLocatedFileStatus(status));
+      }
+      // Assuming the path to be a directory
+      // do a bulk operation.
+      RemoteIterator listFilesAssumingDir =
+          getListFilesAssumingDir(path,
+              recursive,
+              acceptor,
+              collectTombstones,
+              forceNonAuthoritativeMS);
+      // If there are no list entries present, we
+      // fallback to file existence check as the path
+      // can be a file or empty directory.
+      if (!listFilesAssumingDir.hasNext()) {
+        // If file status was already passed, reuse it.
+        final S3AFileStatus fileStatus = status != null
+            ? status
+            : (S3AFileStatus) getFileStatus(path);
```

Review comment:
This won't work as explained in https://github.com/apache/hadoop/pull/2038#discussion_r450676043
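The patch under review follows a list-first shape: issue the bulk listing optimistically, and only fall back to a per-object existence check when the listing comes back empty (since an empty listing can mean a plain file, an empty directory, or nothing at all). As a neutral sketch of that pattern, with invented names (`Store`, `listFiles`, `demoStore` are illustrations here, not the S3A API):

```java
import java.util.List;
import java.util.Map;

public class ListOrStat {

    // Minimal stand-ins for the two S3 round trips involved.
    interface Store {
        List<String> list(String path);   // bulk LIST under a prefix
        boolean exists(String path);      // per-object HEAD check
    }

    // Try the bulk listing first; only when it is empty pay for the
    // existence check, instead of always checking existence up front.
    static List<String> listFiles(Store store, String path) {
        List<String> entries = store.list(path);
        if (!entries.isEmpty()) {
            return entries;               // common case: one round trip
        }
        if (store.exists(path)) {
            return List.of(path);         // plain file (or empty-dir marker)
        }
        throw new RuntimeException("FileNotFound: " + path);
    }

    // Tiny in-memory store for demonstration: "dir" has two children,
    // "file" exists but lists nothing under it.
    static Store demoStore() {
        Map<String, List<String>> listings = Map.of(
            "dir", List.of("dir/a", "dir/b"),
            "file", List.of());
        return new Store() {
            public List<String> list(String path) {
                return listings.getOrDefault(path, List.of());
            }
            public boolean exists(String path) {
                return listings.containsKey(path);
            }
        };
    }
}
```

The review objection concerns a corner of this fallback (reusing a caller-supplied status), not the list-first idea itself; the sketch only shows why the reordering saves a round trip on the common directory case.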
[GitHub] [hadoop] hadoop-yetus commented on pull request #2051: HDFS-15385 Upgrade boost library
hadoop-yetus commented on pull request #2051:
URL: https://github.com/apache/hadoop/pull/2051#issuecomment-658017562

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 23m 35s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 7s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 4s | trunk passed |
| +1 :green_heart: | compile | 20m 37s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 17m 32s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | mvnsite | 18m 8s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 46s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 31s | root in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 5m 35s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -0 :warning: | patch | 22m 13s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 21m 54s | the patch passed |
| +1 :green_heart: | compile | 20m 17s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| -1 :x: | cc | 20m 17s | root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 9 new + 153 unchanged - 9 fixed = 162 total (was 162) |
| +1 :green_heart: | golang | 20m 17s | the patch passed |
| +1 :green_heart: | javac | 20m 17s | the patch passed |
| +1 :green_heart: | compile | 17m 43s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -1 :x: | cc | 17m 43s | root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 27 new + 135 unchanged - 27 fixed = 162 total (was 162) |
| +1 :green_heart: | golang | 17m 43s | the patch passed |
| +1 :green_heart: | javac | 17m 43s | the patch passed |
| -1 :x: | hadolint | 0m 4s | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | mvnsite | 18m 2s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 13s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 35s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 31s | root in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 5m 30s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
||| _ Other Tests _ |
| -1 :x: | unit | 595m 42s | root in the patch passed. |
| -1 :x: | asflicense | 2m 28s | The patch generated 1 ASF License warnings. |
| | | 831m 59s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.mapreduce.lib.input.TestCombineFileInputFormat |
| | hadoop.yarn.sls.appmaster.TestAMSimulator |
| | hadoop.yarn.applications.distributedshell.TestDistributedShell |
| | hadoop.yarn.server.resourcemanager.placement.TestUserGroupMappingPlacementRule |
| | hadoop.security.TestFixKerberosTicketOrder |
| | hadoop.security.TestRaceWhenRelogin |
| | hadoop.hdfs.server.federation.router.TestRouterRpc |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2051/13/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2051 |
| Optional Tests | dupname asflicense hadolint shellch