[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154237#comment-17154237 ] Hadoop QA commented on HADOOP-16492:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 41s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | markdownlint | 0m 0s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 21 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 1s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 29s | trunk passed |
| +1 | compile | 18m 6s | trunk passed |
| +1 | checkstyle | 2m 53s | trunk passed |
| +1 | mvnsite | 3m 48s | trunk passed |
| +1 | shadedclient | 22m 47s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 52s | trunk passed |
| 0 | spotbugs | 0m 28s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 0m 28s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| 0 | findbugs | 0m 28s | branch/hadoop-tools/hadoop-tools-dist no findbugs output file (findbugsXml.xml) |
| 0 | findbugs | 0m 28s | branch/hadoop-cloud-storage-project/hadoop-cloud-storage no findbugs output file (findbugsXml.xml) |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 7s | the patch passed |
| +1 | compile | 20m 19s | the patch passed |
| +1 | javac | 20m 19s | the patch passed |
| +1 | checkstyle | 3m 31s | the patch passed |
| +1 | mvnsite | 4m 52s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 13s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 16m 55s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 48s | the patch passed |
| 0 | findbugs | 0m 31s | hadoop-project has no data from findbugs |
| 0 | findbugs | 0m 28s | hadoop-tools/hadoop-tools-dist has no data from findbugs |
| 0 | findbugs | 0m 25s | hadoop-cloud-storage-project/hadoop-cloud-storage has no data from findbugs |
|| || || || Other Tests ||
[GitHub] [hadoop] aajisaka commented on pull request #2126: YARN-10344. Sync netty versions in hadoop-yarn-csi.
aajisaka commented on pull request #2126: URL: https://github.com/apache/hadoop/pull/2126#issuecomment-655921558 Thank you! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17113) Adding ReadAhead Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mehakmeet Singh updated HADOOP-17113: - Description: Adding ReadAheads Counters in ABFS to track the behavior of the ReadAhead feature in ABFS. This would include 2 counters: |READ_AHEAD_BYTES_READ|number of bytes read by readAhead| |READ_AHEAD_REMOTE_BYTES_READ|number of bytes not used after readAhead was used| was: Adding ReadAheads Counters in ABFS to track the behavior of the ReadAhead feature in ABFS. This would include 2 counters: |READ_AHEAD_REQUESTED_BYTES|number of bytes read by readAhead| |READ_AHEAD_REMOTE_BYTES|number of bytes not used after readAhead was used| > Adding ReadAhead Counters in ABFS > - > > Key: HADOOP-17113 > URL: https://issues.apache.org/jira/browse/HADOOP-17113 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > Adding ReadAheads Counters in ABFS to track the behavior of the ReadAhead > feature in ABFS. This would include 2 counters: > |READ_AHEAD_BYTES_READ|number of bytes read by readAhead| > |READ_AHEAD_REMOTE_BYTES_READ|number of bytes not used after readAhead was > used| -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] brahmareddybattula merged pull request #2126: YARN-10344. Sync netty versions in hadoop-yarn-csi.
brahmareddybattula merged pull request #2126: URL: https://github.com/apache/hadoop/pull/2126
[GitHub] [hadoop] brahmareddybattula commented on pull request #2126: YARN-10344. Sync netty versions in hadoop-yarn-csi.
brahmareddybattula commented on pull request #2126: URL: https://github.com/apache/hadoop/pull/2126#issuecomment-655900211

+1
[GitHub] [hadoop] umamaheswararao commented on pull request #2131: HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml
umamaheswararao commented on pull request #2131: URL: https://github.com/apache/hadoop/pull/2131#issuecomment-655899012

Thanks @smengcl for reporting it. Yes, when we added the other filesystems to the config, the ofs branch was not yet merged. Thanks for the fix. All good. Considering @bharatviswa504's vote, go ahead with merging. Thanks @bharatviswa504 for the review. Pending jenkins.
[GitHub] [hadoop] smengcl opened a new pull request #2131: HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml
smengcl opened a new pull request #2131: URL: https://github.com/apache/hadoop/pull/2131

https://issues.apache.org/jira/browse/HDFS-15462

HDFS-15394 added the existing FS implementations to core-default.xml except ofs. Let's add ofs to core-default.xml as well.
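For context, the missing entry would sit alongside the existing `fs.viewfs.overload.scheme.target.*.impl` properties in core-default.xml. The sketch below is an assumption (the implementation class and description text are illustrative, based on ofs being Ozone's rooted filesystem, and are not copied from the patch):

```xml
<!-- Illustrative sketch only: the value assumes ofs maps to Ozone's
     rooted filesystem; check the committed patch for the exact entry. -->
<property>
  <name>fs.viewfs.overload.scheme.target.ofs.impl</name>
  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
  <description>The target implementation to use when the ViewFileSystem
    overload scheme is enabled and the scheme is ofs.</description>
</property>
```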
[jira] [Updated] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-17120: -- Fix Version/s: 3.1.5 3.3.1 2.10.1 3.2.2 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed this to branch-3.3, branch-3.2, branch-3.1 and branch-2.10. > Fix failure of docker image creation due to pip2 install error > -- > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Fix For: 3.2.2, 2.10.1, 3.3.1, 3.1.5 > > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat}
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154162#comment-17154162 ] Brahma Reddy Battula commented on HADOOP-17119: --- [~BilwaST] thanks for reporting. We don't need to check the cause of the bind exception; whatever the exception is, we just want to ignore it and try another port, so HADOOP-17119.001.patch should be fine, I feel.

> Jetty upgrade to 9.4.x causes MR app fail with IOException
> --
>
> Key: HADOOP-17119
> URL: https://issues.apache.org/jira/browse/HADOOP-17119
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.1.1
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
> Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch
>
> I think we should catch IOException here instead of BindException in
> HttpServer2#bindForPortRange
> {code:java}
> for (Integer port : portRanges) {
>   if (port == startPort) {
>     continue;
>   }
>   Thread.sleep(100);
>   listener.setPort(port);
>   try {
>     bindListener(listener);
>     return;
>   } catch (BindException ex) {
>     // Ignore exception. Move to next port.
>     ioException = ex;
>   }
> }
> {code}
> Stacktrace:
> {code:java}
> HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142
> java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101
> at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
> at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190)
> at org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258)
> at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282)
> at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
> at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451)
> at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440)
> at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
> at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
> ... 17 more
> {code}
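The change under discussion is one line in the retry loop: catch IOException instead of only BindException, because Jetty 9.4 surfaces the bind failure as a plain java.io.IOException wrapping the BindException, which aborts the port-range retry prematurely. Below is a minimal, self-contained sketch of that retry behavior; the class name, port numbers, and the plain ServerSocket stand-in are illustrative only, the real code lives in HttpServer2#bindForPortRange:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortRangeBind {

    // Walk a port range and bind to the first free port. Catching
    // IOException (rather than only BindException) mirrors the proposed
    // fix: any bind-time failure means "try the next port".
    static ServerSocket bindInRange(int start, int end) throws IOException {
        IOException last = null;
        for (int port = start; port <= end; port++) {
            ServerSocket socket = new ServerSocket();
            try {
                socket.bind(new InetSocketAddress("127.0.0.1", port));
                return socket;
            } catch (IOException ex) { // was: catch (BindException ex)
                socket.close();
                last = ex;             // remember and move to the next port
            }
        }
        throw last;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket first = bindInRange(28100, 28110);
             // the first port is now busy, so the loop must skip past it
             ServerSocket second = bindInRange(first.getLocalPort(), 28110)) {
            System.out.println(second.getLocalPort() != first.getLocalPort());
        }
    }
}
```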
[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhongjun updated HADOOP-16492: -- Attachment: HADOOP-16492.013.patch > Support HuaweiCloud Object Storage as a Hadoop Backend File System > -- > > Key: HADOOP-16492 > URL: https://issues.apache.org/jira/browse/HADOOP-16492 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 3.4.0 >Reporter: zhongjun >Priority: Major > Attachments: Difference Between OBSA and S3A.pdf, > HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, > HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, > HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, > HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, > HADOOP-16492.013.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf > > > Added support for HuaweiCloud OBS > ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, > just like what we do before for S3, ADLS, OSS, etc. With simple > configuration, Hadoop applications can read/write data from OBS without any > code change.
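As a sketch of the "simple configuration" mentioned above, an OBS client setup would typically follow the s3a-style conventions of the other cloud connectors. The property names, implementation class, and endpoint below are assumptions for illustration, not values taken from the patch:

```xml
<!-- Illustrative only: verify property names against the
     hadoop-huaweicloud module's documentation. -->
<property>
  <name>fs.obs.impl</name>
  <value>org.apache.hadoop.fs.obs.OBSFileSystem</value>
</property>
<property>
  <name>fs.obs.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.obs.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<property>
  <name>fs.obs.endpoint</name>
  <value>obs.cn-north-1.myhuaweicloud.com</value>
</property>
```

With such a configuration in place, jobs address paths like `obs://bucket/path` with no code changes.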
[GitHub] [hadoop] iwasakims commented on pull request #2130: HADOOP-17120. Fix failure of docker image creation due to pip2 install error.
iwasakims commented on pull request #2130: URL: https://github.com/apache/hadoop/pull/2130#issuecomment-655861058

Thanks, @aajisaka. I updated the title and merged. I'm going to cherry-pick this to other branches.
[GitHub] [hadoop] iwasakims merged pull request #2130: HADOOP-17120. Fix failure of docker image creation due to pip2 install error.
iwasakims merged pull request #2130: URL: https://github.com/apache/hadoop/pull/2130
[jira] [Updated] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-17120: -- Summary: Fix failure of docker image creation due to pip2 install error (was: Fix failure of docker image creation due to pip2 install error on branch-3.1) > Fix failure of docker image creation due to pip2 install error > -- > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat}
[GitHub] [hadoop] hadoop-yetus commented on pull request #2130: HADOOP-17120. Fix failure of docker image creation due to pip2 install error on branch-3.1.
hadoop-yetus commented on pull request #2130: URL: https://github.com/apache/hadoop/pull/2130#issuecomment-655857354

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 12m 34s | Docker mode activated. |
||| _ Prechecks _ | |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | hadolint | 0m 0s | hadolint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-3.1 Compile Tests _ | |
| +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch |
| -1 :x: | shadedclient | 1m 9s | branch has errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ | |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 16s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 :x: | shadedclient | 0m 50s | patch has errors when building and testing our client artifacts. |
||| _ Other Tests _ | |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 17m 54s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2130/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2130 |
| Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
| uname | Linux 6fbb7706b6e2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.1 / 4fa8055 |
| Max. process+thread count | 81 (vs. ulimit of 5500) |
| modules | C: U: |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2130/1/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.3.7 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2130: HADOOP-17120. Fix failure of docker image creation due to pip2 install error on branch-3.1.
hadoop-yetus commented on pull request #2130: URL: https://github.com/apache/hadoop/pull/2130#issuecomment-655852395

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/hadoop-multibranch/job/PR-2130/1/console in case of problems.
[jira] [Updated] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-17120: -- Status: Patch Available (was: Open) > Fix failure of docker image creation due to pip2 install error on branch-3.1 > > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat}
[jira] [Commented] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154131#comment-17154131 ] Masatake Iwasaki commented on HADOOP-17120: --- Thanks, [~aajisaka]. I created PR. > Fix failure of docker image creation due to pip2 install error on branch-3.1 > > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat}
[GitHub] [hadoop] iwasakims opened a new pull request #2130: HADOOP-17120. Fix failure of docker image creation due to pip2 install error on branch-3.1.
iwasakims opened a new pull request #2130: URL: https://github.com/apache/hadoop/pull/2130
[jira] [Commented] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154107#comment-17154107 ] Akira Ajisaka commented on HADOOP-17120: The cause is that the latest version of isort dropped Python 2 support. > Fix failure of docker image creation due to pip2 install error on branch-3.1 > > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat}
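Since pylint 1.9.2 declares `isort>=4.2.5` with no upper bound, pip2 resolves it to isort 5.x, which no longer installs on Python 2. One plausible mitigation is to pre-pin a Python 2-compatible isort release; the exact pin used by the committed fix may differ, so the version below is an assumption:

```shell
# Pin isort to a release line that still supported Python 2, so pylint's
# unbounded "isort>=4.2.5" requirement cannot pull in 5.x.
# The specific version is an assumption; verify against the committed patch.
pip2 install configparser==4.0.2 isort==4.3.21 pylint==1.9.2
```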
[GitHub] [hadoop] jojochuang commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
jojochuang commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r451883022

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
## @@ -320,21 +321,21 @@
   private Iterator bindUsers;
   private BindUserInfo currentBindUser;
-  private String userbaseDN;
+  private volatile String userbaseDN;

Review comment: Just realized it's there to make findbugs happy.
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154083#comment-17154083 ] Hadoop QA commented on HADOOP-17101:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 38s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 19s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 44s | trunk passed |
| +1 | compile | 17m 4s | trunk passed |
| +1 | checkstyle | 2m 41s | trunk passed |
| +1 | mvnsite | 5m 31s | trunk passed |
| +1 | shadedclient | 22m 43s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 4m 20s | trunk passed |
| 0 | spotbugs | 1m 23s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 0m 39s | branch/hadoop-build-tools no findbugs output file (findbugsXml.xml) |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 25s | the patch passed |
| +1 | compile | 16m 22s | the patch passed |
| +1 | javac | 16m 22s | the patch passed |
| +1 | checkstyle | 2m 43s | root: The patch generated 0 new + 67 unchanged - 3 fixed = 67 total (was 70) |
| +1 | mvnsite | 5m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 14m 8s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 4m 18s | the patch passed |
| 0 | findbugs | 0m 38s | hadoop-build-tools has no data from findbugs |
|| || || || Other Tests ||
| +1 | unit | 0m 40s | hadoop-build-tools in the patch passed. |
| -1 | unit | 9m 29s | hadoop-common in the patch failed. |
| -1 | unit | 98m 22s | hadoop-hdfs in the patch failed. |
| +1 | unit | 4m 28s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 7m 11s | hadoop-mapreduce-client-core in the patch passed. |
| +1 | asflicense | 1m 11s | The patch does not generate ASF License warnings. |
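For reference, the migration HADOOP-17101 describes is largely mechanical: call sites of Guava's `com.google.common.base.Function` become `java.util.function.Function`, which also unlocks method references and the Stream API so the Guava dependency can be dropped at those sites. A small illustrative sketch (the class and method names are invented for the example, not taken from the patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GuavaFunctionMigration {

    // Before: com.google.common.base.Function<String, Integer>, typically
    // passed to Guava's Lists.transform / Iterables.transform.
    // After: java.util.function.Function, usable as a method reference
    // and directly with Stream.map.
    static final Function<String, Integer> LENGTH = String::length;

    static List<Integer> lengths(List<String> names) {
        return names.stream().map(LENGTH).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(lengths(Arrays.asList("hdfs", "yarn", "mapreduce")));
        // prints [4, 4, 9]
    }
}
```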
[GitHub] [hadoop] iwasakims closed pull request #2129: YARN-10347. Fix double locking in CapacityScheduler#reinitialize in branch-3.1.
iwasakims closed pull request #2129: URL: https://github.com/apache/hadoop/pull/2129
[jira] [Commented] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154078#comment-17154078 ] Masatake Iwasaki commented on HADOOP-17120: --- I got no error on trunk. This seems to be a branch-specific issue. > Fix failure of docker image creation due to pip2 install error on branch-3.1 > > > Key: HADOOP-17120 > URL: https://issues.apache.org/jira/browse/HADOOP-17120 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.4 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > {noformat} > The command '/bin/sh -c pip2 install configparser==4.0.2 > pylint==1.9.2' returned a non-zero code: 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154069#comment-17154069 ] Masatake Iwasaki commented on HADOOP-17120: --- {noformat} Step 26/37 : RUN pip2 install configparser==4.0.2 pylint==1.9.2 ---> Running in ee3598ed6a5a Collecting configparser==4.0.2 Downloading https://files.pythonhosted.org/packages/7a/2a/95ed0501cf5d8709490b1d3a3f9b5cf340da6c433f896bbe9ce08dbe6785/configparser-4.0.2-py2.py3-none-any.whl Collecting pylint==1.9.2 Downloading https://files.pythonhosted.org/packages/f2/95/0ca03c818ba3cd14f2dd4e95df5b7fa232424b7fc6ea1748d27f293bc007/pylint-1.9.2-py2.py3-none-any.whl (690kB) Collecting singledispatch; python_version < "3.4" (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/c5/10/369f50bcd4621b263927b0a1519987a04383d4a98fb10438042ad410cf88/singledispatch-3.4.0.3-py2.py3-none-any.whl Collecting isort>=4.2.5 (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/c4/4d/b6286cf463f9cfca698b15524e1198856d68080096f05ca7b3437f0af867/isort-5.0.5.tar.gz (77kB) Collecting backports.functools-lru-cache; python_version == "2.7" (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/da/d1/080d2bb13773803648281a49e3918f65b31b7beebf009887a529357fd44a/backports.functools_lru_cache-1.6.1-py2.py3-none-any.whl Collecting mccabe (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl Collecting astroid<2.0,>=1.6 (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/8b/29/0f7ec6fbf28a158886b7de49aee3a77a8a47a7e24c60e9fd6ec98ee2ec02/astroid-1.6.6-py2.py3-none-any.whl (305kB) Collecting six (from pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl Collecting enum34>=1.1.3; 
python_version < "3.4" (from astroid<2.0,>=1.6->pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/6f/2c/a9386903ece2ea85e9807e0e062174dc26fdce8b05f216d00491be29fad5/enum34-1.1.10-py2-none-any.whl Collecting wrapt (from astroid<2.0,>=1.6->pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/82/f7/e43cefbe88c5fd371f4cf0cf5eb3feccd07515af9fd6cf7dbf1d1793a797/wrapt-1.12.1.tar.gz Collecting lazy-object-proxy (from astroid<2.0,>=1.6->pylint==1.9.2) Downloading https://files.pythonhosted.org/packages/f3/1f/3e31313f557e0b97bd8f9716f502fa85c5fef181f582f816a4796b8f9ee1/lazy_object_proxy-1.5.0-cp27-cp27mu-manylinux1_x86_64.whl (55kB) Building wheels for collected packages: isort, wrapt Running setup.py bdist_wheel for isort: started Running setup.py bdist_wheel for isort: finished with status 'error' Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ayvPD4/isort/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpzmXICSpip-wheel- --python-tag cp27: /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-2.7 creating build/lib.linux-x86_64-2.7/isort copying isort/sorting.py -> build/lib.linux-x86_64-2.7/isort copying isort/setuptools_commands.py -> build/lib.linux-x86_64-2.7/isort copying isort/logo.py -> build/lib.linux-x86_64-2.7/isort copying isort/wrap_modes.py -> build/lib.linux-x86_64-2.7/isort copying isort/__main__.py -> build/lib.linux-x86_64-2.7/isort copying isort/pylama_isort.py -> build/lib.linux-x86_64-2.7/isort copying isort/hooks.py -> build/lib.linux-x86_64-2.7/isort copying isort/main.py -> build/lib.linux-x86_64-2.7/isort copying isort/profiles.py -> build/lib.linux-x86_64-2.7/isort copying isort/exceptions.py -> 
build/lib.linux-x86_64-2.7/isort copying isort/_version.py -> build/lib.linux-x86_64-2.7/isort copying isort/io.py -> build/lib.linux-x86_64-2.7/isort copying isort/format.py -> build/lib.linux-x86_64-2.7/isort copying isort/output.py -> build/lib.linux-x86_64-2.7/isort copying isort/api.py -> build/lib.linux-x86_64-2.7/isort copying isort/utils.py -> build/lib.linux-x86_64-2.7/isort copying isort/__init__.py -> build/lib.linux-x86_64-2.7/isort copying isort/parse.py -> build/lib.linux-x86_64-2.7/isort copying isort/comments.py -> build/lib.linux-x86_64-2.7/isort copying isort/wrap.py -> build/lib.linux-x86_64-2.7/isort copying isort/place.py -> build/lib.linux-x86_64-2.7/isort copying isort/settings.py -> build/lib.linux-x86_64-2.7/isort copying isort/sections.py -> build/lib.linux-x86_64-2.7/isort creating build/lib.linux-x86_6
[jira] [Created] (HADOOP-17120) Fix failure of docker image creation due to pip2 install error on branch-3.1
Masatake Iwasaki created HADOOP-17120: - Summary: Fix failure of docker image creation due to pip2 install error on branch-3.1 Key: HADOOP-17120 URL: https://issues.apache.org/jira/browse/HADOOP-17120 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.1.4 Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki {noformat} The command '/bin/sh -c pip2 install configparser==4.0.2 pylint==1.9.2' returned a non-zero code: 1 {noformat}
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154058#comment-17154058 ] Hadoop QA commented on HADOOP-17116: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 12s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 36s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17032/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17116 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007330/HADOOP-17116.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8711a4057310 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | checkstyle | https://builds.apache.org/job/PreC
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154047#comment-17154047 ] Hadoop QA commented on HADOOP-17116: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 29s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 15s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17031/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17116 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007329/HADOOP-17116.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c3e847e015ae 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/17031/testReport/ | | Max. process+thread count | 3117 (vs. ulimit of 5
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Component/s: ha +1 pending Jenkins. Thanks for reporting this and the fix [~hanishakoneru]. > Skip Retry INFO logging on first failover from a proxy > -- > > Key: HADOOP-17116 > URL: https://issues.apache.org/jira/browse/HADOOP-17116 > Project: Hadoop Common > Issue Type: Bug > Components: ha >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch, > HADOOP-17116.003.patch > > > RetryInvocationHandler logs an INFO level message on every failover except > the first. This used to be ideal before when there were only 2 proxies in the > FailoverProxyProvider. But if there are more than 2 proxies (as is possible > with 3 or more NNs in HA), then there could be more than one failover to find > the currently active proxy. > To avoid creating noise in clients logs/ console, RetryInvocationHandler > should skip logging once for each proxy.
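The behavior described in the issue above — suppressing the first-failover INFO message because probing past the first of three or more NameNode proxies is routine — can be sketched as follows. This is a hypothetical illustration, not the actual RetryInvocationHandler patch; the class and method names are invented.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the logging policy: skip the INFO message for the
// very first failover, since with >2 proxies the client routinely fails over
// at least once just to locate the currently active NameNode.
public class FailoverLogDemo {
    private final AtomicInteger failoverCount = new AtomicInteger();

    /** Returns true when this failover should be logged at INFO level. */
    public boolean shouldLogFailover() {
        // First failover (count 0) is expected and suppressed; later ones log.
        return failoverCount.getAndIncrement() > 0;
    }

    public static void main(String[] args) {
        FailoverLogDemo demo = new FailoverLogDemo();
        System.out.println(demo.shouldLogFailover()); // first failover: suppressed
        System.out.println(demo.shouldLogFailover()); // second failover: logged
    }
}
```

The counter is per-handler here; per the issue description, the real fix skips the message once for each proxy rather than once globally.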
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153999#comment-17153999 ] Hadoop QA commented on HADOOP-17119: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 39s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 31s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.security.TestGroupsCaching | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17029/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17119 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007325/HADOOP-17119.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f7d363b96fef 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | unit | https://bu
[GitHub] [hadoop] jojochuang commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
jojochuang commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r451814316 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java ## @@ -320,21 +321,21 @@ private Iterator bindUsers; private BindUserInfo currentBindUser; - private String userbaseDN; + private volatile String userbaseDN; Review comment: Why do we use volatile variables here?
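For readers following the review question above: the point of `volatile` is cross-thread visibility of a field written by one thread and read by another. A minimal, self-contained illustration follows; the demo class is invented and is not the LdapGroupsMapping code.

```java
// Why a field mutated on one thread and read on another may be declared
// volatile: without it, the Java memory model gives the reader no guarantee
// of ever observing the writer's update.
public class VolatileDemo {
    private volatile String userbaseDN = "ou=old,dc=example";

    public void rebind(String dn) { userbaseDN = dn; }   // called by a writer thread
    public String current()      { return userbaseDN; }  // called by reader threads

    public static void main(String[] args) throws InterruptedException {
        VolatileDemo demo = new VolatileDemo();
        Thread writer = new Thread(() -> demo.rebind("ou=new,dc=example"));
        writer.start();
        writer.join();
        // join() itself establishes happens-before here; volatile matters for
        // reads that race with the write instead of waiting for it to finish.
        System.out.println(demo.current());
    }
}
```

Whether `volatile` is the right tool in the patch depends on whether `userbaseDN` is actually accessed from multiple threads, which is what the reviewer is asking.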
[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate
[ https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153990#comment-17153990 ] Ahmed Hussein commented on HADOOP-17099: I have checked the failing unit tests: |TestDecommission|Flaky| |TestDecommissionWithStripedBackoffMonitor|Flaky| |TestDFSStripedOutputStreamWithRandomECPolicy|Flaky| |TestExternalStoragePolicySatisfier|Flaky| |TestFileChecksum|Flaky| |TestFixKerberosTicketOrder|Flaky| |TestHDFSFileSystemContract|Flaky| |TestMaintenanceState|Flaky| |TestQuota|Flaky| |TestRaceWhenRelogin|Flaky| |TestSafeModeWithStripedFile|Flaky| |TestCapacityOverTimePolicy|Flaky| [~ayushtkn] , I believe this patch is ready. Can you please take a look at the changes and commit the patch if you are okay with it? > Replace Guava Predicate with Java8+ Predicate > - > > Key: HADOOP-17099 > URL: https://issues.apache.org/jira/browse/HADOOP-17099 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Minor > Attachments: HADOOP-17099.001.patch, HADOOP-17099.002.patch, > HADOOP-17099.003.patch > > > {{com.google.common.base.Predicate}} can be replaced with > {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward: > {code:java} > Targets > Occurrences of 'com.google.common.base.Predicate' in project with mask > '*.java' > Found Occurrences (9 usages found) > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > CombinedHostFileManager.java (1 usage found) > 43 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode (1 usage found) > NameNodeResourceChecker.java (1 usage found) > 38 import com.google.common.base.Predicate; > org.apache.hadoop.hdfs.server.namenode.snapshot (1 usage found) > Snapshot.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.metrics2.impl (2 usages found) > MetricsRecords.java (1 usage found) > 21 import com.google.common.base.Predicate; > TestMetricsSystemImpl.java (1 usage found) > 41 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation (1 usage found) > AggregatedLogFormat.java (1 usage found) > 77 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller (1 usage found) > LogAggregationFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.logaggregation.filecontroller.ifile (1 usage > found) > LogAggregationIndexedFileController.java (1 usage found) > 22 import com.google.common.base.Predicate; > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation > (1 usage found) > AppLogAggregatorImpl.java (1 usage found) > 75 import com.google.common.base.Predicate; > {code}
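The mechanical shape of the migration described above is small: Guava's `Predicate#apply` becomes the JDK's `Predicate#test`, and Guava helpers such as `Iterables.filter` can move to the stream API. A minimal sketch (illustrative class name, not code from the patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    public static void main(String[] args) {
        // Before (Guava): com.google.common.base.Predicate<String> p = ...;
        //                 boolean ok = p.apply("hadoop-common");
        // After (JDK 8+): same lambda, but the functional method is test().
        Predicate<String> p = input -> input.startsWith("hadoop");
        boolean ok = p.test("hadoop-common");

        // Call sites using Iterables.filter(modules, p) can become streams.
        List<String> modules = Arrays.asList("hadoop-common", "hadoop-hdfs", "guava");
        List<String> hadoopOnly =
            modules.stream().filter(p).collect(Collectors.toList());

        System.out.println(ok + " " + hadoopOnly); // true [hadoop-common, hadoop-hdfs]
    }
}
```

Since both interfaces are functional, lambdas and method references compile unchanged; only the import and the `apply`-to-`test` call sites differ.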
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-17116: Attachment: HADOOP-17116.003.patch > Skip Retry INFO logging on first failover from a proxy > -- > > Key: HADOOP-17116 > URL: https://issues.apache.org/jira/browse/HADOOP-17116 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch, > HADOOP-17116.003.patch > > > RetryInvocationHandler logs an INFO level message on every failover except > the first. This used to be ideal before when there were only 2 proxies in the > FailoverProxyProvider. But if there are more than 2 proxies (as is possible > with 3 or more NNs in HA), then there could be more than one failover to find > the currently active proxy. > To avoid creating noise in clients logs/ console, RetryInvocationHandler > should skip logging once for each proxy.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
hadoop-yetus commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-655741026 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 19s | trunk passed | | +1 :green_heart: | compile | 0m 34s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 29s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 20s | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | trunk passed | | +1 :green_heart: | shadedclient | 16m 37s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 24s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 22s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 50s | trunk passed | | -0 :warning: | patch | 1m 8s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | checkstyle | 0m 15s | the patch passed | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 30s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 22s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 66m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2123 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 34dd4239e5dd 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/4/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/4/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/4/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | P
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153979#comment-17153979 ] Hadoop QA commented on HADOOP-17116: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 30s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 12s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17028/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17116 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007247/HADOOP-17116.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ca6166e2d01b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/17028/testReport/ | | Max. process+thread count | 2740 (vs. ulimit of 5
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153978#comment-17153978 ] Ayush Saxena commented on HADOOP-17119: --- Thanx [~BilwaST] for the update. v002 LGTM +1 > Jetty upgrade to 9.4.x causes MR app fail with IOException > -- > > Key: HADOOP-17119 > URL: https://issues.apache.org/jira/browse/HADOOP-17119 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Bilwa S T >Assignee: Bilwa S T >Priority: Major > Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch > > > I think we should catch IOException here instead of BindException in > HttpServer2#bindForPortRange > {code:java} > for(Integer port : portRanges) { > if (port == startPort) { > continue; > } > Thread.sleep(100); > listener.setPort(port); > try { > bindListener(listener); > return; > } catch (BindException ex) { > // Ignore exception. Move to next port. > ioException = ex; > } > } > {code} > Stacktrace: > {code:java} > HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142 > java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101 > at > org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346) > at > org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307) > at > org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190) > at > org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258) > at > org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282) > at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440) > at > org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:194) > at > 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:194) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890) > Caused by: java.net.BindException: Address already in use > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85) > at > org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342) > ... 17 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
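The thread above argues that `HttpServer2#bindForPortRange` should not only catch `BindException`, because Jetty 9.4.x reports "Address already in use" as an `IOException` whose *cause* is a `BindException` (as the stacktrace shows). A minimal sketch of that retry loop, with a hypothetical `Binder` hook standing in for `bindListener` — this is illustrative, not the actual HttpServer2 code:

```java
import java.io.IOException;
import java.net.BindException;
import java.util.List;

public class PortRangeBinder {

    /** Hypothetical stand-in for HttpServer2#bindListener. */
    public interface Binder {
        void bind(int port) throws IOException;
    }

    /**
     * Try each port in the range. Jetty 9.4.x wraps the bind failure in an
     * IOException, so inspecting the cause (rather than catching only
     * BindException) is what lets the loop move on to the next port.
     */
    public static int bindForPortRange(List<Integer> ports, Binder binder)
            throws IOException {
        IOException last = null;
        for (int port : ports) {
            try {
                binder.bind(port);
                return port;                        // bound successfully
            } catch (BindException ex) {
                last = ex;                          // plain bind failure: try next port
            } catch (IOException ex) {
                if (ex.getCause() instanceof BindException) {
                    last = ex;                      // wrapped bind failure (Jetty 9.4.x)
                } else {
                    throw ex;                       // genuine I/O error: give up
                }
            }
        }
        if (last == null) {
            throw new IOException("no ports to try");
        }
        throw last;                                 // every port in the range was busy
    }
}
```

The v002 patch takes the narrower route of still rethrowing any `IOException` whose cause is not a bind failure, so real I/O errors are not silently swallowed by the port scan.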
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-17116: Attachment: HADOOP-17116.002.patch > Skip Retry INFO logging on first failover from a proxy > -- > > Key: HADOOP-17116 > URL: https://issues.apache.org/jira/browse/HADOOP-17116 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch > > > RetryInvocationHandler logs an INFO level message on every failover except > the first. This used to be ideal when there were only 2 proxies in the > FailoverProxyProvider. But if there are more than 2 proxies (as is possible > with 3 or more NNs in HA), then there could be more than one failover to find > the currently active proxy. > To avoid creating noise in client logs/console, RetryInvocationHandler > should skip logging once for each proxy.
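The issue description above asks RetryInvocationHandler to skip the INFO message once *per proxy*, not just on the first failover overall. One way that policy could look, sketched with a hypothetical helper class (this is not the actual Hadoop code):

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch of a per-proxy failover logging policy: with 3 or more NNs in HA,
 * several failovers may be needed just to locate the active proxy, and the
 * first attempt against each proxy should stay quiet.
 */
public class FailoverLogPolicy {
    private final Set<String> proxiesTried = new HashSet<>();

    /** Returns true when the failover to this proxy should be logged at INFO. */
    public boolean shouldLogFailover(String proxyId) {
        // Set.add returns false if the proxy was already seen, so the
        // first failover to each proxy is suppressed and later ones logged.
        return !proxiesTried.add(proxyId);
    }
}
```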
[GitHub] [hadoop] bilaharith commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
bilaharith commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-655727374 **Driver test results using accounts in Central India** mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify **Account with HNS Support** [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 74 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 **Account without HNS support** [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 248 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153973#comment-17153973 ] Hadoop QA commented on HADOOP-17119: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 16s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 46s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/17027/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17119 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13007323/HADOOP-17119.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux de2076308874 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5b1ed2113b8 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/17027/testReport/ | | Max. process+thread count | 2019 (vs. ulimit of 5
[jira] [Commented] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153972#comment-17153972 ] Bilahari T H commented on HADOOP-17092: --- *New configs added as part of the JIRA* The exponential retry policy used for the AAD token fetch retries can be tuned with the following configurations:
* fs.azure.oauth.token.fetch.retry.max.retries: Sets the maximum number of retries. Default value is 5.
* fs.azure.oauth.token.fetch.retry.min.backoff.interval: Minimum back-off interval, added to the retry interval computed from the delta backoff. By default this is set to 0. Set the interval in milliseconds.
* fs.azure.oauth.token.fetch.retry.max.backoff.interval: Maximum back-off interval. Default value is sixty seconds. Set the interval in milliseconds.
* fs.azure.oauth.token.fetch.retry.delta.backoff: Back-off interval between retries. Multiples of this timespan are used for subsequent retry attempts. The default value is 2.
> ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_.
We have > seen > https://github.com/apache/hadoop/pull/1923, > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hang for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. >
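A sketch of how the four fs.azure.oauth.token.fetch.retry.* settings listed above could drive an exponential back-off. The (2^n - 1) * delta formula and the class name are assumptions for illustration, not the exact ABFS implementation:

```java
/**
 * Hypothetical exponential retry policy shaped like the configuration knobs
 * described in the comment: max retries, min/max back-off, delta back-off.
 */
public class TokenFetchRetryPolicy {
    private final int maxRetries;      // fs.azure.oauth.token.fetch.retry.max.retries
    private final long minBackoffMs;   // ...retry.min.backoff.interval
    private final long maxBackoffMs;   // ...retry.max.backoff.interval
    private final long deltaBackoffMs; // ...retry.delta.backoff

    public TokenFetchRetryPolicy(int maxRetries, long minBackoffMs,
                                 long maxBackoffMs, long deltaBackoffMs) {
        this.maxRetries = maxRetries;
        this.minBackoffMs = minBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
        this.deltaBackoffMs = deltaBackoffMs;
    }

    public boolean shouldRetry(int retriesSoFar) {
        return retriesSoFar < maxRetries;
    }

    /** Interval grows as (2^n - 1) * delta, shifted by min and capped at max. */
    public long retryIntervalMs(int retriesSoFar) {
        long exponential = ((1L << retriesSoFar) - 1) * deltaBackoffMs;
        return Math.min(maxBackoffMs, minBackoffMs + exponential);
    }
}
```

Real implementations usually add random jitter to the exponential term; it is omitted here to keep the interval deterministic.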
[jira] [Commented] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153971#comment-17153971 ] Bilahari T H commented on HADOOP-17092: --- If the token fetch call results in an IOException, the AbfsRestOperation layer retries it; currently this is configured for 30 attempts. As part of this PR we added a retry policy of its own for the token fetch call. The fix: if all of those attempts fail with an IOException, it is converted to an HttpException so that the AbfsRestOperation layer above will not retry again. > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > --
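The fix described in this comment — retry the token fetch inside its own layer, then rethrow as an HttpException so the outer AbfsRestOperation 30-attempt loop does not retry again — can be sketched as follows. Class and method names here are illustrative stand-ins, not the actual ABFS code:

```java
import java.io.IOException;

public class TokenFetcher {

    /** Stand-in for the non-retryable HttpException used by the ABFS layer. */
    public static class HttpException extends IOException {
        public HttpException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    /** Hypothetical hook representing one AAD token fetch attempt. */
    public interface TokenCall {
        String fetch() throws IOException;
    }

    /**
     * Retry the token fetch here, and once the local retries are exhausted,
     * surface an HttpException: the outer AbfsRestOperation retry loop treats
     * that as non-retryable, avoiding another 30 attempts on top.
     */
    public static String getTokenWithRetry(TokenCall call, int maxRetries)
            throws HttpException {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.fetch();
            } catch (IOException ex) {
                last = ex;   // retry within the token-fetch layer only
            }
        }
        throw new HttpException("AAD token fetch failed after "
                + (maxRetries + 1) + " attempts", last);
    }
}
```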
[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153867#comment-17153867 ] Ahmed Hussein commented on HADOOP-17101: *I have checked unit tests failing on Yetus:* * TestBlockTokenWithDFSStriped: Filed Jira HDFS-15459 Flaky * TestExternalStoragePolicySatisfier: filed HDFS-15456 Flaky * TestFileChecksum: for HADOOP-17101, filed HDFS-15461 Flaky * TestFileCreation: filed HDFS-15460 Flaky * TestFsDatasetImpl: filed HDFS-15457 Flaky * TestGetFileChecksum: for HADOOP-17101, Filed HDFS-1546. an old jira exist HDFS-4723 Flaky * TestNameNodeRetryCacheMetrics filed a jira HDFS-1548 Flaky * TestPipelineFailover: Flaky * TestUnderReplicatedBlocks: Flaky * TestSafeModeWithStripedFileWithRandomECPolicy: Flaky > Replace Guava Function with Java8+ Function > --- > > Key: HADOOP-17101 > URL: https://issues.apache.org/jira/browse/HADOOP-17101 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17101.001.patch, HADOOP-17101.002.patch, > HADOOP-17101.003.patch, HADOOP-17101.004.patch > > > {code:java} > Targets > Occurrences of 'com.google.common.base.Function' > Found Occurrences (7 usages found) > hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff (1 usage found) > Apache_Hadoop_HDFS_2.6.0.xml (1 usage found) > 13603 type="com.google.common.base.Function" > org.apache.hadoop.hdfs.server.blockmanagement (1 usage found) > HostSet.java (1 usage found) > 20 import com.google.common.base.Function; > org.apache.hadoop.hdfs.server.datanode.checker (1 usage found) > AbstractFuture.java (1 usage found) > 58 * (ListenableFuture, com.google.common.base.Function) > Futures.transform} > org.apache.hadoop.hdfs.server.namenode.ha (1 usage found) > HATestUtil.java (1 usage found) > 40 import com.google.common.base.Function; > org.apache.hadoop.hdfs.server.protocol (1 usage found) > RemoteEditLog.java (1 usage found) > 20 
import com.google.common.base.Function; > org.apache.hadoop.mapreduce.lib.input (1 usage found) > TestFileInputFormat.java (1 usage found) > 58 import com.google.common.base.Function; > org.apache.hadoop.yarn.api.protocolrecords.impl.pb (1 usage found) > GetApplicationsRequestPBImpl.java (1 usage found) > 38 import com.google.common.base.Function; > {code}
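The migration this JIRA tracks is largely mechanical: java.util.function.Function has the same apply() shape as com.google.common.base.Function, so in most of the listed files the import can simply be swapped and the JDK interface then composes with streams without Guava helpers. A small illustration (the example data is made up):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GuavaFunctionMigration {
    // Before: com.google.common.base.Function<String, Integer> len = ...
    // After:  the JDK interface, usable with method references and streams.
    static final Function<String, Integer> LENGTH = String::length;

    /** Map file names to their lengths using the JDK Function. */
    static List<Integer> lengths(List<String> names) {
        return names.stream().map(LENGTH).collect(Collectors.toList());
    }
}
```

One caveat in such migrations: Guava's Function documents null-friendliness and an equals() contract that the JDK interface does not, so call sites relying on those details need a closer look.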
[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function
[ https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17101: --- Attachment: HADOOP-17101.004.patch > Replace Guava Function with Java8+ Function > ---
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153848#comment-17153848 ] Wei-Chiu Chuang commented on HADOOP-17119: -- +1 feel free to submit via github PR next time. Thanks! > Jetty upgrade to 9.4.x causes MR app fail with IOException > --
[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated HADOOP-17119: --- Attachment: HADOOP-17119.002.patch > Jetty upgrade to 9.4.x causes MR app fail with IOException > --
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153847#comment-17153847 ] Bilwa S T commented on HADOOP-17119: Hi [~ayushtkn], I think it makes sense to catch it if it is caused by a BindException. Will upload a patch.
[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153837#comment-17153837 ] Krzysztof Adamski commented on HADOOP-17112: You are right, Steve. Changing this to toUri resolves the problem. We are having some challenges properly meeting the extra test criteria, so this might take us a bit longer. > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.2.0 > Reporter: Krzysztof Adamski > Priority: Major > Attachments: image-2020-07-03-16-08-52-340.png > > > When saving results through a Spark dataframe on the latest 3.0.1-snapshot compiled against hadoop-3.2 with the following specs > --conf spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > --conf spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > --conf spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save a file with a whitespace character in the path. It works fine without one. > I was looking into the recent commits with regard to qualifying the path, but couldn't find anything obvious. Is this a known bug?
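The fix direction mentioned in the comment (qualifying the path via toUri) can be illustrated with plain java.net.URI: a raw string containing a space is rejected as an invalid URI, while the multi-argument constructor percent-encodes the path. This is a standalone sketch, not the actual committer code; the `s3aUri` helper, bucket, and path names are made up for illustration.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class WhitespacePathDemo {
    /**
     * Build an s3a URI for a bucket-relative path, letting the
     * multi-argument URI constructor percent-encode whitespace.
     */
    static URI s3aUri(String bucket, String path) throws URISyntaxException {
        return new URI("s3a", bucket, path, null);
    }

    public static void main(String[] args) throws URISyntaxException {
        // The single-argument constructor rejects a raw space outright,
        // which is the class of failure a whitespace path can trigger.
        boolean rejected = false;
        try {
            new URI("s3a://bucket/output/part 0000");
        } catch (URISyntaxException e) {
            rejected = true;
        }
        System.out.println("raw string rejected: " + rejected);

        // The multi-argument constructor encodes the space instead.
        URI u = s3aUri("bucket", "/output/part 0000");
        System.out.println(u.getRawPath()); // /output/part%200000
        System.out.println(u.getPath());    // /output/part 0000
    }
}
```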
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153832#comment-17153832 ] Ayush Saxena commented on HADOOP-17119: --- Thanx [~BilwaST] for the report. Do you want to catch all IOExceptions, or just the ones which actually are bind exceptions or are caused by a bind exception? Something like this:
{code:java}
} catch (IOException ex) {
  // Ignore the exception if it is due to a BindException. Move to the next port.
  if ((ex instanceof BindException)
      || (ex.getCause() instanceof BindException)) {
    ioException = ex;
  } else {
    throw ex;
  }
}
{code}
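The check discussed above can be sketched as a small runnable predicate. The class and method names here are illustrative, not the actual HttpServer2 code; the point is only that Jetty 9.4 wraps the BindException inside a plain IOException, so a `catch (BindException)` no longer fires and the cause must be inspected.

```java
import java.io.IOException;
import java.net.BindException;

public class BindCheckDemo {
    /**
     * True if the exception is a BindException or directly wraps one --
     * the condition under which port-range binding should move on to
     * the next port instead of failing the whole server start.
     */
    static boolean isBindRelated(IOException ex) {
        return ex instanceof BindException
            || ex.getCause() instanceof BindException;
    }

    public static void main(String[] args) {
        // Jetty 9.4 style: "Failed to bind ..." IOException wrapping the bind failure.
        IOException wrapped =
            new IOException("Failed to bind", new BindException("Address already in use"));
        // Pre-upgrade style: the BindException surfaced directly.
        IOException direct = new BindException("Address already in use");
        // Anything else should still propagate.
        IOException unrelated = new IOException("disk full");

        System.out.println(isBindRelated(wrapped));   // true
        System.out.println(isBindRelated(direct));    // true
        System.out.println(isBindRelated(unrelated)); // false
    }
}
```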
[jira] [Comment Edited] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143104#comment-17143104 ] Xiaoyu Yao edited comment on HADOOP-17079 at 7/8/20, 6:07 PM: -- https://github.com/apache/hadoop/pull/2085 was (Author: xyao): https://github.com/apache/hadoop/pull/2085.patch > Optimize UGI#getGroups by adding UGI#getGroupsSet > - > > Key: HADOOP-17079 > URL: https://issues.apache.org/jira/browse/HADOOP-17079 > Project: Hadoop Common > Issue Type: Improvement > Reporter: Xiaoyu Yao > Assignee: Xiaoyu Yao > Priority: Major > Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, > HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, > HADOOP-17079.007.patch > > > UGI#getGroups has been optimized with HADOOP-13442 by avoiding the List->Set->List conversion. However, the returned list is not optimized for contains lookups, especially when the user's group membership list is huge (thousands of entries or more). This ticket is opened to add a UGI#getGroupsSet and use Set#contains() instead of List#contains() to speed up large-group lookups while minimizing List->Set conversions in the Groups#getGroups() call.
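The List-vs-Set contains trade-off described in the ticket can be demonstrated in isolation: `List#contains` is a linear scan, while `HashSet#contains` is an expected O(1) hash lookup. The group names below are synthetic and the `toLookupSet` helper is illustrative, not part of UGI.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GroupLookupDemo {
    /** One-time List->Set conversion so later membership checks are O(1). */
    static Set<String> toLookupSet(List<String> groups) {
        return new HashSet<>(groups);
    }

    public static void main(String[] args) {
        // Thousands of group names, as described for large memberships.
        List<String> groupList = new ArrayList<>();
        for (int i = 0; i < 5000; i++) {
            groupList.add("group-" + i);
        }
        Set<String> groupSet = toLookupSet(groupList);

        // Same answers; the Set avoids scanning all 5000 entries per check.
        System.out.println(groupList.contains("group-4999")); // linear scan
        System.out.println(groupSet.contains("group-4999"));  // hash lookup
        System.out.println(groupSet.contains("not-a-group"));
    }
}
```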
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Priority: Blocker (was: Major) > Skip Retry INFO logging on first failover from a proxy > -- > > Key: HADOOP-17116 > URL: https://issues.apache.org/jira/browse/HADOOP-17116 > Project: Hadoop Common > Issue Type: Task > Reporter: Hanisha Koneru > Assignee: Hanisha Koneru > Priority: Blocker > Attachments: HADOOP-17116.001.patch > > > RetryInvocationHandler logs an INFO-level message on every failover except the first. This used to be fine when there were only 2 proxies in the FailoverProxyProvider, but if there are more than 2 proxies (as is possible with 3 or more NNs in HA), there can be more than one failover before the currently active proxy is found. > To avoid creating noise in client logs/consoles, RetryInvocationHandler should skip logging once for each proxy.
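The proposed behavior, skipping the log message on the first failover to each proxy, can be sketched with a small helper. This is a hypothetical illustration, not the actual RetryInvocationHandler code; the class and the string proxy identifiers are made up.

```java
import java.util.HashSet;
import java.util.Set;

public class FailoverLogDemo {
    /** Proxies we have already failed over to at least once. */
    private final Set<String> proxiesSeen = new HashSet<>();

    /**
     * Returns whether a failover to this proxy should be logged.
     * Set#add returns true the first time a proxy is seen, so the
     * first failover to each proxy stays quiet; repeats are logged.
     */
    boolean shouldLogFailover(String proxyId) {
        return !proxiesSeen.add(proxyId);
    }

    public static void main(String[] args) {
        FailoverLogDemo d = new FailoverLogDemo();
        // With 3 NNs in HA, the first failover to each proxy is just
        // discovery of the active one and should not emit INFO noise.
        System.out.println(d.shouldLogFailover("nn1")); // false
        System.out.println(d.shouldLogFailover("nn2")); // false
        // A repeat failover to the same proxy is worth an INFO line.
        System.out.println(d.shouldLogFailover("nn1")); // true
    }
}
```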
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Target Version/s: 3.3.1
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Priority: Major (was: Blocker)
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Issue Type: Bug (was: Task)
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-17116: --- Status: Patch Available (was: Open)
[jira] [Comment Edited] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153819#comment-17153819 ] Bilwa S T edited comment on HADOOP-17119 at 7/8/20, 6:00 PM: - [~weichiu] Uploaded a patch. Please check. was (Author: bilwast): [~weichiu] Uploaded a patch.
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153819#comment-17153819 ] Bilwa S T commented on HADOOP-17119: [~weichiu] Uploaded a patch.
[GitHub] [hadoop] sunchao commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens
sunchao commented on a change in pull request #2110: URL: https://github.com/apache/hadoop/pull/2110#discussion_r451729071
## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java ##
@@ -64,7 +69,13 @@ private String formatTokenId(TokenIdent id) {
    */
   protected final Map<TokenIdent, DelegationTokenInformation> currentTokens = new ConcurrentHashMap<>();
+
+  /**
+   * Map of token real owners to their token counts. This is used to generate
+   * top users by owned tokens.
+   */
+  protected final Map<String, Long> tokenOwnerStats = new ConcurrentHashMap<>();
Review comment: OK. Can you add a comment for this, though, indicating that this only supports RBF for now? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated HADOOP-17119: --- Attachment: HADOOP-17119.001.patch
[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated HADOOP-17119: --- Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153814#comment-17153814 ] Bilwa S T commented on HADOOP-17119: Ok, I will upload a patch for this.
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153811#comment-17153811 ] Wei-Chiu Chuang commented on HADOOP-17119: -- Feel free to submit a patch. If you look at the PR in HBASE-24197, I bet the patch applies cleanly here too.
[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-17119: - Affects Version/s: 3.1.1
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153808#comment-17153808 ] Wei-Chiu Chuang commented on HADOOP-17119: -- Yeah that makes sense now. It's the same as HBASE-24197.
[jira] [Comment Edited] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153803#comment-17153803 ] Bilwa S T edited comment on HADOOP-17119 at 7/8/20, 5:48 PM: - Hi [~weichiu] Sorry, the jetty upgrade caused this. Jetty version used is 9.4.20.v20190813 and Hadoop version is 3.1.1 was (Author: bilwast): Hi [~weichiu] Jetty version used is 9.4.20.v20190813 and Hadoop version is 3.1.1
[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153803#comment-17153803 ] Bilwa S T commented on HADOOP-17119: Hi [~weichiu] Jetty version used is 9.4.20.v20190813 and Hadoop version is 3.1.1
[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated HADOOP-17119: --- Summary: Jetty upgrade to 9.4.x causes MR app fail with IOException (was: Netty upgrade to 9.4.x causes MR app fail with IOException)
[jira] [Commented] (HADOOP-17119) Netty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153799#comment-17153799 ] Wei-Chiu Chuang commented on HADOOP-17119: -- Hadoop version? and the Netty version?
[jira] [Updated] (HADOOP-17119) Netty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated HADOOP-17119: --- Description: stacktrace appended to the issue description (the code snippet was unchanged)
[jira] [Commented] (HADOOP-17119) Netty upgrade to 9.4.x causes MR app fail with IOException
[ https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153772#comment-17153772 ] Bilwa S T commented on HADOOP-17119: cc [~weichiu] [~ste...@apache.org]
[jira] [Created] (HADOOP-17119) Netty upgrade to 9.4.x causes MR app fail with IOException
Bilwa S T created HADOOP-17119: -- Summary: Netty upgrade to 9.4.x causes MR app fail with IOException Key: HADOOP-17119 URL: https://issues.apache.org/jira/browse/HADOOP-17119 Project: Hadoop Common Issue Type: Bug Reporter: Bilwa S T Assignee: Bilwa S T I think we should catch IOException here instead of BindException in HttpServer2#bindForPortRange
[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153758#comment-17153758 ] Hadoop QA commented on HADOOP-16492: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 1s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 21 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 29s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} branch/hadoop-cloud-storage-project/hadoop-cloud-storage no findbugs output file (findbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} The patch fails to run checkstyle in root {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 20s{color} | {color:red} patch has errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 8s{color} | {color:red} hadoop-tools in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-huaweicloud in the patch passed. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 16s{color} | {color:blue} ASF License check generated no output? {color} | | {color:black}{color} | {color:black} {color} | {color:black}
[jira] [Created] (HADOOP-17118) TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently
Ahmed Hussein created HADOOP-17118:
Summary: TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently
Key: HADOOP-17118
URL: https://issues.apache.org/jira/browse/HADOOP-17118
Project: Hadoop Common
Issue Type: Bug
Reporter: Ahmed Hussein

{{TestFileCreation.testServerDefaultsWithMinimalCaching}} fails intermittently on trunk

{code:bash}
[ERROR] Tests run: 25, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 103.413 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileCreation
[ERROR] testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)  Time elapsed: 2.435 s  <<< FAILURE!
java.lang.AssertionError: expected:<402653184> but was:<268435456>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:834)
	at org.junit.Assert.assertEquals(Assert.java:645)
	at org.junit.Assert.assertEquals(Assert.java:631)
	at org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(TestFileCreation.java:279)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
[GitHub] [hadoop] hadoop-yetus commented on pull request #2128: HDFS-15452.Dynamically initialize the capacity of BlocksMap.
hadoop-yetus commented on pull request #2128: URL: https://github.com/apache/hadoop/pull/2128#issuecomment-655608353 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 23m 1s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 30s | trunk passed | | +1 :green_heart: | compile | 1m 16s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 1m 10s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 1m 0s | trunk passed | | +1 :green_heart: | mvnsite | 1m 13s | trunk passed | | +1 :green_heart: | shadedclient | 16m 2s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 35s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 45s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 2m 54s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 2m 51s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 8s | the patch passed | | +1 :green_heart: | compile | 1m 7s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 1m 7s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | checkstyle | 0m 53s | the patch passed | | +1 :green_heart: | mvnsite | 1m 6s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 51s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 31s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 2m 56s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 95m 30s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 40s | The patch does not generate ASF License warnings. 
| | | | 188m 41s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestGetFileChecksum | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2128 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 22e69578584b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3a4d05b8504 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.a
[jira] [Commented] (HADOOP-17117) Fix typos in hadoop-aws documentation
[ https://issues.apache.org/jira/browse/HADOOP-17117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153684#comment-17153684 ] Hudson commented on HADOOP-17117: SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18419 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18419/]) HADOOP-17117 Fix typos in hadoop-aws documentation (#2127) (github: rev 5b1ed2113b8e938ab2ff0fef7948148cb07e0457)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md

> Fix typos in hadoop-aws documentation
>
> Key: HADOOP-17117
> URL: https://issues.apache.org/jira/browse/HADOOP-17117
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, fs/s3
> Reporter: Sebastian Nagel
> Assignee: Sebastian Nagel
> Priority: Trivial
> Fix For: 3.3.1, 3.4.0
>
> There are a couple of typos in the hadoop-aws documentation (markdown). I'll open a PR.
[jira] [Updated] (HADOOP-17117) Fix typos in hadoop-aws documentation
[ https://issues.apache.org/jira/browse/HADOOP-17117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17117:
Summary: Fix typos in hadoop-aws documentation (was: Typos in hadoop-aws documentation)

> Fix typos in hadoop-aws documentation
>
> Key: HADOOP-17117
> URL: https://issues.apache.org/jira/browse/HADOOP-17117
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, fs/s3
> Reporter: Sebastian Nagel
> Assignee: Sebastian Nagel
> Priority: Trivial
> Fix For: 3.3.1, 3.4.0
>
> There are a couple of typos in the hadoop-aws documentation (markdown). I'll open a PR.
[jira] [Resolved] (HADOOP-17117) Typos in hadoop-aws documentation
[ https://issues.apache.org/jira/browse/HADOOP-17117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-17117.
Fix Version/s: 3.4.0, 3.3.1
Hadoop Flags: Reviewed
Assignee: Sebastian Nagel
Resolution: Fixed

Merged the PR into trunk and branch-3.3. Thank you [~snagel].

> Typos in hadoop-aws documentation
>
> Key: HADOOP-17117
> URL: https://issues.apache.org/jira/browse/HADOOP-17117
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, fs/s3
> Reporter: Sebastian Nagel
> Assignee: Sebastian Nagel
> Priority: Trivial
> Fix For: 3.3.1, 3.4.0
>
> There are a couple of typos in the hadoop-aws documentation (markdown). I'll open a PR.
[GitHub] [hadoop] aajisaka commented on pull request #2127: HADOOP-17117 Fix typos in hadoop-aws documentation
aajisaka commented on pull request #2127: URL: https://github.com/apache/hadoop/pull/2127#issuecomment-655575714 Thank you @sebastian-nagel for your contribution! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] aajisaka merged pull request #2127: HADOOP-17117 Fix typos in hadoop-aws documentation
aajisaka merged pull request #2127: URL: https://github.com/apache/hadoop/pull/2127
[GitHub] [hadoop] hadoop-yetus commented on pull request #2129: YARN-10347. Fix double locking in CapacityScheduler#reinitialize in branch-3.1.
hadoop-yetus commented on pull request #2129: URL: https://github.com/apache/hadoop/pull/2129#issuecomment-68964

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:-------|
| +0 :ok: | reexec | 0m 0s | Docker mode activated. |
| -1 :x: | docker | 7m 55s | Docker failed to build yetus/hadoop:d84386ccf7a. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/2129 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2129/1/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] iwasakims opened a new pull request #2129: YARN-10347. Fix double locking in CapacityScheduler#reinitialize in branch-3.1.
iwasakims opened a new pull request #2129: URL: https://github.com/apache/hadoop/pull/2129
[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhongjun updated HADOOP-16492:
Attachment: HADOOP-16492.012.patch

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 3.4.0
> Reporter: zhongjun
> Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
> Added support for HuaweiCloud OBS ([https://www.huaweicloud.com/en-us/product/obs.html]) to the Hadoop file system, just like what we did before for S3, ADLS, OSS, etc. With simple configuration, Hadoop applications can read/write data from OBS without any code change.
[GitHub] [hadoop] jianghuazhu opened a new pull request #2128: HDFS-15452.Dynamically initialize the capacity of BlocksMap.
jianghuazhu opened a new pull request #2128: URL: https://github.com/apache/hadoop/pull/2128

## NOTICE
Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] hadoop-yetus commented on pull request #2127: HADOOP-17117 Fix typos in hadoop-aws documentation
hadoop-yetus commented on pull request #2127: URL: https://github.com/apache/hadoop/pull/2127#issuecomment-655417150 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 5s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 48s | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | trunk passed | | +1 :green_heart: | shadedclient | 38m 9s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 29s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. | | | | 57m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2127/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2127 | | Optional Tests | dupname asflicense mvnsite markdownlint | | uname | Linux a6a60196f1a7 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3a4d05b8504 | | Max. process+thread count | 368 (vs. 
ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2127/1/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] sebastian-nagel opened a new pull request #2127: HADOOP-17117 Fix typos in hadoop-aws documentation
sebastian-nagel opened a new pull request #2127: URL: https://github.com/apache/hadoop/pull/2127
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.
mukund-thakur commented on a change in pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#discussion_r451386244 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/IOStatisticsSupport.java ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.statistics; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; + +/** + * Support for working with IOStatistics. + */ +@InterfaceAudience.Public +@InterfaceStability.Unstable +public final class IOStatisticsSupport { + + private IOStatisticsSupport() { + } + + /** + * Take a snapshot of the current statistics state. + * + * This is not an atomic option. + * + * The instance can be serialized, and its + * {@code toString()} method lists all the values. + * @param statistics statistics + * @return a snapshot of the current values. + */ + public static IOStatisticsSnapshot + snapshotIOStatistics(IOStatistics statistics) { + +IOStatisticsSnapshot stats = new IOStatisticsSnapshot(statistics); +stats.snapshot(statistics); Review comment: Why do we need to call snapshot() again? 
I can see the snapshot calculation just happened in the previous call during constructor initialisation.
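The redundancy the reviewer is pointing at can be shown with a small model. `LiveStats` and `StatsSnapshot` below are hypothetical stand-ins, not the Hadoop `IOStatistics`/`IOStatisticsSnapshot` classes: when the copy-constructor already captures the source's counter values, an immediately following `snapshot()` call copies the same values again and, absent concurrent updates, changes nothing.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicLong;

public class SnapshotDemo {
    public static void main(String[] args) {
        LiveStats live = new LiveStats();
        live.increment("stream_read_bytes");

        StatsSnapshot snap = new StatsSnapshot(live);      // first snapshot
        Map<String, Long> afterCtor = new TreeMap<>(snap.counters);
        snap.snapshot(live);                               // the questioned second call
        // Unless the live stats changed in between, the second call is a no-op:
        System.out.println(afterCtor.equals(snap.counters));
    }
}

// Mutable, thread-safe counters, standing in for a live statistics source.
class LiveStats {
    final Map<String, AtomicLong> counters = new TreeMap<>();
    void increment(String key) {
        counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }
}

// Immutable-style point-in-time copy; the constructor snapshots eagerly.
class StatsSnapshot {
    final Map<String, Long> counters = new TreeMap<>();
    StatsSnapshot(LiveStats source) {
        snapshot(source); // constructor already takes the snapshot
    }
    void snapshot(LiveStats source) {
        counters.clear();
        source.counters.forEach((k, v) -> counters.put(k, v.get()));
    }
}
```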
[jira] [Created] (HADOOP-17117) Typos in hadoop-aws documentation
Sebastian Nagel created HADOOP-17117:
Summary: Typos in hadoop-aws documentation
Key: HADOOP-17117
URL: https://issues.apache.org/jira/browse/HADOOP-17117
Project: Hadoop Common
Issue Type: Bug
Components: documentation, fs/s3
Reporter: Sebastian Nagel

There are a couple of typos in the hadoop-aws documentation (markdown). I'll open a PR.
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.
mukund-thakur commented on a change in pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#discussion_r451329375 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/CounterIOStatisticsBuilder.java ## @@ -0,0 +1,37 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.statistics.impl; + +/** + * Builder of the CounterIOStatistics class. + */ +public interface CounterIOStatisticsBuilder { Review comment: nit: Don't you think name is bit confusing here? We are providing support for min, max , mean as well here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16781) Backport HADOOP-16612 "Track Azure Blob File System client-perceived latency" to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeetesh Mangwani resolved HADOOP-16781.
Resolution: Won't Fix

Previously we used to keep branch-2 updated with the changes we pushed to trunk. This way, the few customers on older versions continued to get support. Now branch-2 is not maintained anymore.

> Backport HADOOP-16612 "Track Azure Blob File System client-perceived latency" to branch-2
>
> Key: HADOOP-16781
> URL: https://issues.apache.org/jira/browse/HADOOP-16781
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilahari T H
> Assignee: Jeetesh Mangwani
> Priority: Minor