[jira] [Commented] (HADOOP-16606) checksum link from hadoop web site is broken.
[ https://issues.apache.org/jira/browse/HADOOP-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938411#comment-16938411 ]

Rohith Sharma K S commented on HADOOP-16606:
--------------------------------------------

I am wondering why we need to move to sha512, though keeping mds does no harm as per the Apache standards. I might have missed critical discussions regarding this!

> checksum link from hadoop web site is broken.
> ---------------------------------------------
>
>                 Key: HADOOP-16606
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16606
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>            Assignee: Akira Ajisaka
>            Priority: Blocker
>
> Post HADOOP-16494, the artifacts generated for a release do not include the *mds* file, but the Hadoop web site binary tarball link still points to an mds file that does not exist. This breaks the Hadoop web site.
> For the 3.2.1 release, I have manually generated the *mds* file and pushed it into the artifacts folder so that the web site link is not broken.
> The same issue will happen for the 3.1.3 release as well.
> I am referring to the https://hadoop.apache.org/releases.html page for the checksum hyperlink.
> cc:/ [~vinodkv] [~tangzhankun] [~aajisaka]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16606) checksum link from hadoop web site is broken.
[ https://issues.apache.org/jira/browse/HADOOP-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16606:
---------------------------------------
    Description:
Post HADOOP-16494, the artifacts generated for a release do not include the *mds* file, but the Hadoop web site binary tarball link still points to an mds file that does not exist. This breaks the Hadoop web site.
For the 3.2.1 release, I have manually generated the *mds* file and pushed it into the artifacts folder so that the web site link is not broken.
The same issue will happen for the 3.1.3 release as well.
I am referring to the https://hadoop.apache.org/releases.html page for the checksum hyperlink.
cc:/ [~vinodkv] [~tangzhankun] [~aajisaka]

    was: (identical, with "manually generated md5 file" in place of "manually generated *mds* file")
[jira] [Commented] (HADOOP-16606) checksum link from hadoop web site is broken.
[ https://issues.apache.org/jira/browse/HADOOP-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937714#comment-16937714 ]

Rohith Sharma K S commented on HADOOP-16606:
--------------------------------------------

As per the Apache standard https://www.apache.org/dev/release-distribution#sigs-and-sums, the *mds* file *may* be provided. Should we keep both, i.e. mds and sha512?
[jira] [Created] (HADOOP-16606) checksum link from hadoop web site is broken.
Rohith Sharma K S created HADOOP-16606:
------------------------------------------

             Summary: checksum link from hadoop web site is broken.
                 Key: HADOOP-16606
                 URL: https://issues.apache.org/jira/browse/HADOOP-16606
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Rohith Sharma K S

Post HADOOP-16494, the artifacts generated for a release do not include the *mds* file, but the Hadoop web site binary tarball link still points to an mds file that does not exist. This breaks the Hadoop web site.

For the 3.2.1 release, I have manually generated the md5 file and pushed it into the artifacts folder so that the web site link is not broken.

The same issue will happen for the 3.1.3 release as well.

I am referring to the https://hadoop.apache.org/releases.html page for the checksum hyperlink.

cc:/ [~vinodkv] [~tangzhankun] [~aajisaka]
[jira] [Commented] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU
[ https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937702#comment-16937702 ]

Rohith Sharma K S commented on HADOOP-16193:
--------------------------------------------

Updated the fix version, as 3.1.3 is released.

> add extra S3A MPU test to see what happens if a file is created during the MPU
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-16193
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16193
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>             Fix For: 3.1.4
>
> Proposed extra test for the S3A MPU: if you create and then delete a file while an MPU is in progress, the new data is present when you finally complete the MPU.
> This verifies that the other FS operations don't somehow cancel the in-progress upload, and that eventual consistency brings the latest value out.
[jira] [Updated] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU
[ https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16193:
---------------------------------------
    Fix Version/s:     (was: 3.1.3)
                   3.1.4
[jira] [Commented] (HADOOP-16341) ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679
[ https://issues.apache.org/jira/browse/HADOOP-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937701#comment-16937701 ]

Rohith Sharma K S commented on HADOOP-16341:
--------------------------------------------

Updated the fix version, as 3.1.3 is released.

> ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-16341
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16341
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.1.2
>            Reporter: Gopal V
>            Assignee: Gopal V
>            Priority: Major
>             Fix For: 3.1.4
>
>         Attachments: HADOOP-16341.branch-3.1.002.patch, HADOOP-16341.branch-3.1.1.patch, shutdown-hook-removal.png
>
> !shutdown-hook-removal.png!
[jira] [Updated] (HADOOP-16341) ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679
[ https://issues.apache.org/jira/browse/HADOOP-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16341:
---------------------------------------
    Fix Version/s:     (was: 3.1.3)
                   3.1.4
[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-15864:
---------------------------------------
    Fix Version/s:     (was: 3.2.1)

> Job submitter / executor fail when SBN domain name can not resolved
> -------------------------------------------------------------------
>
>                 Key: HADOOP-15864
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15864
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Critical
>             Fix For: 3.0.4, 3.3.0, 3.1.2
>
>         Attachments: HADOOP-15864-branch.2.7.001.patch, HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, HADOOP-15864.004.patch, HADOOP-15864.005.patch, HADOOP-15864.branch.2.7.004.patch
>
> Job submission and task execution fail if the Standby NameNode domain name cannot be resolved, on HDFS HA with the DelegationToken feature.
> The issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode with security. In HDFS HA mode the UGI needs to include a separate token for each NameNode in order to deal with an Active-Standby switch; the two tokens' content is of course the same.
> However, when #setTokenService runs in {{HAUtil.cloneDelegationTokenForLogicalUri}}, it checks whether the address of the NameNode has been resolved; if not, it throws an #IllegalArgumentException, and the job submitter / task executor fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two tickets resolve it completely.
> Another question many people ask is: why can a NameNode domain name fail to resolve? There are many scenarios, for instance replacing a node after a fault, or refreshing DNS. In any case, a Standby NameNode failure should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil lines 373-386
> {code:java}
> public static Text buildTokenService(InetSocketAddress addr) {
>   String host = null;
>   if (useIpForTokenService) {
>     if (addr.isUnresolved()) { // host has no ip address
>       throw new IllegalArgumentException(
>           new UnknownHostException(addr.getHostName())
>       );
>     }
>     host = addr.getAddress().getHostAddress();
>   } else {
>     host = StringUtils.toLowerCase(addr.getHostName());
>   }
>   return new Text(host + ":" + addr.getPort());
> }
> {code}
> b. exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:691)
> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.<init>(ChRootedFileSystem.java:106)
> at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:377)
> at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by:
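The failure mode quoted above hinges on whether the standby NameNode's name resolves at all. A minimal shell sketch of that condition (host names here are hypothetical; assumes a Linux host with `getent`, and relies on the reserved `.invalid` TLD never resolving):

```shell
# buildTokenService throws IllegalArgumentException as soon as the address
# is unresolved. This pre-check only illustrates the resolvable/unresolved
# distinction; it is not part of the Hadoop patch.
check_resolvable() {
  if getent hosts "$1" > /dev/null 2>&1; then
    echo "resolved"
  else
    echo "unresolved"
  fi
}

check_resolvable "localhost"            # a name that resolves
check_resolvable "standby-nn.invalid"   # ".invalid" is reserved, never resolves
```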
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927523#comment-16927523 ]

Rohith Sharma K S commented on HADOOP-16494:
--------------------------------------------

bq. it might be a good idea to address this in a new issue. Do you think?
I am +1 for addressing this.

> Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16494
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16494
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Blocker
>             Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Originally reported by [~ctubbsii]: https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E
> bq. None of the artifacts seem to have valid detached checksum files that are in compliance with https://www.apache.org/dev/release-distribution There should be some ".shaXXX" files in there, and not just the (optional) ".mds" files.
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927241#comment-16927241 ]

Rohith Sharma K S commented on HADOOP-16494:
--------------------------------------------

I just checked previous releases; the same folder path as above is present in the md5 files. Looks like it is fine.
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927236#comment-16927236 ]

Rohith Sharma K S commented on HADOOP-16494:
--------------------------------------------

[~aajisaka] I created the release artifacts for 3.2.1 and tried to verify sha512. I see the error below:
{noformat}
rohithsharmaks@ip-172-31-38-26:~/branch-3.2.1/target/artifacts$ sha512sum -c CHANGELOG.md.sha512
sha512sum: /build/source/target/artifacts/CHANGELOG.md: No such file or directory
/build/source/target/artifacts/CHANGELOG.md: FAILED open or read
sha512sum: WARNING: 1 listed file could not be read
{noformat}
When the sha512 file is created, the entire folder path is taken into account, because *"${i}"* is the full path:
{noformat}
for i in ${ARTIFACTS_DIR}/*; do
  ${GPG} --use-agent --armor --output "${i}.asc" --detach-sig "${i}"
  sha512sum --tag "${i}" > "${i}.sha512"
done
{noformat}
Is this expected, or does this need a fix so that only the file name is recorded? How was the earlier md5 generation working?
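One possible fix for the path problem described above (a sketch only, not the committed patch; `ARTIFACTS_DIR` here is a temporary stand-in for the real artifacts directory) is to run `sha512sum` from inside the artifacts directory, so the `.sha512` file records only the base file name and verification works wherever the artifacts are copied:

```shell
# Sketch: hash the base name, not the absolute build path.
ARTIFACTS_DIR="$(mktemp -d)"                 # stand-in artifacts directory
echo "release notes" > "${ARTIFACTS_DIR}/CHANGELOG.md"

for i in "${ARTIFACTS_DIR}"/*.md; do
  # Subshell cd: the recorded name becomes "CHANGELOG.md", not the full path.
  (cd "$(dirname "${i}")" && sha512sum --tag "$(basename "${i}")" > "${i}.sha512")
done

# Verification now succeeds from the artifacts directory itself:
(cd "${ARTIFACTS_DIR}" && sha512sum -c CHANGELOG.md.sha512)
# → CHANGELOG.md: OK
```

GNU coreutils `sha512sum -c` auto-detects the BSD-style lines that `--tag` produces, so the check works unchanged.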
[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL
[ https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925127#comment-16925127 ]

Rohith Sharma K S commented on HADOOP-15922:
--------------------------------------------

[~eyang] [~tasanuma] [~hexiaoqiao] Could anyone update the release note, since this is an incompatible change for release 3.2.1?

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15922
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15922
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common, kms
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, HADOOP-15922.006.patch, HADOOP-15922.007.patch
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user from the client is a complete Kerberos name (e.g. user/hostn...@realm.com, which is actually acceptable), because DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the URL, which is encoded by {{URLEncoder}} at the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is acting as a doAsUser, it puts {{doas}} with a URL-encoded user as one parameter of the HTTP request:
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user. As a result, the KMS server gets the wrong proxy user whenever the proxy user is a complete Kerberos name or includes special characters, and authentication and authorization exceptions follow.
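A toy illustration of the encoding bug described in the HADOOP-15922 report (the user name is hypothetical, and the `sed` decoder below handles only the two escapes used in this example for brevity; the real server-side fix uses a full URL decoder):

```shell
# The client percent-encodes the proxy user before putting it in the URL.
doas_sent='user%2Fhost%40REALM.COM'

# Buggy server behavior: use the parameter as-is, so authorization checks
# run against the raw encoded string, not the real Kerberos principal.
doas_buggy="${doas_sent}"

# Fixed server behavior: URL-decode first (toy decoder for %2F and %40 only).
doas_fixed="$(printf '%s' "${doas_sent}" | sed 's|%2F|/|g; s|%40|@|g')"

echo "buggy: ${doas_buggy}"
echo "fixed: ${doas_fixed}"
```

With decoding in place the server sees `user/host@REALM.COM`, matching what the client intended.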
[jira] [Commented] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2
[ https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924783#comment-16924783 ]

Rohith Sharma K S commented on HADOOP-16211:
--------------------------------------------

This JIRA was committed under the commit message of HADOOP-16213. Updating the Fix Version to 3.2.1.
{code:java}
commit e0b3cbd221c1e611660b364a64d1aec52b10bc4e
Author: Sean Mackrory
Date:   Thu Jun 13 07:53:40 2019 -0600

    HADOOP-16213. Update guava to 27.0-jre. Contributed by Gabor Bota.
{code}

> Update guava to 27.0-jre in hadoop-project branch-3.2
> -----------------------------------------------------
>
>                 Key: HADOOP-16211
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16211
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.2.0
>            Reporter: Gabor Bota
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HADOOP-16211-branch-3.2.001.patch, HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, HADOOP-16211-branch-3.2.006.patch
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly found CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960, to track issues on that particular branch.
[jira] [Updated] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2
[ https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16211:
---------------------------------------
    Fix Version/s: 3.2.1
[jira] [Updated] (HADOOP-16225) Fix links to the developer mailing lists in DownstreamDev.md
[ https://issues.apache.org/jira/browse/HADOOP-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16225:
---------------------------------------
    Fix Version/s: 3.1.3
                   3.2.1
                   3.3.0

> Fix links to the developer mailing lists in DownstreamDev.md
> ------------------------------------------------------------
>
>                 Key: HADOOP-16225
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16225
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation
>    Affects Versions: 3.2.0, 3.0.3, 3.1.2
>            Reporter: Akira Ajisaka
>            Assignee: Wanqiang Ji
>            Priority: Minor
>              Labels: newbie
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HADOOP-16225.001.patch
>
> The following links are wrong.
> {noformat}
> * [dev-common](mailto:dev-com...@apache.org)
> * [dev-hdfs](mailto:dev-h...@apache.org)
> * [dev-mapreduce](mailto:dev-mapred...@apache.org)
> * [dev-yarn](mailto:dev-y...@apache.org)
> {noformat}
[jira] [Commented] (HADOOP-16225) Fix links to the developer mailing lists in DownstreamDev.md
[ https://issues.apache.org/jira/browse/HADOOP-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924780#comment-16924780 ]

Rohith Sharma K S commented on HADOOP-16225:
--------------------------------------------

Updated "Fix Version/s", as this is present in 3.3.0, 3.2.1 and 3.1.3.
[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated HADOOP-16269:
---------------------------------------
    Fix Version/s: 3.2.1
                   3.3.0
                   2.10.0

> ABFS: add listFileStatus with StartFrom
> ---------------------------------------
>
>                 Key: HADOOP-16269
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16269
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.0
>            Reporter: Da Zhou
>            Assignee: Da Zhou
>            Priority: Major
>             Fix For: 2.10.0, 3.3.0, 3.2.1
>
>         Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, HADOOP-16269-003.patch
>
> Adds a listFileStatus that starts listing a path from a given entry name, in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the FS-level API.
[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom
[ https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924779#comment-16924779 ]

Rohith Sharma K S commented on HADOOP-16269:
--------------------------------------------

Updated "Fix Version/s", as this is present in 3.3.0, 3.2.1 and 2.10.
[jira] [Commented] (HADOOP-15998) Ensure jar validation works on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919196#comment-16919196 ]

Rohith Sharma K S commented on HADOOP-15998:
--------------------------------------------

Thanks [~busbey] for committing the patch!

> Ensure jar validation works on Windows.
> ---------------------------------------
>
>                 Key: HADOOP-15998
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15998
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>         Environment: Windows 10
>                      Visual Studio 2017
>    Affects Versions: 3.2.0, 3.3.0
>            Reporter: Brian Grunkemeyer
>            Assignee: Brian Grunkemeyer
>            Priority: Blocker
>              Labels: build, windows
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid assumptions:
> 1) A colon shouldn't be used to separate multiple paths in command-line parameters, because colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with carriage-return / line-feed differences (lines ending in \r\n, not just \n).
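Minimal sketches of the two portability problems listed in the HADOOP-15998 description (illustrative only, not the committed patch; the `|` separator below is an arbitrary choice for the example):

```shell
# 1) CRLF tolerance: strip a trailing carriage return before comparing a
#    line read from a tool that emits Windows line endings.
line="$(printf 'hadoop-common.jar\r')"      # a line as read from a CRLF file
clean="$(printf '%s' "${line}" | tr -d '\r')"
echo "clean=[${clean}]"

# 2) Path lists: "C:\hadoop\a.jar" contains a colon, so a colon-separated
#    list is ambiguous on Windows; use a separator that cannot occur in a
#    path instead (here "|") and split on that.
paths='C:\hadoop\a.jar|/opt/hadoop/b.jar'
printf '%s\n' "${paths}" | tr '|' '\n'
```

Without the `tr -d '\r'`, a comparison such as `[ "${line}" = "hadoop-common.jar" ]` fails because of the invisible trailing `\r`.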
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917368#comment-16917368 ]

Rohith Sharma K S commented on HADOOP-15998:
--------------------------------------------

[~busbey] The latest patch has been verified by [~abmodi], who is also a committer. I think we should get this in unless others have concerns. I am +1 for this.
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916490#comment-16916490 ] Rohith Sharma K S commented on HADOOP-15998: Thanks [~abmodi] for validating the patch on Windows. [~busbey] Could we get this in, or should we wait? > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n)
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916438#comment-16916438 ] Rohith Sharma K S commented on HADOOP-15998: [~abmodi] [~Sushil-K-S] Could you please help verify this patch on Windows? I hope it doesn't take much time, but it needs a Windows build. > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n)
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915468#comment-16915468 ] Rohith Sharma K S commented on HADOOP-15998: I am tracking this JIRA for the 3.2.1 release. Thanks [~busbey] for reviewing and updating the patch. I don't have a Windows platform to verify it properly either. Let me find out whether anyone has a Windows platform to verify this. > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n)
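The two Windows assumptions called out in this thread can be sketched as follows. This is a minimal illustration of the *kind* of fixes the patch makes, not the committed script; the function names are hypothetical.

```shell
# 1) External tools on Windows may emit \r\n line endings; strip the
#    carriage returns before comparing their output line by line.
normalize_eol() {
  tr -d '\r'
}

# 2) ':' cannot separate multiple paths on Windows, because it occurs
#    inside paths like C:\work; join with a character that cannot appear.
join_paths() {
  local IFS='|'
  printf '%s\n' "$*"
}

out="$(printf 'hadoop-client.jar\r\n' | normalize_eol)"
joined="$(join_paths 'C:\work\a.jar' 'C:\work\b.jar')"
printf '%s\n' "$out" "$joined"
```

With the carriage return stripped, `$out` compares equal to the plain `hadoop-client.jar` a Unix tool would print, and the joined list round-trips Windows paths without ambiguity.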
[jira] [Commented] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
[ https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755657#comment-16755657 ] Rohith Sharma K S commented on HADOOP-16088: Attached a quick patch to unblock the build failure, but I am not sure whether there are any test failures later in the module. > Build failure for -Dhbase.profile=2.0 > - > > Key: HADOOP-16088 > URL: https://issues.apache.org/jira/browse/HADOOP-16088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Rohith Sharma K S >Priority: Blocker > Attachments: HADOOP-16088.01.patch > > > Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. > {noformat} > HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade > -Dhbase.profile=2.0 > [INFO] Scanning for projects... > [ERROR] [ERROR] Some problems were encountered while processing the POMs: > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is > missing. @ line 485, column 21 > @ > [ERROR] The build could not read 1 project -> [Help 1] > [ERROR] > [ERROR] The project > org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT > > (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) > has 1 error > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar > is missing. @ line 485, column 21 > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException > {noformat} > cc:/ [~ajisakaa]
[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
[ https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-16088: --- Attachment: HADOOP-16088.01.patch > Build failure for -Dhbase.profile=2.0 > - > > Key: HADOOP-16088 > URL: https://issues.apache.org/jira/browse/HADOOP-16088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Rohith Sharma K S >Priority: Blocker > Attachments: HADOOP-16088.01.patch > > > Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. > {noformat} > HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade > -Dhbase.profile=2.0 > [INFO] Scanning for projects... > [ERROR] [ERROR] Some problems were encountered while processing the POMs: > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is > missing. @ line 485, column 21 > @ > [ERROR] The build could not read 1 project -> [Help 1] > [ERROR] > [ERROR] The project > org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT > > (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) > has 1 error > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar > is missing. @ line 485, column 21 > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException > {noformat} > cc:/ [~ajisakaa] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
[ https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-16088: --- Status: Patch Available (was: Open) > Build failure for -Dhbase.profile=2.0 > - > > Key: HADOOP-16088 > URL: https://issues.apache.org/jira/browse/HADOOP-16088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Rohith Sharma K S >Priority: Blocker > Attachments: HADOOP-16088.01.patch > > > Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. > {noformat} > HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade > -Dhbase.profile=2.0 > [INFO] Scanning for projects... > [ERROR] [ERROR] Some problems were encountered while processing the POMs: > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is > missing. @ line 485, column 21 > @ > [ERROR] The build could not read 1 project -> [Help 1] > [ERROR] > [ERROR] The project > org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT > > (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) > has 1 error > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar > is missing. @ line 485, column 21 > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException > {noformat} > cc:/ [~ajisakaa] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
[ https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-16088: --- Target Version/s: 3.3.0 > Build failure for -Dhbase.profile=2.0 > - > > Key: HADOOP-16088 > URL: https://issues.apache.org/jira/browse/HADOOP-16088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Rohith Sharma K S >Priority: Blocker > > Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. > {noformat} > HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade > -Dhbase.profile=2.0 > [INFO] Scanning for projects... > [ERROR] [ERROR] Some problems were encountered while processing the POMs: > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is > missing. @ line 485, column 21 > @ > [ERROR] The build could not read 1 project -> [Help 1] > [ERROR] > [ERROR] The project > org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT > > (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) > has 1 error > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar > is missing. @ line 485, column 21 > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException > {noformat} > cc:/ [~ajisakaa] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
[ https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-16088: --- Description: Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. {noformat} HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade -Dhbase.profile=2.0 [INFO] Scanning for projects... [ERROR] [ERROR] Some problems were encountered while processing the POMs: [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. @ line 485, column 21 @ [ERROR] The build could not read 1 project -> [Help 1] [ERROR] [ERROR] The project org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) has 1 error [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. @ line 485, column 21 [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException {noformat} cc:/ [~ajisakaa] was: Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. {noformat} HW12723:hadoop rsharmaks$ mci [INFO] Scanning for projects... [ERROR] [ERROR] Some problems were encountered while processing the POMs: [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. 
@ line 485, column 21 @ [ERROR] The build could not read 1 project -> [Help 1] [ERROR] [ERROR] The project org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) has 1 error [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. @ line 485, column 21 [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException {noformat} cc:/ [~ajisakaa] > Build failure for -Dhbase.profile=2.0 > - > > Key: HADOOP-16088 > URL: https://issues.apache.org/jira/browse/HADOOP-16088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Rohith Sharma K S >Priority: Blocker > > Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. > {noformat} > HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade > -Dhbase.profile=2.0 > [INFO] Scanning for projects... > [ERROR] [ERROR] Some problems were encountered while processing the POMs: > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is > missing. @ line 485, column 21 > @ > [ERROR] The build could not read 1 project -> [Help 1] > [ERROR] > [ERROR] The project > org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT > > (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) > has 1 error > [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar > is missing. 
@ line 485, column 21 > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException > {noformat} > cc:/ [~ajisakaa]
[jira] [Created] (HADOOP-16088) Build failure for -Dhbase.profile=2.0
Rohith Sharma K S created HADOOP-16088: -- Summary: Build failure for -Dhbase.profile=2.0 Key: HADOOP-16088 URL: https://issues.apache.org/jira/browse/HADOOP-16088 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Post HADOOP-14178, hadoop build failure due to incorrect pom.xml. {noformat} HW12723:hadoop rsharmaks$ mci [INFO] Scanning for projects... [ERROR] [ERROR] Some problems were encountered while processing the POMs: [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. @ line 485, column 21 @ [ERROR] The build could not read 1 project -> [Help 1] [ERROR] [ERROR] The project org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml) has 1 error [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is missing. @ line 485, column 21 [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException {noformat} cc:/ [~ajisakaa] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
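The error in this thread is Maven's standard response to a `<dependency>` that has neither an inline `<version>` nor one inherited from a `<dependencyManagement>` section, so the fix shape is to give the hbase-2.0 profile's mockito-all dependency a version. The fragment below only illustrates that shape; the `x.y.z` version and the temp-file path are placeholders, not Hadoop's real pom.xml.

```shell
# Illustrative <dependency> declaration of the kind that satisfies the
# 'dependencies.dependency.version' check quoted above. Placeholder only.
cat > /tmp/dep.xml <<'EOF'
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>x.y.z</version>
</dependency>
EOF
# The quick sanity check the fix amounts to: the dependency carries a version.
grep -c '<version>' /tmp/dep.xml   # prints 1
```

Re-running with `-e`, as the log suggests, shows the full resolution stack if the managed version still fails to apply.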
[jira] [Created] (HADOOP-16078) Revert YARN-8270 from branch-3.1
Rohith Sharma K S created HADOOP-16078: -- Summary: Revert YARN-8270 from branch-3.1 Key: HADOOP-16078 URL: https://issues.apache.org/jira/browse/HADOOP-16078 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Assignee: Rohith Sharma K S It is observed that in hadoop-3.1-RC0, NodeManagers are unable to initialize TimelineCollectorWebService. The primary reason is that HADOOP-15657 is not present in the hadoop-3.1 branch. The following error is seen in the NM logs: {noformat} Caused by: org.apache.hadoop.metrics2.MetricsException: Unsupported metric field putEntitiesFailureLatency of type org.apache.hadoop.metrics2.lib.MutableQuantiles at org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(MutableMetricsFactory.java:87) {noformat} We need to revert YARN-8270 from branch-3.1.
[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version to 9.3.24
[ https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1767#comment-1767 ] Rohith Sharma K S commented on HADOOP-15815: FYI.. it appears trunk compilation fails after this change. See YARN-8950 > Upgrade Eclipse Jetty version to 9.3.24 > --- > > Key: HADOOP-15815 > URL: https://issues.apache.org/jira/browse/HADOOP-15815 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.1.1, 3.0.3 >Reporter: Boris Vulikh >Assignee: Boris Vulikh >Priority: Major > Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2 > > Attachments: HADOOP-15815.01-2.patch > > > * > [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657] > * > [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658] > * > [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656] > * > [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536] > We should upgrade the dependency to version 9.3.24 or the latest, if possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15884) Compilation fails for hadoop-yarn-server-timelineservice-hbase-client package
Rohith Sharma K S created HADOOP-15884: -- Summary: Compilation fails for hadoop-yarn-server-timelineservice-hbase-client package Key: HADOOP-15884 URL: https://issues.apache.org/jira/browse/HADOOP-15884 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Dependency check for hbase-client package fails {noformat} [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-yarn-server-timelineservice-hbase-client --- [WARNING] Dependency convergence error for org.eclipse.jetty:jetty-http:9.3.24.v20180605 paths to dependency are: +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:3.3.0-SNAPSHOT +-org.eclipse.jetty:jetty-server:9.3.24.v20180605 +-org.eclipse.jetty:jetty-http:9.3.24.v20180605 and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT +-org.apache.hbase:hbase-server:2.0.0-beta-1 +-org.apache.hbase:hbase-http:2.0.0-beta-1 +-org.eclipse.jetty:jetty-http:9.3.19.v20170502 [WARNING] Dependency convergence error for org.eclipse.jetty:jetty-security:9.3.24.v20180605 paths to dependency are: +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:3.3.0-SNAPSHOT +-org.eclipse.jetty:jetty-servlet:9.3.24.v20180605 +-org.eclipse.jetty:jetty-security:9.3.24.v20180605 and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT +-org.apache.hbase:hbase-server:2.0.0-beta-1 +-org.apache.hbase:hbase-http:2.0.0-beta-1 +-org.eclipse.jetty:jetty-security:9.3.19.v20170502 [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence failed with message: Failed while enforcing releasability. See above detailed error message. {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
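The enforcer report above boils down to two dependency paths pulling different jetty versions: 9.3.24.v20180605 via hadoop-common and 9.3.19.v20170502 via hbase-server. The usual remedy (an assumption on my part, not stated in this thread) is an exclusion or version alignment so a single jetty version wins; `mvn dependency:tree -Dincludes=org.eclipse.jetty` from the failing module shows the same paths. The snippet below just demonstrates reading the divergence out of the report; the heredoc reproduces two lines of it for illustration.

```shell
# Extract the version field from Maven coordinates like
# org.eclipse.jetty:jetty-http:9.3.24.v20180605, as printed in the
# convergence report quoted above (two sample lines only).
cat > /tmp/enforcer.txt <<'EOF'
+-org.eclipse.jetty:jetty-http:9.3.24.v20180605
+-org.eclipse.jetty:jetty-http:9.3.19.v20170502
EOF
# Coordinates are group:artifact:version, so the version is field 3.
cut -d: -f3 /tmp/enforcer.txt | sort -u
```

Two distinct versions surviving `sort -u` is exactly the condition the DependencyConvergence rule rejects.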
[jira] [Commented] (HADOOP-15657) Registering MutableQuantiles via Metric annotation
[ https://issues.apache.org/jira/browse/HADOOP-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604592#comment-16604592 ] Rohith Sharma K S commented on HADOOP-15657: I have added [~Sushil-K-S] to the contributor list. From now onwards he can assign issues to himself. > Registering MutableQuantiles via Metric annotation > -- > > Key: HADOOP-15657 > URL: https://issues.apache.org/jira/browse/HADOOP-15657 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Reporter: Sushil Ks >Assignee: Sushil Ks >Priority: Major > Attachments: HADOOP-15657.001.patch > > > Currently when creating new metrics we use @Metric annotation for registering > the MutableMetric i.e > {code:java} > @Metric > private MutableInt foobarMetricCount > {code} > However, there's no support for registering MutableQuantiles via Metric > annotation, hence creating this Jira to register MutableQuantiles via Metric > annotation. > Example: > > {code:java} > @Metric(about = "async PUT entities latency", valueName = "latency", interval > = 10) > private MutableQuantiles foobarAsyncLatency; > > {code} >
[jira] [Assigned] (HADOOP-15657) Registering MutableQuantiles via Metric annotation
[ https://issues.apache.org/jira/browse/HADOOP-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reassigned HADOOP-15657: -- Assignee: Sushil Ks > Registering MutableQuantiles via Metric annotation > -- > > Key: HADOOP-15657 > URL: https://issues.apache.org/jira/browse/HADOOP-15657 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Reporter: Sushil Ks >Assignee: Sushil Ks >Priority: Major > Attachments: HADOOP-15657.001.patch > > > Currently when creating new metrics we use @Metric annotation for registering > the MutableMetric i.e > {code:java} > @Metric > private MutableInt foobarMetricCount > {code} > However, there's no support for registering MutableQuantiles via Metric > annotation, hence creating this Jira to register MutableQuantiles via Metric > annotation. > Example: > > {code:java} > @Metric(about = "async PUT entities latency", valueName = "latency", interval > = 10) > private MutableQuantiles foobarAsyncLatency; > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15580) ATSv2 HBase tests are failing with ClassNotFoundException
Rohith Sharma K S created HADOOP-15580: -- Summary: ATSv2 HBase tests are failing with ClassNotFoundException Key: HADOOP-15580 URL: https://issues.apache.org/jira/browse/HADOOP-15580 Project: Hadoop Common Issue Type: Improvement Reporter: Rohith Sharma K S It is seen in recent QA report that ATSv2 Hbase tests are failing with ClassNotFoundException. This looks to be regression from hadoop common patch or any other patch. We need to figure out which JIRA broke this and fix tests failure. {noformat} ERROR] org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps Time elapsed: 0.102 s <<< ERROR! java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps.setupBeforeClass(TestHBaseTimelineStorageApps.java:97) Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.crypto.key.KeyProviderTokenIssuer at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To 
unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15514) NoClassDefFoundError for TimelineCollectorManager when starting MiniYARNCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502980#comment-16502980 ] Rohith Sharma K S commented on HADOOP-15514: thanks [~sunilg] for committing and thanks to [~zjffdu] for verification. > NoClassDefFoundError for TimelineCollectorManager when starting > MiniYARNCluster > --- > > Key: HADOOP-15514 > URL: https://issues.apache.org/jira/browse/HADOOP-15514 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jeff Zhang >Assignee: Rohith Sharma K S >Priority: Major > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-15514.01.patch > > > {code:java} > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: > java.lang.NoClassDefFoundError: > org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) > at java.net.URLClassLoader.access$100(URLClassLoader.java:73) > at java.net.URLClassLoader$1.run(URLClassLoader.java:368) > at java.net.URLClassLoader$1.run(URLClassLoader.java:362) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:361) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.getDeclaredMethods0(Native Method) > at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) > at 
java.lang.Class.getDeclaredMethods(Class.java:1975){code} >
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502873#comment-16502873 ] Rohith Sharma K S commented on HADOOP-15483: I did basic testing with this patch and found that it breaks the YARN scheduler queue page. This patch needs to be revisited for the YARN UI. > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch > > > This Jira aims to upgrade jquery to version 3.3.1.
[jira] [Comment Edited] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501521#comment-16501521 ] Rohith Sharma K S edited comment on HADOOP-15514 at 6/5/18 11:01 AM: - Updated the patch fixing the MiniYARNCluster start issue. Following are the changes in the patch # The timeline service jar was excluded from the hadoop-client-minicluster jar. This patch includes the timeline-service jar classes. # After the above change, I started getting a NoClassDefFoundError for the zookeeper package. Looking at hadoop-client-minicluster.jar, the zookeeper package is excluded on the assumption that hadoop-client-runtime.jar includes it. But the zookeeper package was not shaded anywhere, which was leading to this issue. I removed the zookeeper package from the exclude list as well. [~sunilg] [~vinodkv] kindly review this change. was (Author: rohithsharma): Updated the patch fixing MiniYARNcluster start issue. There are change I did # Timeline service jar was excluded in hadoop-client-minicluster jar. This patch includes timeline-service jar classes. # After above change, started getting NoClassDefFoundError error for zookeeper package. Looking to hadoop-client-minicluster.jar, zookeeper package is excluded assuming that hadoop-client-runtime.jar includes it. But zookeeper package was not shaded anywhere which leading this issue. I removed zookeeper package from exclude list as well. [~sunilg] [~vinodkv] kindly review this change. 
> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster > -- > > Key: HADOOP-15514 > URL: https://issues.apache.org/jira/browse/HADOOP-15514 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jeff Zhang >Assignee: Rohith Sharma K S >Priority: Major > Attachments: HADOOP-15514.01.patch > > > {code:java} > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: > java.lang.NoClassDefFoundError: > org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) > at java.net.URLClassLoader.access$100(URLClassLoader.java:73) > at java.net.URLClassLoader$1.run(URLClassLoader.java:368) > at java.net.URLClassLoader$1.run(URLClassLoader.java:362) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:361) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.getDeclaredMethods0(Native Method) > at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) > at java.lang.Class.getDeclaredMethods(Class.java:1975){code}
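[Editor's note] The zookeeper-exclusion fix described in the comment above amounts to a maven-shade-plugin filter change. A rough, hypothetical sketch (the elements are standard shade-plugin configuration, but the actual hadoop-client-minicluster pom is larger and differs):

```xml
<!-- Hypothetical sketch of a hadoop-client-minicluster pom.xml fragment -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <!-- Before the patch, zookeeper was excluded here on the assumption
             that hadoop-client-runtime.jar shaded it. Since it was not shaded
             anywhere, the exclusion is removed (commented out) so the
             zookeeper classes land in this jar. -->
        <!-- <exclude>org.apache.zookeeper:zookeeper</exclude> -->
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
```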
[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-15514: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-15514: --- Attachment: HADOOP-15514.01.patch
[jira] [Assigned] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reassigned HADOOP-15514: -- Assignee: Rohith Sharma K S
[jira] [Commented] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
[ https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501286#comment-16501286 ] Rohith Sharma K S commented on HADOOP-15514: It looks like the shaded client jar doesn't include the timeline-service jar.
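[Editor's note] The observation that the shaded jar is missing a class can be checked quickly from code. A hypothetical diagnostic (not part of any patch here): `Class.forName` with `initialize=false` reports whether a class is visible on the current classpath, which is exactly the condition behind the NoClassDefFoundError above.

```java
// Hypothetical diagnostic: check class visibility on the current classpath.
public class ClasspathCheck {
    static boolean isPresent(String className) {
        try {
            // Load without initializing, so static initializers cannot fail.
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Present in any JRE:
        System.out.println(isPresent("java.util.List"));
        // Missing when the shaded minicluster jar omits timeline-service:
        System.out.println(isPresent(
            "org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager"));
    }
}
```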
[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497521#comment-16497521 ] Rohith Sharma K S commented on HADOOP-15137: [~bharatviswa] [~zjffdu] Which branches are affected by this? To which branches does this need to be committed? [~zjffdu] could you please verify the fix once? > ClassNotFoundException: > org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using > hadoop-client-minicluster > -- > > Key: HADOOP-15137 > URL: https://issues.apache.org/jira/browse/HADOOP-15137 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jeff Zhang >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, > YARN-7673.00.patch > > > I'd like to use hadoop-client-minicluster for hadoop downstream project, but > I encounter the following exception when starting hadoop minicluster. And I > check the hadoop-client-minicluster, it indeed does not have this class. Is > this something that is missing when packaging the published jar ? 
> {code} > java.lang.NoClassDefFoundError: > org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) > at java.net.URLClassLoader.access$100(URLClassLoader.java:73) > at java.net.URLClassLoader$1.run(URLClassLoader.java:368) > at java.net.URLClassLoader$1.run(URLClassLoader.java:362) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:361) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851) > at > org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > {code}
[jira] [Commented] (HADOOP-15272) Update Guava, see what breaks
[ https://issues.apache.org/jira/browse/HADOOP-15272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381990#comment-16381990 ] Rohith Sharma K S commented on HADOOP-15272: This needs to be discussed/tested irrespective of the HBase-2 version that will be added in YARN-7346. The ATSv2 code compiles with HBase-1.2.6 by default even after YARN-7346, so this JIRA shouldn't depend on YARN-7346 specifically. > Update Guava, see what breaks > - > > Key: HADOOP-15272 > URL: https://issues.apache.org/jira/browse/HADOOP-15272 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Priority: Major > > We're still on Guava 11; the last attempt at an update (HADOOP-10101) failed > to take > The HBase 2 version of ATS should permit this, at least for its profile.
[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378116#comment-16378116 ] Rohith Sharma K S commented on HADOOP-15137: Thanks [~bharatviswa] for working on this JIRA. The patch, which removes the hadoop-yarn-server-common dependencies, looks reasonable to me.
[jira] [Commented] (HADOOP-13786) Add S3A committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16266376#comment-16266376 ] Rohith Sharma K S commented on HADOOP-13786: This patch generates java doc errors. See MAPREDUCE-7014. > Add S3A committer for zero-rename commits to S3 endpoints > - > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 3.1.0 > > Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, > HADOOP-13786-038.patch, HADOOP-13786-039.patch, > HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, > HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, > HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, > HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, > HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, > HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, > HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, > HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, > HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, > HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, > HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, > HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, > HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, > HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, > HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, > HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, > HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, > 
MAPREDUCE-6823-003.patch, cloud-intergration-test-failure.log, > objectstore.pdf, s3committer-master.zip > > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263739#comment-16263739 ] Rohith Sharma K S commented on HADOOP-15059: Update: Fortunately, I could not reproduce the issue I reported in my earlier comment. I was able to install Hadoop-3.0-RC0 + HBase-1.2.6 in secure mode and run it successfully today. I am not sure whether any fix after Hadoop-3.0-alpha2 resolved this issue. IIRC, the build I used to test this combination was Hadoop-3.0-alpha2/3 + HBase-1.2.4/5! Anyway, it's good news for the ATSv2 folks who were worried about this. I will keep trying to reproduce it this weekend as well; if I find any issues, I will update here. Until then, please ignore that issue. I would appreciate it if someone else could also validate the behavior. This gives additional confidence that wire compatibility across Hadoop-2 and Hadoop-3 is achieved! > 3.0 deployment cannot work with old version MR tar ball which break rolling > upgrade > --- > > Key: HADOOP-15059 > URL: https://issues.apache.org/jira/browse/HADOOP-15059 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Junping Du >Priority: Blocker > > I tried to deploy 3.0 cluster with 2.9 MR tar ball. The MR job is failed > because following error: > {noformat} > 2017-11-21 12:42:50,911 INFO [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for > application appattempt_1511295641738_0003_01 > 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: > Unable to load native-hadoop library for your platform... 
using builtin-java > classes where applicable > 2017-11-21 12:42:51,118 FATAL [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster > java.lang.RuntimeException: Unable to determine current user > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:220) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:212) > at > org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638) > Caused by: java.io.IOException: Exception reading > /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689) > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252) > ... 4 more > Caused by: java.io.IOException: Unknown version 1 in token storage. > at > org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226) > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205) > ... 8 more > 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting > with status 1: java.lang.RuntimeException: Unable to determine current user > {noformat} > I think it is due to token incompatiblity change between 2.9 and 3.0. 
As we > claim "rolling upgrade" is supported in Hadoop 3, we should fix this before > we ship 3.0 otherwise all MR running applications will get stuck during/after > upgrade.
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262023#comment-16262023 ] Rohith Sharma K S commented on HADOOP-15059: ATSv2 officially claims HBase-1.2.6 as its backend. It works _absolutely fine_ in non-secure mode, i.e. installing *Hadoop-3.0 + HBase-1.2.6*. But the same deployment in a secured cluster does not work, because HBase-1.2.6 cannot communicate with Hadoop-3.x due to a token proto mismatch. Basically, the HMaster daemon fails to start with an exception while connecting to Hadoop-3.x in a secure cluster! To simplify the problem: Hadoop-2.x clients (HBase-1.2.6 compiled against Hadoop-2.x) cannot communicate with a Hadoop-3.x cluster. Are we going to keep binary compatibility across Hadoop-2.x and Hadoop-3.x? A similar scenario can happen during a rolling upgrade as well, as reported in this JIRA. Btw, from the ATSv2 side we are planning to document this as a known issue until HBase releases 2.x. cc:/[~vrushalic] [~varun_saxena]
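[Editor's note] The "Unknown version 1 in token storage" failure in the log above comes down to a version byte at the head of the serialized credentials that older readers do not recognize. A minimal, hypothetical sketch of that pattern (the constants and payload are assumptions; this is not Hadoop's actual Credentials wire format):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a versioned-header check, illustrating why a reader that only
// knows version 0 rejects a stream written with version 1.
public class TokenStorageHeader {
    static final byte KNOWN_VERSION = 0;  // assumed: the only version old readers accept

    static byte[] write(byte version) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(version);          // version byte leads the stream
        out.writeUTF("token-data");      // placeholder payload
        return bos.toByteArray();
    }

    static String read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte version = in.readByte();
        if (version != KNOWN_VERSION) {
            // Mirrors the shape of the failure in the log above.
            throw new IOException("Unknown version " + version + " in token storage.");
        }
        return in.readUTF();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(read(write(KNOWN_VERSION)));  // round-trips fine
        try {
            read(write((byte) 1));                       // newer writer, older reader
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```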
[jira] [Commented] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121786#comment-16121786 ] Rohith Sharma K S commented on HADOOP-14728: It seems reasonable to me to throw AuthorizationException rather than continuing on it. I will cancel the patch! > Configuring AuthenticationFilterInitializer throws IllegalArgumentException: > Null user > -- > > Key: HADOOP-14728 > URL: https://issues.apache.org/jira/browse/HADOOP-14728 > Project: Hadoop Common > Issue Type: Bug >Reporter: Krishna Pandey > Attachments: HADOOP-14728.01.patch > > > Configured AuthenticationFilterInitializer and started a cluster. When > accessing YARN UI using doAs, encountering following error. > URL : http://localhost:25005/cluster??doAs=guest > {noformat} > org.apache.hadoop.security.authentication.util.SignerException: Invalid > signature > 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster > java.lang.IllegalArgumentException: Null user > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499) > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at 
org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) > at > org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61) > at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > {noformat}
[jira] [Updated] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-14728: --- Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112163#comment-16112163 ] Rohith Sharma K S commented on HADOOP-14728: HttpServletRequest#getRemoteUser can return null, so a null check is needed before creating the proxy UGI; otherwise UGI creation will throw an exception. > Configuring AuthenticationFilterInitializer throws IllegalArgumentException: > Null user > -- > > Key: HADOOP-14728 > URL: https://issues.apache.org/jira/browse/HADOOP-14728 > Project: Hadoop Common > Issue Type: Bug >Reporter: Krishna Pandey > Attachments: HADOOP-14728.01.patch > > > Configured AuthenticationFilterInitializer and started a cluster. When > accessing YARN UI using doAs, encountering following error. > URL : http://localhost:25005/cluster??doAs=guest > {noformat} > org.apache.hadoop.security.authentication.util.SignerException: Invalid > signature > 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster > java.lang.IllegalArgumentException: Null user > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499) > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at 
org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) > at > org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61) > at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
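The null check proposed in the comment above can be sketched roughly as follows. This is a minimal, self-contained illustration with hypothetical names (`RemoteUserGuard`, `resolveUser`), not the actual HADOOP-14728 patch; the real fix lives inside AuthenticationWithProxyUserFilter.

```java
// Minimal sketch of the suggested guard (hypothetical names; not the actual patch).
public class RemoteUserGuard {

    // Stand-in for HttpServletRequest#getRemoteUser, which may return null
    // when the request carries no authenticated principal.
    static String getRemoteUser(boolean authenticated) {
        return authenticated ? "guest" : null;
    }

    // Guarded resolution: bail out instead of passing null onward, since
    // UserGroupInformation.createRemoteUser(null) would throw
    // IllegalArgumentException("Null user").
    static String resolveUser(boolean authenticated) {
        String remoteUser = getRemoteUser(authenticated);
        if (remoteUser == null) {
            return null; // caller skips proxy-UGI creation entirely
        }
        // In the real filter, createRemoteUser(remoteUser) would run here.
        return remoteUser;
    }

    public static void main(String[] args) {
        System.out.println(resolveUser(true));   // guest
        System.out.println(resolveUser(false));  // null, no exception thrown
    }
}
```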
[jira] [Updated] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-14728: --- Status: Patch Available (was: Open) > Configuring AuthenticationFilterInitializer throws IllegalArgumentException: > Null user > -- > > Key: HADOOP-14728 > URL: https://issues.apache.org/jira/browse/HADOOP-14728 > Project: Hadoop Common > Issue Type: Bug >Reporter: Krishna Pandey > Attachments: HADOOP-14728.01.patch > > > Configured AuthenticationFilterInitializer and started a cluster. When > accessing YARN UI using doAs, encountering following error. > URL : http://localhost:25005/cluster??doAs=guest > {noformat} > org.apache.hadoop.security.authentication.util.SignerException: Invalid > signature > 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster > java.lang.IllegalArgumentException: Null user > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499) > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) > at > 
org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61) > at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-14728: --- Attachment: HADOOP-14728.01.patch > Configuring AuthenticationFilterInitializer throws IllegalArgumentException: > Null user > -- > > Key: HADOOP-14728 > URL: https://issues.apache.org/jira/browse/HADOOP-14728 > Project: Hadoop Common > Issue Type: Bug >Reporter: Krishna Pandey > Attachments: HADOOP-14728.01.patch > > > Configured AuthenticationFilterInitializer and started a cluster. When > accessing YARN UI using doAs, encountering following error. > URL : http://localhost:25005/cluster??doAs=guest > {noformat} > org.apache.hadoop.security.authentication.util.SignerException: Invalid > signature > 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster > java.lang.IllegalArgumentException: Null user > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499) > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) > at > 
org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61) > at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user
[ https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S moved YARN-6928 to HADOOP-14728: -- Key: HADOOP-14728 (was: YARN-6928) Project: Hadoop Common (was: Hadoop YARN) > Configuring AuthenticationFilterInitializer throws IllegalArgumentException: > Null user > -- > > Key: HADOOP-14728 > URL: https://issues.apache.org/jira/browse/HADOOP-14728 > Project: Hadoop Common > Issue Type: Bug >Reporter: Krishna Pandey > > Configured AuthenticationFilterInitializer and started a cluster. When > accessing YARN UI using doAs, encountering following error. > URL : http://localhost:25005/cluster??doAs=guest > {noformat} > org.apache.hadoop.security.authentication.util.SignerException: Invalid > signature > 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster > java.lang.IllegalArgumentException: Null user > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499) > at > org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82) > at > org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207) > at > org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) 
> at > org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61) > at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters
[ https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-14412: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.2 3.0.0-alpha3 2.9.0 Status: Resolved (was: Patch Available) committed to trunk/branch-2/branch-2.8. thanks Jason for the patch! > HostsFileReader#getHostDetails is very expensive on large clusters > -- > > Key: HADOOP-14412 > URL: https://issues.apache.org/jira/browse/HADOOP-14412 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.8.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.2 > > Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, > HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch, > HADOOP-14412-branch-2.002.patch, HADOOP-14412-branch-2.8.002.patch > > > After upgrading one of our large clusters to 2.8 we noticed many IPC server > threads of the resourcemanager spending time in NodesListManager#isValidNode > which in turn was calling HostsFileReader#getHostDetails. The latter is > creating complete copies of the include and exclude sets for every node > heartbeat, and these sets are not small due to the size of the cluster. > These copies are causing multiple resizes of the underlying HashSets being > filled and creating lots of garbage. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters
[ https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013429#comment-16013429 ] Rohith Sharma K S commented on HADOOP-14412: committing shortly > HostsFileReader#getHostDetails is very expensive on large clusters > -- > > Key: HADOOP-14412 > URL: https://issues.apache.org/jira/browse/HADOOP-14412 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.8.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, > HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch, > HADOOP-14412-branch-2.002.patch, HADOOP-14412-branch-2.8.002.patch > > > After upgrading one of our large clusters to 2.8 we noticed many IPC server > threads of the resourcemanager spending time in NodesListManager#isValidNode > which in turn was calling HostsFileReader#getHostDetails. The latter is > creating complete copies of the include and exclude sets for every node > heartbeat, and these sets are not small due to the size of the cluster. > These copies are causing multiple resizes of the underlying HashSets being > filled and creating lots of garbage. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters
[ https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011741#comment-16011741 ] Rohith Sharma K S commented on HADOOP-14412: I will commit the trunk patch later today if there are no more objections. Jenkins has not been triggered for the branch-2 v2 patch. The branch-2.8 patch looks good to me. > HostsFileReader#getHostDetails is very expensive on large clusters > -- > > Key: HADOOP-14412 > URL: https://issues.apache.org/jira/browse/HADOOP-14412 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.8.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, > HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch, > HADOOP-14412-branch-2.8.002.patch > > > After upgrading one of our large clusters to 2.8 we noticed many IPC server > threads of the resourcemanager spending time in NodesListManager#isValidNode > which in turn was calling HostsFileReader#getHostDetails. The latter is > creating complete copies of the include and exclude sets for every node > heartbeat, and these sets are not small due to the size of the cluster. > These copies are causing multiple resizes of the underlying HashSets being > filled and creating lots of garbage. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters
[ https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007915#comment-16007915 ] Rohith Sharma K S commented on HADOOP-14412: Thanks Jason for finding this issue. I am +1 for using AtomicReference and for the patch. It looks like a cleaner and better solution now. > HostsFileReader#getHostDetails is very expensive on large clusters > -- > > Key: HADOOP-14412 > URL: https://issues.apache.org/jira/browse/HADOOP-14412 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.8.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: HADOOP-14412.001.patch > > > After upgrading one of our large clusters to 2.8 we noticed many IPC server > threads of the resourcemanager spending time in NodesListManager#isValidNode > which in turn was calling HostsFileReader#getHostDetails. The latter is > creating complete copies of the include and exclude sets for every node > heartbeat, and these sets are not small due to the size of the cluster. > These copies are causing multiple resizes of the underlying HashSets being > filled and creating lots of garbage. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
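The AtomicReference approach endorsed above can be sketched as follows: the reader publishes one immutable snapshot of the include/exclude sets and swaps it atomically on refresh, so heartbeat handlers share the snapshot instead of copying the sets. This is a hand-drawn illustration with hypothetical names (`HostDetailsSnapshot`), assuming the general pattern; the committed HADOOP-14412 patch may differ in detail.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the snapshot pattern discussed above (hypothetical names).
public class HostDetailsSnapshot {

    // Immutable pair of host sets, published atomically as one unit.
    static final class HostDetails {
        final Set<String> includes;
        final Set<String> excludes;
        HostDetails(Set<String> inc, Set<String> exc) {
            this.includes = Collections.unmodifiableSet(inc);
            this.excludes = Collections.unmodifiableSet(exc);
        }
    }

    private final AtomicReference<HostDetails> current =
        new AtomicReference<>(new HostDetails(new HashSet<>(), new HashSet<>()));

    // Refresh builds the new sets off to the side, then swaps the reference once.
    void refresh(Set<String> inc, Set<String> exc) {
        current.set(new HostDetails(new HashSet<>(inc), new HashSet<>(exc)));
    }

    // Readers get the shared immutable snapshot: no per-call copying,
    // no HashSet resizes, no garbage on every node heartbeat.
    HostDetails getHostDetails() {
        return current.get();
    }

    public static void main(String[] args) {
        HostDetailsSnapshot reader = new HostDetailsSnapshot();
        reader.refresh(new HashSet<>(Set.of("host1", "host2")), new HashSet<>());
        System.out.println(reader.getHostDetails().includes.contains("host1")); // true
    }
}
```

A design note: because each `HostDetails` instance is immutable, readers never observe a half-updated pair of sets, which a pair of separately mutated fields could not guarantee without locking.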
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320163#comment-15320163 ] Rohith Sharma K S commented on HADOOP-13184: +1 for option 1 > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285931#comment-15285931 ] Rohith Sharma K S commented on HADOOP-12687: I closed the INFRA JIRA as won't-fix as per the discussion with Allen ([comment-link|https://issues.apache.org/jira/browse/YARN-4478?focusedCommentId=15257550=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15257550]) in YARN-4478. I believe this JIRA should get in. Given that the RFC standards are negotiable, can other folks express their opinion on the patch? > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G >Priority: Blocker > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285925#comment-15285925 ] Rohith Sharma K S commented on HADOOP-12687: Linking to YARN-4478, few useful discussion happened related to this issue. Discussion starts from this [comment|https://issues.apache.org/jira/browse/YARN-4478?focusedCommentId=15174874=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15174874] > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G >Priority: Blocker > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-12863) Too many connections opened to TimelineServer while publishing entities
Rohith Sharma K S created HADOOP-12863: -- Summary: Too many connections opened to TimelineServer while publishing entities Key: HADOOP-12863 URL: https://issues.apache.org/jira/browse/HADOOP-12863 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Priority: Critical It is observed that too many connections are kept open to the TimelineServer while publishing entities via SystemMetricsPublisher. This can sometimes cause a resource shortage for other processes or the RM itself. {noformat} tcp0 0 10.18.99.110:3999 10.18.214.60:59265 ESTABLISHED 115302/java tcp0 0 10.18.99.110:25001 :::*LISTEN 115302/java tcp0 0 10.18.99.110:25002 :::*LISTEN 115302/java tcp0 0 10.18.99.110:25003 :::*LISTEN 115302/java tcp0 0 10.18.99.110:25004 :::*LISTEN 115302/java tcp0 0 10.18.99.110:25005 :::*LISTEN 115302/java tcp1 0 10.18.99.110:48866 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:48137 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:47553 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:48424 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:48139 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:48096 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:47558 10.18.99.110:8188 CLOSE_WAIT 115302/java tcp1 0 10.18.99.110:49270 10.18.99.110:8188 CLOSE_WAIT 115302/java {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12757) Findbug compilation fails for 'Kafka Library support'
Rohith Sharma K S created HADOOP-12757: -- Summary: Findbug compilation fails for 'Kafka Library support' Key: HADOOP-12757 URL: https://issues.apache.org/jira/browse/HADOOP-12757 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Findbug compilation is failing for 'Kafka Library support' {noformat} [INFO] Apache Hadoop Amazon Web Services support .. SUCCESS [ 12.731 s] [INFO] Apache Hadoop Azure support SUCCESS [ 14.972 s] [INFO] Apache Hadoop Client ... SUCCESS [ 0.051 s] [INFO] Apache Hadoop Mini-Cluster . SUCCESS [ 0.045 s] [INFO] Apache Hadoop Scheduler Load Simulator . SUCCESS [ 15.146 s] [INFO] Apache Hadoop Tools Dist ... SUCCESS [ 0.045 s] [INFO] Apache Hadoop Kafka Library support FAILURE [ 0.263 s] [INFO] Apache Hadoop Tools SKIPPED [INFO] Apache Hadoop Distribution . SKIPPED [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 21:40 min [INFO] Finished at: 2016-02-02T10:16:57+05:30 [INFO] Final Memory: 159M/941M [INFO] [ERROR] Could not find resource '/home/root1/workspace/hadoop-trunk/hadoop-tools/hadoop-kafka/dev-support/findbugs-exclude.xml'. -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ResourceNotFoundException {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reopened HADOOP-12687: Reverted the issue commit, and reopening the issue. > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088944#comment-15088944 ] Rohith Sharma K S commented on HADOOP-12687: All the VMs should contain a "." at the end of the hostname in the /etc/hosts file. I verified the test cases by adding the dot ".", and all tests pass. I think we need to raise an INFRA JIRA for changing the hostname on the VMs. > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12687: --- Fix Version/s: (was: 2.9.0) > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, in case multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12687: --- Hadoop Flags: (was: Reviewed) > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12687) Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085438#comment-15085438 ] Rohith Sharma K S commented on HADOOP-12687: Hi [~sunilg], could you make a small update to the patch? # Instead of catching UnknownHostException, move {{addr = InetAddress.getByName(host);}} down as shown below, so the UnknownHostException need not be caught, and add a comment there. {code} addr = getByNameWithSearch(host); if (addr == null) { addr = getByExactName(host); if (addr == null) { // comment addr = InetAddress.getByName(host); } } {code} # Not related to the patch, but the summary of this JIRA should be changed to reflect the actual code change in Hadoop Common. > Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient > > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
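The fallback order proposed in the comment above can be sketched as a self-contained class. This is only an illustration of the control flow, not the actual Hadoop patch: {{getByNameWithSearch}} and {{getByExactName}} are the helper names used in the snippet and are stubbed out here to always miss, so the direct {{InetAddress.getByName(host)}} lookup is exercised as the last resort.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of the three-stage fallback from the review comment: try a
// search-path lookup, then an exact-name lookup, and only fall back to
// InetAddress.getByName(host) when both return null. The two helpers
// are hypothetical stubs standing in for the patch's real methods.
public class ResolveSketch {

    static InetAddress getByNameWithSearch(String host) {
        return null; // stub: pretend the search-path lookup found nothing
    }

    static InetAddress getByExactName(String host) {
        return null; // stub: pretend the exact-name lookup found nothing
    }

    static InetAddress resolve(String host) throws UnknownHostException {
        InetAddress addr = getByNameWithSearch(host);
        if (addr == null) {
            addr = getByExactName(host);
            if (addr == null) {
                // Last resort: let the JDK resolver handle the name directly,
                // which covers hosts listed only in /etc/hosts.
                addr = InetAddress.getByName(host);
            }
        }
        return addr;
    }

    public static void main(String[] args) throws Exception {
        InetAddress addr = resolve("localhost");
        System.out.println("resolved=" + (addr != null));
    }
}
```

With this ordering the direct lookup is attempted unconditionally as the final step, so no UnknownHostException from the earlier stages needs to be caught and rethrown.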
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12687: --- Status: Patch Available (was: Open) > SecureUtil#getByName should also try to resolve direct hostname incase > multiple loopback addresses are present in etc/hosts > --- > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12687: --- Status: Open (was: Patch Available) > SecureUtil#getByName should also try to resolve direct hostname incase > multiple loopback addresses are present in etc/hosts > --- > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086731#comment-15086731 ] Rohith Sharma K S commented on HADOOP-12687: Cancelled the patch and resubmitted it to trigger Jenkins > SecureUtil#getByName should also try to resolve direct hostname incase > multiple loopback addresses are present in etc/hosts > --- > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086808#comment-15086808 ] Rohith Sharma K S commented on HADOOP-12687: committing shortly > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12687: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) committed to trunk/branch-2. thanks [~sunilg] for the patch:-) and thanks [~vinayrpet] for the review > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G > Labels: security > Fix For: 2.9.0 > > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HADOOP-12313) Some tests in TestRMAdminService fails with NPE
[ https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S moved YARN-4035 to HADOOP-12313: -- Affects Version/s: (was: 2.8.0) Target Version/s: 2.8.0 (was: 2.8.0) Key: HADOOP-12313 (was: YARN-4035) Project: Hadoop Common (was: Hadoop YARN) Some tests in TestRMAdminService fails with NPE Key: HADOOP-12313 URL: https://issues.apache.org/jira/browse/HADOOP-12313 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Assignee: Gabor Liptak Attachments: YARN-4035.1.patch It is observed that after YARN-4019 some tests are failing in TestRMAdminService with null pointer exceptions in build [build failure |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt] {noformat} unning org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.132 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824) testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.121 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-12313) Some tests in TestRMAdminService fails with NPE
[ https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reassigned HADOOP-12313: -- Assignee: Rohith Sharma K S (was: Gabor Liptak) Some tests in TestRMAdminService fails with NPE Key: HADOOP-12313 URL: https://issues.apache.org/jira/browse/HADOOP-12313 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Assignee: Rohith Sharma K S Attachments: YARN-4035.1.patch It is observed that after YARN-4019 some tests are failing in TestRMAdminService with null pointer exceptions in build [build failure |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt] {noformat} unning org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.132 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824) testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.121 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()
[ https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated HADOOP-12313: --- Summary: Possible NPE in JvmPauseMonitor.stop() (was: Some tests in TestRMAdminService fails with NPE ) Possible NPE in JvmPauseMonitor.stop() -- Key: HADOOP-12313 URL: https://issues.apache.org/jira/browse/HADOOP-12313 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Assignee: Rohith Sharma K S Attachments: YARN-4035.1.patch It is observed that after YARN-4019 some tests are failing in TestRMAdminService with null pointer exceptions in build [build failure |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt] {noformat} unning org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.132 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824) testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.121 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
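The stack traces above show the NPE originating in {{JvmPauseMonitor.stop()}} when the monitor's thread was never started. A minimal sketch of the kind of null-guard that prevents this follows; the class and field names here are illustrative and not the exact Hadoop source, which should be consulted for the real fix.

```java
// Sketch of a stop() made safe against the NPE in the stack traces above:
// the monitor thread is only interrupted/joined when start() actually ran.
// Names (PauseMonitorSketch, monitorThread) are illustrative, not Hadoop's.
public class PauseMonitorSketch {
    private Thread monitorThread; // remains null until start() is called

    public synchronized void start() {
        monitorThread = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // stand-in for the pause-check loop
            } catch (InterruptedException ignored) {
                // interrupt is the shutdown signal; just exit
            }
        });
        monitorThread.setDaemon(true);
        monitorThread.start();
    }

    public synchronized void stop() {
        if (monitorThread != null) { // guard: stop() before start() is a no-op
            monitorThread.interrupt();
            try {
                monitorThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            monitorThread = null;
        }
    }

    public static void main(String[] args) {
        PauseMonitorSketch m = new PauseMonitorSketch();
        m.stop();          // stop() before start(): no NPE thanks to the guard
        m.start();
        m.stop();
        System.out.println("stopped cleanly");
    }
}
```

This matches the failure mode in TestRMAdminService, where RMActiveServices is stopped on a ResourceManager whose active services never fully started, so stop() runs against an unstarted monitor.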
[jira] [Assigned] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()
[ https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reassigned HADOOP-12313: -- Assignee: Gabor Liptak (was: Rohith Sharma K S) Assigned to me by mistake, assigned back to [~gliptak] Possible NPE in JvmPauseMonitor.stop() -- Key: HADOOP-12313 URL: https://issues.apache.org/jira/browse/HADOOP-12313 Project: Hadoop Common Issue Type: Bug Reporter: Rohith Sharma K S Assignee: Gabor Liptak Attachments: YARN-4035.1.patch It is observed that after YARN-4019 some tests are failing in TestRMAdminService with null pointer exceptions in build [build failure |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt] {noformat} unning org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.132 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824) testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService) Time elapsed: 0.121 sec ERROR! 
java.lang.NullPointerException: null at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.AbstractService.close(AbstractService.java:250) at org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-9654) IPC timeout doesn't seem to be kicking in
[ https://issues.apache.org/jira/browse/HADOOP-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681262#comment-14681262 ] Rohith Sharma K S commented on HADOOP-9654: --- Is it same as HADOOP-11252? IPC timeout doesn't seem to be kicking in - Key: HADOOP-9654 URL: https://issues.apache.org/jira/browse/HADOOP-9654 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 2.1.0-beta Reporter: Roman Shaposhnik Assignee: Ajith S During my Bigtop testing I made the NN OOM. This, in turn, made all of the clients stuck in the IPC call (even the new clients that I run *after* the NN went OOM). Here's an example of a jstack output on the client that was running: {noformat} $ hadoop fs -lsr / {noformat} Stacktrace: {noformat} /usr/java/jdk1.6.0_21/bin/jstack 19078 2013-06-19 23:14:00 Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode): Attach Listener daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on condition [0x] java.lang.Thread.State: RUNNABLE IPC Client (1223039541) connection to ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root daemon prio=10 tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000] java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69) - locked 0x7fcd7529de18 (a sun.nio.ch.Util$1) - locked 0x7fcd7529de00 (a java.util.Collections$UnmodifiableSet) - locked 0x7fcd7529da80 (a sun.nio.ch.EPollSelectorImpl) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.FilterInputStream.read(FilterInputStream.java:116) at java.io.FilterInputStream.read(FilterInputStream.java:116) at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421) at java.io.BufferedInputStream.fill(BufferedInputStream.java:218) at java.io.BufferedInputStream.read(BufferedInputStream.java:237) - locked 0x7fcd752aaf18 (a java.io.BufferedInputStream) at java.io.DataInputStream.readInt(DataInputStream.java:370) at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943) at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840) Low Memory Detector daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b runnable [0x] java.lang.Thread.State: RUNNABLE CompilerThread1 daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on condition [0x] java.lang.Thread.State: RUNNABLE CompilerThread0 daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on condition [0x] java.lang.Thread.State: RUNNABLE Signal Dispatcher daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable [0x] java.lang.Thread.State: RUNNABLE Finalizer daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() [0x7fcd902e9000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) - locked 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159) Reference Handler daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in Object.wait() [0x7fcd903ea000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - 
locked 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock) main prio=10 tid=0x7fcd8c00a800 nid=0x4a92 in Object.wait() [0x7fcd91b06000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on 0x7fcd752528e8 (a org.apache.hadoop.ipc.Client$Call) at java.lang.Object.wait(Object.java:485)