[jira] [Commented] (HADOOP-14932) Move Mockito up to 1.10.19 to be compatible with HBase
[ https://issues.apache.org/jira/browse/HADOOP-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194163#comment-16194163 ] Akira Ajisaka commented on HADOOP-14932: Thanks. Set the priority to minor. > Move Mockito up to 1.10.19 to be compatible with HBase > -- > > Key: HADOOP-14932 > URL: https://issues.apache.org/jira/browse/HADOOP-14932 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Reporter: Akira Ajisaka >Priority: Minor > > HADOOP-14178 will upgrade Mockito up to 2.x, however, the progress is slow > and probably this won't be in Hadoop 3.0 GA. Apache HBase community wants > Hadoop to upgrade Mockito up to 1.10.19 to compile HBase test code with > Hadoop 3.0 successfully if we cannot upgrade Mockito to 2.x. > Thanks [~tedyu] for the > [report|https://issues.apache.org/jira/browse/HADOOP-14178?focusedCommentId=16193128=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16193128]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14932) Move Mockito up to 1.10.19 to be compatible with HBase
[ https://issues.apache.org/jira/browse/HADOOP-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14932: --- Priority: Minor (was: Critical) > Move Mockito up to 1.10.19 to be compatible with HBase > -- > > Key: HADOOP-14932 > URL: https://issues.apache.org/jira/browse/HADOOP-14932 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Reporter: Akira Ajisaka >Priority: Minor > > HADOOP-14178 will upgrade Mockito up to 2.x, however, the progress is slow > and probably this won't be in Hadoop 3.0 GA. Apache HBase community wants > Hadoop to upgrade Mockito up to 1.10.19 to compile HBase test code with > Hadoop 3.0 successfully if we cannot upgrade Mockito to 2.x. > Thanks [~tedyu] for the > [report|https://issues.apache.org/jira/browse/HADOOP-14178?focusedCommentId=16193128=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16193128]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14934) Remove Warnings when building ozone
Bharat Viswanadham created HADOOP-14934: --- Summary: Remove Warnings when building ozone Key: HADOOP-14934 URL: https://issues.apache.org/jira/browse/HADOOP-14934 Project: Hadoop Common Issue Type: Bug Reporter: Bharat Viswanadham [WARNING] [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ org.apache.hadoop:hadoop-ozone:[unknown-version], /Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, column 15 [WARNING] [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15 [WARNING] [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build. [WARNING] [WARNING] For this reason, future Maven versions might no longer support building such malformed projects. [WARNING] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
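These warnings are Maven's standard complaint about plugins declared without an explicit {{<version>}}. A hypothetical pom.xml fragment showing the usual fix for the first warning (the version number below is only a placeholder; the real value should come from the pluginManagement section of the Hadoop parent pom):

```xml
<!-- Sketch only: pin an explicit version for the plugin flagged by the
     'build.plugins.plugin.version' warning. The version shown is
     illustrative; use the version managed by the Hadoop parent pom. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
      <version>2.9</version> <!-- illustrative only -->
    </plugin>
  </plugins>
</build>
```

The same applies to the maven-gpg-plugin warning in hadoop-dist; pinning both versions silences the "effective model" warnings.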
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194065#comment-16194065 ] Hudson commented on HADOOP-14521: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13039 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13039/]) HADOOP-14521. KMS client needs retry logic. Contributed by Rushabh S (xiao: rev 25f31d9fc47d21ac2f3afd7042e2ced1b849da39) * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestLoadBalancingKMSClientProvider.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 2.8.3, 3.0.0 > > Attachments: HADOOP-14521.09.patch, HADOOP-14521.11.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. 
This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail
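The fix in the patch wires KMS calls into Hadoop's retry machinery; as a rough, self-contained sketch of the behavior being asked for (this is NOT the Hadoop RetryPolicy API, just an illustration of bounded retry with exponential backoff instead of failing on the first IOException):

```java
import java.util.concurrent.Callable;

/**
 * Minimal sketch of a bounded retry-with-backoff loop. Transient failures
 * (e.g. the KMS restarting during an upgrade) are retried up to
 * maxAttempts times, sleeping longer between each attempt.
 */
public class RetrySketch {
    public static <T> T callWithRetries(Callable<T> op, int maxAttempts,
                                        long baseSleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // Exponential backoff: base, 2*base, 4*base, ...
                    Thread.sleep(baseSleepMillis << (attempt - 1));
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        // Simulated KMS call that fails twice, then succeeds.
        final int[] calls = {0};
        String result = callWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new java.io.IOException("KMS unreachable");
            }
            return "decrypted-EDEK";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The real implementation additionally distinguishes retriable from non-retriable exceptions and, for LoadBalancingKMSClientProvider, fails over between KMS instances rather than just sleeping.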
[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
[ https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194064#comment-16194064 ] Xiao Chen commented on HADOOP-14920: httpfs tests from HDFS-12600 passed, and the failures there for hdfs look unrelated. Can we add coverage in {{TestDelegationTokenAuthenticationHandlerWithMocks}}, to test renew and cancel of tokens that were obtained with a service? +1 after that's done > KMSClientProvider won't work with KMS delegation token retrieved from > non-Java client. > -- > > Key: HADOOP-14920 > URL: https://issues.apache.org/jira/browse/HADOOP-14920 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-14920.001.patch, HADOOP-14920.002.patch > > > HADOOP-13381 added support to use a KMS delegation token to connect to the KMS > server for key operations. However, the logic that checks whether the UGI contains a > KMS delegation token assumes that the token must contain a service attribute. > Otherwise, a KMS delegation token won't be recognized. > For a delegation token obtained via a non-Java client such as curl (HTTP), the > default DelegationTokenAuthenticationHandler only supports the *renewer* parameter > and assumes the client itself will add the service attribute. As a result, a Java > client using KMSClientProvider cannot use a KMS delegation token retrieved from a > non-Java client, because the token does not contain a service attribute. > I did some investigation on this and found two solutions: > 1. A similar use case exists for webhdfs, and webhdfs supports it with a > ["service" > parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token]. > We can do the same by allowing the client to specify a service attribute in > the request URL and including it in the token returned, as webhdfs does. Even though > this changes DelegationTokenAuthenticationHandler and may affect many > other web components, it seems to be a clean and low-risk solution because > it will be an optional parameter. Also, other components get non-Java client > interop support for free if they have a similar use case. > 2. The other way to solve this is to relax the token check in > KMSClientProvider to check only the token kind instead of the service. This > is an easy workaround but seems less optimal to me. > cc: [~xiaochen] for additional input.
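Solution 1 amounts to letting the HTTP client pass the intended service in the token-request URL, the way WebHDFS does. A hypothetical sketch of building such a request URL; the {{service}} parameter and the {{op}} value below are assumptions made for illustration, not the existing KMS REST API:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

/**
 * Hypothetical sketch of solution 1: a delegation-token request URL that
 * carries an optional "service" parameter, as WebHDFS does. The "service"
 * parameter does not exist in the KMS REST API today; it is the proposal.
 */
public class TokenUrlSketch {
    static String getDelegationTokenUrl(String kmsBase, String renewer,
            String service) throws UnsupportedEncodingException {
        StringBuilder url = new StringBuilder(kmsBase)
            .append("?op=GETDELEGATIONTOKEN")   // op name is illustrative
            .append("&renewer=").append(URLEncoder.encode(renewer, "UTF-8"));
        if (service != null) {
            // Optional parameter, so existing clients are unaffected --
            // the "low risk" part of the proposal.
            url.append("&service=").append(URLEncoder.encode(service, "UTF-8"));
        }
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getDelegationTokenUrl(
            "http://kms:9600/kms/v1/", "yarn", "kms://http@kms:9600/kms"));
    }
}
```

The server would then set the requested service on the token it returns, so that KMSClientProvider's UGI lookup finds it.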
[jira] [Updated] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14521: --- Resolution: Fixed Hadoop Flags: Reviewed (was: Incompatible change,Reviewed) Fix Version/s: 3.0.0 2.8.3 2.9.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.0, branch-2, branch-2.8. There was a trivial conflict backporting to branch-2.8 (log message), compiled before committing. Thanks Rushabh and Andrew! Resolving this jira and removed the incompatible flag. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 2.8.3, 3.0.0 > > Attachments: HADOOP-14521.09.patch, HADOOP-14521.11.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. 
> # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail
[jira] [Work started] (HADOOP-14929) Cleanup usage of decodecomponent and use QueryStringDecoder from netty
[ https://issues.apache.org/jira/browse/HADOOP-14929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-14929 started by Bharat Viswanadham. --- > Cleanup usage of decodecomponent and use QueryStringDecoder from netty > -- > > Key: HADOOP-14929 > URL: https://issues.apache.org/jira/browse/HADOOP-14929 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > This is from the review of HADOOP-14910. > There is another usage of decodeComponent, > in ParameterParser.java Line 147-148: > String cf = decodeComponent(param(CreateFlagParam.NAME), > StandardCharsets.UTF_8); > Use QueryStringDecoder from Netty here too and clean up decodeComponent, > which was only added to work around a Netty issue.
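Netty's QueryStringDecoder parses the whole query string and percent-decodes each component in one pass, which is what makes the hand-rolled decodeComponent redundant. As a rough stdlib illustration of the decoding involved (this is not the Netty API; the actual change would use io.netty.handler.codec.http.QueryStringDecoder):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

/**
 * Rough stdlib illustration of what decoding a query-string component does.
 * Note that query semantics also map '+' to a space, which
 * java.net.URLDecoder handles.
 */
public class DecodeSketch {
    static String decodeComponent(String raw) throws UnsupportedEncodingException {
        return raw == null ? null : URLDecoder.decode(raw, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // %2C is a percent-encoded comma.
        System.out.println(decodeComponent("CREATE%2COVERWRITE")); // CREATE,OVERWRITE
    }
}
```

With QueryStringDecoder the whole query is decoded once up front, so ParameterParser can read already-decoded values instead of decoding each parameter itself.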
[jira] [Comment Edited] (HADOOP-14929) Cleanup usage of decodecomponent and use QueryStringDecoder from netty
[ https://issues.apache.org/jira/browse/HADOOP-14929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194013#comment-16194013 ] Bharat Viswanadham edited comment on HADOOP-14929 at 10/6/17 1:19 AM: -- will upload a patch once HADOOP-14910 gets committed. was (Author: bharatviswa): will upload a patch once HADOOP-14910 gets committed. > Cleanup usage of decodecomponent and use QueryStringDecoder from netty > -- > > Key: HADOOP-14929 > URL: https://issues.apache.org/jira/browse/HADOOP-14929 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > This is from the review of HADOOP-14910. > There is another usage of decodeComponent, > in ParameterParser.java Line 147-148: > String cf = decodeComponent(param(CreateFlagParam.NAME), > StandardCharsets.UTF_8); > Use QueryStringDecoder from Netty here too and clean up decodeComponent, > which was only added to work around a Netty issue.
[jira] [Commented] (HADOOP-14929) Cleanup usage of decodecomponent and use QueryStringDecoder from netty
[ https://issues.apache.org/jira/browse/HADOOP-14929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194013#comment-16194013 ] Bharat Viswanadham commented on HADOOP-14929: - will upload a patch once HADOOP-14910 gets committed. > Cleanup usage of decodecomponent and use QueryStringDecoder from netty > -- > > Key: HADOOP-14929 > URL: https://issues.apache.org/jira/browse/HADOOP-14929 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > This is from the review of HADOOP-14910. > There is another usage of decodeComponent, > in ParameterParser.java Line 147-148: > String cf = decodeComponent(param(CreateFlagParam.NAME), > StandardCharsets.UTF_8); > Use QueryStringDecoder from Netty here too and clean up decodeComponent, > which was only added to work around a Netty issue.
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194011#comment-16194011 ] Robert Kanter commented on HADOOP-14930: Jetty moved a number of things, some that we're using and are updated in the patch, from {{SessionManager}} to {{SessionHandler}}. This means that upgrading Hadoop from Jetty 9.3.x to 9.4.x is going to also force _all_ downstream projects to also upgrade to Jetty 9.4.x. Are we okay with that? > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > Attachments: HADOOP-14930.00.patch > > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193999#comment-16193999 ] Andrew Wang commented on HADOOP-14930: -- How safe is this? [~jzhuge] / [~rkanter] any thoughts as recent Jetty experts? > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > Attachments: HADOOP-14930.00.patch > > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version -- This message was sent by Atlassian 
JIRA (v6.4.14#64029)
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193956#comment-16193956 ] Andrew Wang commented on HADOOP-14521: -- +1 based on the diff, thanks Xiao and Rushabh! > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HADOOP-14521.09.patch, HADOOP-14521.11.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HADOOP-13835: Resolution: Fixed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch, > HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193952#comment-16193952 ] Wangda Tan commented on HADOOP-13835: - Pushed to branch-2, thanks [~vvasudev] and review from [~ajisakaa]! > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch, > HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash and linkFallback for ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193924#comment-16193924 ] Manoj Govindassamy commented on HADOOP-13055: - Test failures are not related to the patch. > Implement linkMergeSlash and linkFallback for ViewFileSystem > > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Affects Versions: 2.7.5 >Reporter: Zhe Zhang >Assignee: Manoj Govindassamy > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, > HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, > HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, > HADOOP-13055.08.patch, HADOOP-13055.09.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193869#comment-16193869 ] Hadoop QA commented on HADOOP-14930: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 29s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14930 | | JIRA Patch URL |
[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
[ https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193783#comment-16193783 ] Xiao Chen commented on HADOOP-14920:
Aha, I see you already did HDFS-12600; let's wait for that then. Thanks Xiaoyu.
> KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
> Key: HADOOP-14920
> URL: https://issues.apache.org/jira/browse/HADOOP-14920
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Attachments: HADOOP-14920.001.patch, HADOOP-14920.002.patch
>
> HADOOP-13381 added support for using a KMS delegation token to connect to the KMS server for key operations. However, the logic that checks whether the UGI contains a KMS delegation token assumes that the token must carry a service attribute; otherwise, the KMS delegation token is not recognized.
> For delegation tokens obtained via a non-Java client such as curl (HTTP), the default DelegationTokenAuthenticationHandler only supports the *renewer* parameter and assumes the client itself will add the service attribute. As a result, a Java client using KMSClientProvider cannot use a KMS delegation token retrieved from a non-Java client, because the token does not contain a service attribute.
> I did some investigation on this and found two solutions:
> 1. A similar use case exists for webhdfs, which supports it with a ["service" parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token]. We could do the same by allowing the client to specify a service attribute in the request URL and including it in the returned token, as webhdfs does. Even though this changes DelegationTokenAuthenticationHandler and may affect many other web components, it seems a clean and low-risk solution because the parameter would be optional. Also, other components with a similar use case get non-Java client interop support for free.
> 2. The other way to solve this is to relax the token check in KMSClientProvider to check only the token kind instead of the service. This is an easy workaround but seems less optimal to me.
> cc: [~xiaochen] for additional input.
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
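The difference between the two options in the description can be sketched abstractly. The following is a minimal illustration in plain Java, not the actual Hadoop {{Token}}/credentials API; the class, method names, and the "kms-dt" kind string are assumptions for the sketch. It shows why a kind-only check (option 2) recognizes a token whose service attribute was never filled in, while a service-based check does not:

```java
import java.util.List;
import java.util.Optional;

public class KmsTokenSelection {
    // Hypothetical stand-in for a Hadoop delegation token: the real
    // Token class carries kind and service as Text fields.
    static final class Token {
        final String kind;
        final String service;
        Token(String kind, String service) { this.kind = kind; this.service = service; }
    }

    // Sketch of the current behaviour: match on the service attribute,
    // which fails for a token whose service was never set.
    static Optional<Token> selectByService(List<Token> creds, String service) {
        return creds.stream().filter(t -> service.equals(t.service)).findFirst();
    }

    // Sketch of option 2: match on the token kind only, so a token
    // fetched over plain HTTP (empty service) is still recognized.
    static Optional<Token> selectByKind(List<Token> creds, String kind) {
        return creds.stream().filter(t -> kind.equals(t.kind)).findFirst();
    }

    public static void main(String[] args) {
        // A KMS token fetched via curl: kind is set, service is empty.
        List<Token> creds = List.of(new Token("kms-dt", ""));
        System.out.println("matched by service: " + selectByService(creds, "kms://host:9600").isPresent());
        System.out.println("matched by kind:    " + selectByKind(creds, "kms-dt").isPresent());
    }
}
```

This only illustrates the matching logic; the real KMSClientProvider check also has to consider renewal and multiple credentials.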
[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
[ https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193779#comment-16193779 ] Xiao Chen commented on HADOOP-14920:
bq. How do we do that?
Hacky but I think a dummy change in hadoop-hdfs would trick pre-commit for us. No need to move jira :)
[jira] [Commented] (HADOOP-14910) Upgrade netty-all jar to latest 4.0.x.Final
[ https://issues.apache.org/jira/browse/HADOOP-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193762#comment-16193762 ] Jitendra Nath Pandey commented on HADOOP-14910:
I will commit this tomorrow, if there are no objections.
> Upgrade netty-all jar to latest 4.0.x.Final
> Key: HADOOP-14910
> URL: https://issues.apache.org/jira/browse/HADOOP-14910
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Vinayakumar B
> Assignee: Vinayakumar B
> Priority: Critical
> Attachments: HADOOP-14910-01.patch, HADOOP-14910-02.patch
>
> Upgrade the netty-all jar to version 4.0.37.Final to fix the latest reported vulnerabilities.
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193723#comment-16193723 ] Bharat Viswanadham commented on HADOOP-14930:
Thank you [~te...@apache.org] for the review.
> Upgrade Jetty to 9.4 version
> Key: HADOOP-14930
> URL: https://issues.apache.org/jira/browse/HADOOP-14930
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Ted Yu
> Assignee: Bharat Viswanadham
> Attachments: HADOOP-14930.00.patch
>
> Currently 9.3.19.v20170502 is used. In hbase 2.0+, 9.4.6.v20170531 is used.
> When starting a mini dfs cluster in hbase unit tests, we get the following:
> {code}
> java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
>   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:529)
>   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
>   at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:949)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:928)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
> {code}
> This issue is to upgrade Jetty to the 9.4 version.
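The {{NoSuchMethodError}} in the stack trace above is the classic symptom of compiling against one library version and running against another: Jetty 9.4 removed {{SessionHandler.getSessionManager()}} together with the {{SessionManager}} type, so code built against 9.3 fails only at runtime. As a minimal, generic sketch (plain reflection, not Hadoop or Jetty code; {{java.lang.String}} stands in because Jetty is not assumed on the classpath), the presence of a method can be probed like this:

```java
// Generic probe: checks whether a class exposes a public zero-argument
// method with the given name. A mismatch like the Jetty 9.3/9.4 one is
// invisible at compile time and surfaces at runtime as NoSuchMethodError.
public class MethodProbe {
    static boolean hasMethod(Class<?> cls, String name) {
        try {
            cls.getMethod(name); // throws if no such public method exists
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in demonstration: String has length() but, of course,
        // no getSessionManager().
        System.out.println("length present: " + hasMethod(String.class, "length"));
        System.out.println("getSessionManager present: " + hasMethod(String.class, "getSessionManager"));
    }
}
```

Nothing here is Hadoop code; it only illustrates why the failure appears when the mini dfs cluster starts rather than when HBase is compiled.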
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193694#comment-16193694 ] Ted Yu commented on HADOOP-14930:
lgtm
[jira] [Comment Edited] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193664#comment-16193664 ] Bharat Viswanadham edited comment on HADOOP-14930 at 10/5/17 9:01 PM:
[~te...@apache.org] Uploaded the patch. Followed the guide to upgrade from 9.3 to 9.4: http://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html
was (Author: bharatviswa): [~te...@apache.org] Uploaded the patch.
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193664#comment-16193664 ] Bharat Viswanadham commented on HADOOP-14930:
[~te...@apache.org] Uploaded the patch.
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14930:
Status: Patch Available (was: In Progress)
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14930:
Attachment: HADOOP-14930.00.patch
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14930:
Attachment: HDFS-12553.06.patch
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14930:
Attachment: (was: HDFS-12553.06.patch)
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193615#comment-16193615 ] Sean Busbey commented on HADOOP-14178:
{quote}Ted Yu Can HBase use hadoop shaded jars to avoid these kinds of issues?{quote}
Maybe in the future? Right now HBase's dependency on Hadoop is messy for a few reasons:
# We have to keep working on top of both Hadoop 2.x and Hadoop 3.x. We mostly have this abstracted.
# Parts of HBase make use of Hadoop internals, so we can't currently move the Hadoop 3 build onto the client artifacts alone.
# The parts of HBase that use Hadoop internals are currently mixed up with parts that are proper downstream consumers, so we can't even, e.g., isolate the problem parts and then avoid mockito there.
> Move Mockito up to version 2.x
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: test
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Akira Ajisaka
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 since the switch to maven in 2011.
> Mockito is now at version 2.1, [with lots of Java 8 support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. That's not just defining actions as closures, but also support for Optional types, mocking methods in interfaces, etc.
> It's only used for testing, and, *provided there aren't regressions*, the cost of upgrade is low. The good news: test tools usually come with good test coverage. The bad: mockito does go deep into java bytecodes.
[jira] [Created] (HADOOP-14933) CLONE - KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
Xiaoyu Yao created HADOOP-14933:
Summary: CLONE - KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
Key: HADOOP-14933
URL: https://issues.apache.org/jira/browse/HADOOP-14933
Project: Hadoop Common
Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193472#comment-16193472 ] Ted Yu commented on HADOOP-14178:
Logged HDFS-12599
[jira] [Commented] (HADOOP-14932) Move Mockito up to 1.10.19 to be compatible with HBase
[ https://issues.apache.org/jira/browse/HADOOP-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193465#comment-16193465 ] Steve Loughran commented on HADOOP-14932:
I think this isn't critical; it's just that some mockito references in {{DataNodeTestUtils}} are stopping the minidfs cluster from working unless the right version of mockito is on the CP. The fix here is not to upgrade the version; it's to remove that transitive dependency by moving the test util method into another class.
> Move Mockito up to 1.10.19 to be compatible with HBase
> Key: HADOOP-14932
> URL: https://issues.apache.org/jira/browse/HADOOP-14932
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: test
> Reporter: Akira Ajisaka
> Priority: Critical
>
> HADOOP-14178 will upgrade Mockito to 2.x; however, progress is slow and it probably won't make Hadoop 3.0 GA. The Apache HBase community wants Hadoop to upgrade Mockito to 1.10.19, so that HBase test code compiles successfully against Hadoop 3.0, if we cannot upgrade Mockito to 2.x.
> Thanks [~tedyu] for the [report|https://issues.apache.org/jira/browse/HADOOP-14178?focusedCommentId=16193128=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16193128].
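Until such a transitive dependency is removed on the Hadoop side, one hedged downstream workaround (this is an assumption about how a consumer such as HBase might respond, not an endorsed fix; the coordinates shown are the standard Mockito 1.x ones) is for the consumer's own pom to pin the Mockito version that Hadoop's test utilities can link against:

```xml
<!-- Hypothetical downstream pom fragment: pin mockito 1.10.19 in test
     scope so that DataNodeTestUtils resolves against a version that has
     the expected matcher interfaces. Remove once the mockito reference
     is moved out of the minidfs cluster path. -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.10.19</version>
  <scope>test</scope>
</dependency>
```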
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193458#comment-16193458 ] Steve Loughran commented on HADOOP-14178:
Ted, the problem you have is that HDFS-11164 restored the dependency MiniDFS cluster had on mockito, which HDFS-9226 cut away. If you can move the new method {{DataNodeTestUtils.mockDatanodeBlkPinning}} to a new class of mock test utils, we can make this stack trace go away.
[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
[ https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193439#comment-16193439 ] Xiaoyu Yao commented on HADOOP-14920: - bq. Can we run a full hadoop-hdfs test as well, just to make sure there are no regressions? I recall this area has caused issues before, because pre-commit only runs hadoop-common. How do we do that? I will file a new HDFS ticket and rename the patch to HDFS-xxx to see if that can trigger a hadoop-hdfs precommit run. > KMSClientProvider won't work with KMS delegation token retrieved from > non-Java client. > -- > > Key: HADOOP-14920 > URL: https://issues.apache.org/jira/browse/HADOOP-14920 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-14920.001.patch, HADOOP-14920.002.patch > > > HADOOP-13381 added support for using a KMS delegation token to connect to the > KMS server for key operations. However, the logic that checks whether the UGI > contains a KMS delegation token assumes that the token must carry a service > attribute; otherwise, the KMS delegation token won't be recognized. > For delegation tokens obtained via a non-Java client such as curl (HTTP), the > default DelegationTokenAuthenticationHandler only supports the *renewer* > parameter and assumes the client itself will add the service attribute. As a > result, a Java client using KMSClientProvider cannot use a KMS delegation > token retrieved from a non-Java client, because the token does not contain a > service attribute. > I did some investigation on this and found two solutions: > 1. A similar use case exists for webhdfs, which supports it with a > ["service" > parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token]. > We can do the same by allowing the client to specify a service attribute in > the request URL and including it in the returned token, as webhdfs does.
Even though > this changes DelegationTokenAuthenticationHandler and may affect many other > web components, it seems to be a clean and low-risk solution because the > parameter will be optional. Also, other components get non-Java client > interop support for free if they have a similar use case. > 2. The other way to solve this is to relax the token check in > KMSClientProvider to check only the token kind instead of the service. This > is an easy workaround but seems less optimal to me. > cc: [~xiaochen] for additional input. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
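The two options in the description boil down to which attribute of the UGI's token gets matched. A minimal, dependency-free Java sketch of that trade-off (the `Token` class and the kind/service strings here are simplified stand-ins for illustration, not Hadoop's actual `Token` API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Simplified stand-in for a Hadoop delegation token: it carries a kind and,
// for tokens issued to Java clients, a service attribute. A token fetched
// over plain HTTP (e.g. via curl) typically arrives with an empty service.
public class TokenMatching {
    static final class Token {
        final String kind;    // e.g. "kms-dt" (illustrative)
        final String service; // e.g. "kms://https@host:9600/kms", or "" if unset
        Token(String kind, String service) { this.kind = kind; this.service = service; }
    }

    // Current behavior per the issue: match strictly on the service attribute.
    static Optional<Token> findByService(List<Token> ugiTokens, String service) {
        return ugiTokens.stream().filter(t -> t.service.equals(service)).findFirst();
    }

    // Proposed option 2: relax the check to match on token kind only.
    static Optional<Token> findByKind(List<Token> ugiTokens, String kind) {
        return ugiTokens.stream().filter(t -> t.kind.equals(kind)).findFirst();
    }

    public static void main(String[] args) {
        // A token fetched via curl: correct kind, but no service attribute.
        List<Token> ugi = Arrays.asList(new Token("kms-dt", ""));
        System.out.println("by service: " + findByService(ugi, "kms://https@host:9600/kms").isPresent());
        System.out.println("by kind:    " + findByKind(ugi, "kms-dt").isPresent());
    }
}
```

With a service-less token only the kind-based lookup succeeds, which is why option 2 is the easy workaround while option 1 (stamping a service into the token at issue time) is the cleaner fix.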
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193335#comment-16193335 ] Akira Ajisaka commented on HADOOP-14178: I'm seeing both Mockito 1.8.5 and 1.10.19 have the interface, anyway using shaded jars seems fine. You can use hadoop-client-minicluster instead of hadoop-minicluster. Thanks [~tedyu] and [~bharatviswa]. > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193306#comment-16193306 ] Bharat Viswanadham edited comment on HADOOP-14178 at 10/5/17 5:50 PM: -- [~te...@apache.org] Can Hbase use hadoop shaded jars to avoid these kind of issue? was (Author: bharatviswa): [~te...@apache.org] Can Hbase use hadoop shaded jars to avoid these kind of issue? > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193306#comment-16193306 ] Bharat Viswanadham commented on HADOOP-14178: - [~te...@apache.org] Can Hbase use hadoop shaded jars to avoid these kind of issue? > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193304#comment-16193304 ] Ted Yu commented on HADOOP-14178: - However mockito 1.10.19 doesn't have it. MiniDFSCluster would use 1.10.19 in hbase tests. HBASE-18925 is upgrading to mockito 2.1.0 : {code} -1.10.19 +2.1.0 {code} > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193295#comment-16193295 ] Akira Ajisaka commented on HADOOP-14178: Umm, Mockito 1.8.5 has org.mockito.stubbing.Answer, so I don't think the NoClassDefFoundError is caused by the conflict of the Mockito versions. > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.
[ https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193290#comment-16193290 ] Xiao Chen commented on HADOOP-14920: Thanks [~xyao] for the new patch and the explanations. The fix looks good to me, and it is compatible since the parameter is optional. I was also checking whether this would introduce any security issues, but it looks fine. Note that my comment above was about httpfs, not webhdfs. Though the REST API is largely the same, httpfs is one of the 'services that use DelegationTokenAuthenticationHandler/DelegationTokenAuthenticator like KMS'. Can we run a full hadoop-hdfs test as well, just to make sure there are no regressions? I recall this area has caused issues before, because pre-commit only runs hadoop-common. > KMSClientProvider won't work with KMS delegation token retrieved from > non-Java client. > -- > > Key: HADOOP-14920 > URL: https://issues.apache.org/jira/browse/HADOOP-14920 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-14920.001.patch, HADOOP-14920.002.patch > > > HADOOP-13381 added support for using a KMS delegation token to connect to the > KMS server for key operations. However, the logic that checks whether the UGI > contains a KMS delegation token assumes that the token must carry a service > attribute; otherwise, the KMS delegation token won't be recognized. > For delegation tokens obtained via a non-Java client such as curl (HTTP), the > default DelegationTokenAuthenticationHandler only supports the *renewer* > parameter and assumes the client itself will add the service attribute. As a > result, a Java client using KMSClientProvider cannot use a KMS delegation > token retrieved from a non-Java client, because the token does not contain a > service attribute. > I did some investigation on this and found two solutions: > 1.
A similar use case exists for webhdfs, which supports it with a > ["service" > parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token]. > We can do the same by allowing the client to specify a service attribute in > the request URL and including it in the returned token, as webhdfs does. Even though > this changes DelegationTokenAuthenticationHandler and may affect many other > web components, it seems to be a clean and low-risk solution because the > parameter will be optional. Also, other components get non-Java client > interop support for free if they have a similar use case. > 2. The other way to solve this is to relax the token check in > KMSClientProvider to check only the token kind instead of the service. This > is an easy workaround but seems less optimal to me. > cc: [~xiaochen] for additional input. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193248#comment-16193248 ] Akira Ajisaka commented on HADOOP-14178: Filed HADOOP-14932. > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14932) Move Mockito up to 1.10.19 to be compatible with HBase
Akira Ajisaka created HADOOP-14932: -- Summary: Move Mockito up to 1.10.19 to be compatible with HBase Key: HADOOP-14932 URL: https://issues.apache.org/jira/browse/HADOOP-14932 Project: Hadoop Common Issue Type: Sub-task Components: test Reporter: Akira Ajisaka Priority: Critical HADOOP-14178 will upgrade Mockito up to 2.x, however, the progress is slow and probably this won't be in Hadoop 3.0 GA. Apache HBase community wants Hadoop to upgrade Mockito up to 1.10.19 to compile HBase test code with Hadoop 3.0 successfully if we cannot upgrade Mockito to 2.x. Thanks [~tedyu] for the [report|https://issues.apache.org/jira/browse/HADOOP-14178?focusedCommentId=16193128=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16193128]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14930: Status: In Progress (was: Patch Available) > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional 
commands, e-mail: common-issues-h...@hadoop.apache.org
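The NoSuchMethodError in the description is a link-time symptom: HttpServer2 was compiled against Jetty 9.3, where SessionHandler.getSessionManager() still existed, and runs against Jetty 9.4, which dropped the SessionManager API. A hedged sketch of detecting such a removal with reflection; it probes java.lang.String so it runs without Jetty on the classpath, and the MethodProbe class name is hypothetical:

```java
import java.lang.reflect.Method;

// Probe a class for a method by name before calling it. Linking errors like
// NoSuchMethodError surface only when the missing call site executes; a probe
// like this turns the version mismatch into an explicit boolean check at startup.
public class MethodProbe {
    static boolean hasMethod(String className, String methodName) {
        try {
            for (Method m : Class.forName(className).getMethods()) {
                if (m.getName().equals(methodName)) {
                    return true;
                }
            }
            return false;
        } catch (ClassNotFoundException e) {
            return false; // the class itself is absent from the classpath
        }
    }

    public static void main(String[] args) {
        // With Jetty on the classpath, the interesting probe would be:
        //   hasMethod("org.eclipse.jetty.server.session.SessionHandler", "getSessionManager")
        System.out.println(hasMethod("java.lang.String", "isEmpty"));            // present
        System.out.println(hasMethod("java.lang.String", "getSessionManager")); // absent
    }
}
```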
[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-14930: --- Status: Patch Available (was: Open) > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, 
e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193231#comment-16193231 ] Akira Ajisaka commented on HADOOP-14178: Now I don't think this is going to Hadoop 3.0 GA. bq. If not, how about upgrading to 1.10.19 for hadoop-3 ? Agreed. I'll file a jira. > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14184) Remove service loader config entry for ftp fs
[ https://issues.apache.org/jira/browse/HADOOP-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reassigned HADOOP-14184: --- Assignee: Sen Zhao > Remove service loader config entry for ftp fs > - > > Key: HADOOP-14184 > URL: https://issues.apache.org/jira/browse/HADOOP-14184 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.3 >Reporter: John Zhuge >Assignee: Sen Zhao >Priority: Minor > Labels: newbie > > Per discussion in HADOOP-14132. Remove line > {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config > file > hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem > and add property {{fs.ftp.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
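The change described above has two halves: delete the FTPFileSystem line from the META-INF/services file, and register the scheme explicitly in core-default.xml. A sketch of the new property (the property name and value come from the issue; the description text is illustrative, not taken from any patch):

```xml
<property>
  <name>fs.ftp.impl</name>
  <value>org.apache.hadoop.fs.ftp.FTPFileSystem</value>
  <description>The FileSystem implementation for ftp: URIs.</description>
</property>
```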
[jira] [Commented] (HADOOP-14184) Remove service loader config entry for ftp fs
[ https://issues.apache.org/jira/browse/HADOOP-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193215#comment-16193215 ] John Zhuge commented on HADOOP-14184: - [~Sen Zhao], [~knishant10] Added you both to the contributor list. > Remove service loader config entry for ftp fs > - > > Key: HADOOP-14184 > URL: https://issues.apache.org/jira/browse/HADOOP-14184 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.3 >Reporter: John Zhuge >Priority: Minor > Labels: newbie > > Per discussion in HADOOP-14132. Remove line > {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config > file > hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem > and add property {{fs.ftp.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14184) Remove service loader config entry for ftp fs
[ https://issues.apache.org/jira/browse/HADOOP-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193189#comment-16193189 ] KUMAR NISHANT commented on HADOOP-14184: Hi John, Do I need to get specific access for working on this ticket? > Remove service loader config entry for ftp fs > - > > Key: HADOOP-14184 > URL: https://issues.apache.org/jira/browse/HADOOP-14184 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.3 >Reporter: John Zhuge >Priority: Minor > Labels: newbie > > Per discussion in HADOOP-14132. Remove line > {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config > file > hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem > and add property {{fs.ftp.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Status: Open (was: Patch Available) Still the same memory corruption. Looking... {noformat} *** Error in `/usr/lib/jvm/java-8-oracle/jre/bin/java': free(): corrupted unsorted chunks: 0x7fa2acad3270 *** Aborted (core dumped) {noformat} > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening a non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for three reasons: > * Release buffers, caches, or any other resources when instructed > * Be able to call unbuffer on its wrapped DFSInputStream > * Avoid the UOE described above.
Applications may not handle the UOE very > well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
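The delegation the issue asks for can be sketched without Hadoop on the classpath. The interface and class names below mirror Hadoop's CanUnbuffer idea but are simplified stand-ins, not the real stream classes:

```java
// Simplified model of the stream chain from the issue:
// HdfsDataInputStream -> CryptoInputStream -> DFSInputStream.
// The middle layer must implement CanUnbuffer and forward the call;
// otherwise the outer ((CanUnbuffer) in).unbuffer() cast fails and is
// rethrown as UnsupportedOperationException.
public class UnbufferChain {
    interface CanUnbuffer { void unbuffer(); }

    // Stand-in for DFSInputStream, which already implements CanUnbuffer.
    static class InnerStream implements CanUnbuffer {
        boolean buffered = true;
        @Override public void unbuffer() { buffered = false; }
    }

    // The fix: the wrapping (crypto-like) stream also implements CanUnbuffer,
    // releasing its own resources and delegating to the wrapped stream.
    static class WrappingStream implements CanUnbuffer {
        final InnerStream in;
        boolean ownBufferHeld = true;
        WrappingStream(InnerStream in) { this.in = in; }
        @Override public void unbuffer() {
            ownBufferHeld = false; // release this layer's buffer
            in.unbuffer();         // forward to the wrapped stream
        }
    }

    public static void main(String[] args) {
        WrappingStream s = new WrappingStream(new InnerStream());
        s.unbuffer();
        // Both layers released their buffers:
        System.out.println(s.ownBufferHeld + " " + s.in.buffered); // false false
    }
}
```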
[jira] [Comment Edited] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193128#comment-16193128 ] Ted Yu edited comment on HADOOP-14178 at 10/5/17 4:25 PM: -- Is this going to hadoop-3 beta / GA ? If not, how about upgrading to 1.10.19 for hadoop-3 ? I got the following when starting hadoop-3 mini dfs cluster within hbase unit test: {code} 2017-10-05 08:31:26,525 WARN [main] hbase.HBaseTestingUtility(1077): error starting mini dfs java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:769) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) {code} hbase uses mockito-all 1.10.19 If I downgrade to 1.8.5 (hadoop), hbase code won't compile: {code} [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[146,62] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[207,60] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock {code} was (Author: 
yuzhih...@gmail.com): Is this going to hadoop-3 beta / GA ? If not, how about upgrading to 1.10.19 for hadoop-3 ? I got the following when starting hadoop-3 mini dfs cluster: {code} 2017-10-05 08:31:26,525 WARN [main] hbase.HBaseTestingUtility(1077): error starting mini dfs java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:769) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) {code} hbase uses mockito-all 1.10.19 If I downgrade to 1.8.5 (hadoop), hbase code won't compile: {code} [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[146,62] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[207,60] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock {code} > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran 
>Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193128#comment-16193128 ] Ted Yu commented on HADOOP-14178: - Is this going to hadoop-3 beta / GA ? If not, how about upgrading to 1.10.19 for hadoop-3 ? I got the following when starting hadoop-3 mini dfs cluster: {code} 2017-10-05 08:31:26,525 WARN [main] hbase.HBaseTestingUtility(1077): error starting mini dfs java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:769) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) {code} hbase uses mockito-all 1.10.19 If I downgrade to 1.8.5 (hadoop), hbase code won't compile: {code} [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[146,62] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock [ERROR] /Users/tyu/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientScanner.java:[207,60] cannot find symbol [ERROR] symbol: method getArgumentAt(int,java.lang.Class) [ERROR] location: variable invocation of type org.mockito.invocation.InvocationOnMock {code} > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: 
https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
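The compile error above comes from InvocationOnMock.getArgumentAt(int, Class), which exists in Mockito 1.10.x but not in 1.8.5 (and was replaced by getArgument(int) in 2.x). InvocationOnMock.getArguments() returning Object[] is available across all of these versions, so test code that must compile against any of them can cast from the raw array instead. A dependency-free sketch of that portable fallback (the ArgHelper class name is hypothetical):

```java
// Portable equivalent of invocation.getArgumentAt(index, type) from
// Mockito 1.10.x, or invocation.getArgument(index) from 2.x: take the
// raw argument array (available in 1.8.5 too) and cast the element.
public class ArgHelper {
    static <T> T argumentAt(Object[] invocationArgs, int index, Class<T> type) {
        return type.cast(invocationArgs[index]);
    }

    public static void main(String[] args) {
        // Stand-in for invocation.getArguments() inside an Answer callback.
        Object[] invocationArgs = { "scan-request", 42 };
        String first = argumentAt(invocationArgs, 0, String.class);
        Integer second = argumentAt(invocationArgs, 1, Integer.class);
        System.out.println(first + " / " + second);
    }
}
```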
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193116#comment-16193116 ] Bharat Viswanadham commented on HADOOP-14930: - [~te...@apache.org] Got it, thank you for the info. In Jetty 9.4 there is no SessionManager, understood. I will provide a patch for this. > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193112#comment-16193112 ] Ted Yu commented on HADOOP-14930: - Hadoop uses 9.3.19.v20170502. That is why the NoSuchMethodError was encountered when the Jetty on the classpath is 9.4 (from hbase). > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version
[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193107#comment-16193107 ] Bharat Viswanadham commented on HADOOP-14930: - Hi [~te...@apache.org] Is this caused by Jetty? Looking at the Jetty 9.4 code, I still do not see the method getSessionManager: https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/jetty-server/src/main/java/org/eclipse/jetty/server/session/SessionHandler.java Please let me know if I am missing something here. > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version
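The NoSuchMethodError here is a link-time version mismatch: Hadoop was compiled against Jetty 9.3, where SessionHandler still exposed getSessionManager(), while Jetty 9.4 (pulled in by HBase) folded SessionManager into SessionHandler and dropped that method. The presence of a method on a class can be checked reflectively before it blows up at link time; the sketch below is a generic, standalone illustration (it probes String so it runs without Jetty on the classpath):

```java
import java.lang.reflect.Method;

// Sketch: detect an API difference between library versions at runtime.
// Against Jetty 9.4 one would probe SessionHandler.class for
// "getSessionManager" and get false; the demo uses String so it is
// self-contained. Names here are illustrative, not Hadoop code.
public class MethodProbe {
    public static boolean hasNoArgMethod(Class<?> cls, String name) {
        for (Method m : cls.getMethods()) {
            if (m.getName().equals(name) && m.getParameterCount() == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgMethod(String.class, "length"));            // true
        System.out.println(hasNoArgMethod(String.class, "getSessionManager")); // false
    }
}
```

In practice the fix taken here is the simpler one: compile against the same Jetty line (9.4) rather than branch at runtime.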
[jira] [Moved] (HADOOP-14931) Isolation of native libraries in JARs
[ https://issues.apache.org/jira/browse/HADOOP-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory moved HDFS-12595 to HADOOP-14931: --- Key: HADOOP-14931 (was: HDFS-12595) Project: Hadoop Common (was: Hadoop HDFS) > Isolation of native libraries in JARs > - > > Key: HADOOP-14931 > URL: https://issues.apache.org/jira/browse/HADOOP-14931 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > There is a native library embedded in the Netty JAR. Even with shading, this > can cause conflicts if a user application uses a different version of > Netty. Hadoop does not use the native implementations, so we could just > remove it, or we could relocate it more intelligently.
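The conflict described above comes from native payloads bundled inside a JAR, which shading does not relocate the way it relocates classes. Listing such entries is straightforward with the stdlib jar APIs; the sketch below builds a throwaway JAR so it is self-contained (the entry names are hypothetical, for illustration), but scanNatives can be pointed at a real Netty JAR:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

// Sketch: list native-library entries bundled inside a JAR, the kind of
// payload (e.g. under META-INF/native/ in the Netty JAR) this issue wants
// removed or relocated. Illustrative only, not Hadoop build tooling.
public class NativeEntryScanner {
    public static List<String> scanNatives(File jar) throws Exception {
        List<String> hits = new ArrayList<>();
        try (JarFile jf = new JarFile(jar)) {
            for (Enumeration<JarEntry> e = jf.entries(); e.hasMoreElements(); ) {
                String name = e.nextElement().getName();
                // Anything under META-INF/native/, or a shared-library suffix.
                if (name.startsWith("META-INF/native/")
                        || name.endsWith(".so") || name.endsWith(".dylib")
                        || name.endsWith(".dll")) {
                    hits.add(name);
                }
            }
        }
        return hits;
    }

    public static File buildDemoJar() throws Exception {
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("io/netty/Placeholder.class"));
            out.closeEntry();
            // Hypothetical native entry name, for illustration only.
            out.putNextEntry(new JarEntry("META-INF/native/libdemo_epoll.so"));
            out.closeEntry();
        }
        return jar;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(scanNatives(buildDemoJar()));
    }
}
```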
[jira] [Assigned] (HADOOP-14930) Upgrade Jetty to 9.4 version
[ https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-14930: --- Assignee: Bharat Viswanadham > Upgrade Jetty to 9.4 version > > > Key: HADOOP-14930 > URL: https://issues.apache.org/jira/browse/HADOOP-14930 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Bharat Viswanadham > > Currently 9.3.19.v20170502 is used. > In hbase 2.0+, 9.4.6.v20170531 is used. > When starting mini dfs cluster in hbase unit tests, we get the following: > {code} > java.lang.NoSuchMethodError: > org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; > at > org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) > at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) > at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) > {code} > This issue is to upgrade Jetty to 9.4 version -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional 
commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14919) BZip2 drops records when reading data in splits
[ https://issues.apache.org/jira/browse/HADOOP-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193088#comment-16193088 ] Jason Lowe commented on HADOOP-14919: - Thanks for taking the patch for a test drive! Glad to hear it fixes the problem and doesn't seem to regress anything so far. > BZip2 drops records when reading data in splits > --- > > Key: HADOOP-14919 > URL: https://issues.apache.org/jira/browse/HADOOP-14919 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1 >Reporter: Aki Tanaka >Assignee: Jason Lowe >Priority: Critical > Attachments: 25.bz2, HADOOP-14919.001.patch, > HADOOP-14919-test.patch > > > BZip2 can drop records when reading data in splits. This problem was already > discussed before in HADOOP-11445 and HADOOP-13270. But we still have a > problem in corner case, causing lost data blocks. > > I attached a unit test for this issue. You can reproduce the problem if you > run the unit test. > > First, this issue happens when position of newly created stream is equal to > start of split. Hadoop has some test cases for this (blockEndingInCR.txt.bz2 > file for TestLineRecordReader#testBzip2SplitStartAtBlockMarker, etc). > However, the issue I am reporting does not happen when we run these tests > because this issue happens only when the start of split byte block includes > both block marker and compressed data. 
> > BZip2 block marker - 0x314159265359 > (00110001010101011001001001100101001101011001) > > blockEndingInCR.txt.bz2 (Start of Split - 136504): > {code:java} > $ xxd -l 6 -g 1 -b -seek 136498 > ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/test-classes/blockEndingInCR.txt.bz2 > 0021532: 00110001 0101 01011001 00100110 01010011 01011001 1AY > {code} > > Test bz2 File (Start of Split - 203426) > {code:java} > $ xxd -l 7 -g 1 -b -seek 203419 25.bz2 > 0031a9b: 11100110 00101000 00101011 00100100 11001010 01101011 .(+$.k > 0031aa1: 0010 / > {code} > > Let's say a job splits this test bz2 file into two splits at the start of > split (position 203426). > The former split does not read records which start position 203426 because > BZip2 says the position of these dropped records is 203427. The latter split > does not read the records because BZip2CompressionInputStream read the block > from position 320955. > Due to this behavior, records between 203427 and 320955 are lost. > Also, if we reverted the changes in HADOOP-13270, we will not see this issue. > We will see HADOOP-13270 issue though. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
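The record loss described above can be stated as a simple assignment invariant: a record is read by the former split only if its reported position is inside that split, and by the latter split only if it lies at or after the position where that split resumes reading. The toy model below uses the numbers from this report (split boundary 203426, boundary records reported at 203427, latter split resuming at 320955) to show that the boundary records are claimed by neither side; it is a deliberate simplification, not the actual BZip2CompressionInputStream logic:

```java
// Toy model of split assignment. A record is dropped when the former
// split's [0, splitEnd) range excludes it AND the latter split's resume
// point is past it. Numbers come from the report; this is a
// simplification, not Hadoop code.
public class SplitDropDemo {
    public static boolean claimedByFormerSplit(long reportedPos, long splitEnd) {
        return reportedPos < splitEnd;          // former split: [0, splitEnd)
    }

    public static boolean claimedByLatterSplit(long reportedPos, long latterReadStart) {
        return reportedPos >= latterReadStart;  // latter split resumes here
    }

    public static void main(String[] args) {
        long splitEnd = 203426;        // start of the second split
        long reportedPos = 203427;     // position BZip2 reports for boundary records
        long latterReadStart = 320955; // where the second split actually resumes
        boolean dropped = !claimedByFormerSplit(reportedPos, splitEnd)
                && !claimedByLatterSplit(reportedPos, latterReadStart);
        System.out.println(dropped);   // prints true: [203427, 320955) is lost
    }
}
```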
[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9
[ https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193079#comment-16193079 ] Hadoop QA commented on HADOOP-12760: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 54s{color} | {color:green} root generated 0 new + 1265 unchanged - 6 fixed = 1265 total (was 1271) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 136 unchanged - 0 fixed = 137 total (was 136) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 4s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-12760 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12850964/HADOOP-12760.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ab01073ee1bb 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9288206 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13457/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13457/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results |
[jira] [Commented] (HADOOP-14919) BZip2 drops records when reading data in splits
[ https://issues.apache.org/jira/browse/HADOOP-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193075#comment-16193075 ] Aki Tanaka commented on HADOOP-14919: - Thank you for the patch. I tested the patch and confirmed that the patch can fix the issues we saw in our production environment. As far as I tested, I did not see any regressions or new issues. > BZip2 drops records when reading data in splits > --- > > Key: HADOOP-14919 > URL: https://issues.apache.org/jira/browse/HADOOP-14919 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1 >Reporter: Aki Tanaka >Assignee: Jason Lowe >Priority: Critical > Attachments: 25.bz2, HADOOP-14919.001.patch, > HADOOP-14919-test.patch > > > BZip2 can drop records when reading data in splits. This problem was already > discussed before in HADOOP-11445 and HADOOP-13270. But we still have a > problem in corner case, causing lost data blocks. > > I attached a unit test for this issue. You can reproduce the problem if you > run the unit test. > > First, this issue happens when position of newly created stream is equal to > start of split. Hadoop has some test cases for this (blockEndingInCR.txt.bz2 > file for TestLineRecordReader#testBzip2SplitStartAtBlockMarker, etc). > However, the issue I am reporting does not happen when we run these tests > because this issue happens only when the start of split byte block includes > both block marker and compressed data. 
> > BZip2 block marker - 0x314159265359 > (00110001010101011001001001100101001101011001) > > blockEndingInCR.txt.bz2 (Start of Split - 136504): > {code:java} > $ xxd -l 6 -g 1 -b -seek 136498 > ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/test-classes/blockEndingInCR.txt.bz2 > 0021532: 00110001 0101 01011001 00100110 01010011 01011001 1AY > {code} > > Test bz2 File (Start of Split - 203426) > {code:java} > $ xxd -l 7 -g 1 -b -seek 203419 25.bz2 > 0031a9b: 11100110 00101000 00101011 00100100 11001010 01101011 .(+$.k > 0031aa1: 0010 / > {code} > > Let's say a job splits this test bz2 file into two splits at the start of > split (position 203426). > The former split does not read records which start position 203426 because > BZip2 says the position of these dropped records is 203427. The latter split > does not read the records because BZip2CompressionInputStream read the block > from position 320955. > Due to this behavior, records between 203427 and 320955 are lost. > Also, if we reverted the changes in HADOOP-13270, we will not see this issue. > We will see HADOOP-13270 issue though. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14930) Upgrade Jetty to 9.4 version
Ted Yu created HADOOP-14930: --- Summary: Upgrade Jetty to 9.4 version Key: HADOOP-14930 URL: https://issues.apache.org/jira/browse/HADOOP-14930 Project: Hadoop Common Issue Type: Improvement Reporter: Ted Yu Currently 9.3.19.v20170502 is used. In hbase 2.0+, 9.4.6.v20170531 is used. When starting mini dfs cluster in hbase unit tests, we get the following: {code} java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager; at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548) at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529) at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119) at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921) {code} This issue is to upgrade Jetty to 9.4 version -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14845: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Cherry-picked the final changes to {{TestNativeAzureFileSystemAuthorization}} from trunk to branch-2. The alternative (rollback, build a new patch, reapply) wouldn't have worked. Closing as fixed for 2.9+. Thanks for your contrib! > Azure wasb: getFileStatus not making any auth checks > > > Key: HADOOP-14845 > URL: https://issues.apache.org/jira/browse/HADOOP-14845 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, security >Affects Versions: 2.8.0, 2.7.4 >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Fix For: 2.9.0, 3.1.0 > > Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, > HADOOP-14845.003.patch, HADOOP-14845.004.patch, > HADOOP-14845-branch-2-001.patch.txt, HADOOP-14845-branch-2-002.patch, > HADOOP-14845-branch-2-003.patch, HADOOP-14845-branch-2-005.patch > > > The HDFS spec requires only traverse checks for any file accessed via > getFileStatus ... and since WASB does not support traverse checks, removing > this call effectively removed all protections for the getFileStatus call. The > reasoning at that time was that doing a performAuthCheck was the wrong thing > to do, since it was going against the spec, and that the correct fix to the > getFileStatus issue was to implement traverse checks rather than go against > the spec by calling performAuthCheck. The side-effects of such a change were > not fully clear at that time, but the thinking was that it was safer to > remain true to the spec, as far as possible. > The reasoning remains correct even today. But in view of the security hole > introduced by this change (that anyone can load up any other user's data in > hive), and keeping in mind that WASB does not intend to implement traverse > checks, we propose a compromise. > We propose (re)introducing a read-access check to getFileStatus(), that would > check the existing ancestor for read-access whenever invoked. Although not > perfect (in that it is a departure from the spec), we believe that it is a > good compromise between having no checks at all; and implementing full-blown > traverse checks. > For scenarios that deal with intermediate folders like mkdirs, the call would > check for read access against an existing ancestor (when invoked from shell) > for intermediate non-existent folders – {{ mkdirs /foo/bar, where only "/" > exists, would result in read-checks against "/" for "/","/foo" and "/foo/bar" > }}. This can be thought of, as being a close-enough substitute for the > traverse checks that hdfs does. > For other scenarios that don't deal with non-existent intermediate folders – > like read, delete etc, the check will happen against the parent. Once again, > we can think of the read-check against the parent as a substitute for the > traverse check, which can be customized for various users with ranger > policies.
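The proposed compromise boils down to: walk up from the requested path to the nearest existing ancestor and require read access on it. The sketch below models that rule with a plain map from existing paths to the users allowed to read them; it is a stand-in for WASB metadata, not the actual NativeAzureFileSystem code, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the proposed getFileStatus check: find the nearest existing
// ancestor of the requested path and require read access on it. The
// "filesystem" is a map from existing paths to users with read access;
// all names are illustrative, not WASB code.
public class AncestorReadCheck {
    private final Map<String, Set<String>> readAcl = new HashMap<>();

    public void addDir(String path, Set<String> readers) {
        readAcl.put(path, readers);
    }

    /** Nearest existing ancestor of path (path itself if it exists). */
    public String existingAncestor(String path) {
        String p = path;
        while (!readAcl.containsKey(p) && !p.equals("/")) {
            int slash = p.lastIndexOf('/');
            p = (slash <= 0) ? "/" : p.substring(0, slash);
        }
        return p;
    }

    public boolean canGetFileStatus(String user, String path) {
        Set<String> readers = readAcl.get(existingAncestor(path));
        return readers != null && readers.contains(user);
    }

    public static void main(String[] args) {
        AncestorReadCheck fs = new AncestorReadCheck();
        fs.addDir("/", Set.of("alice"));
        // mkdirs /foo/bar with only "/" existing: the check falls back to "/",
        // mirroring the read-checks against "/" described in the issue.
        System.out.println(fs.canGetFileStatus("alice", "/foo/bar")); // true
        System.out.println(fs.canGetFileStatus("bob", "/foo/bar"));   // false
    }
}
```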
[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192976#comment-16192976 ] Hadoop QA commented on HADOOP-14845: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac 
{color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:eaf5c66 | | JIRA Issue | HADOOP-14845 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890526/HADOOP-14845-branch-2-005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 10523454f196 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 7fd4a99 | | Default Java | 1.7.0_151 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13458/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13458/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Azure wasb: getFileStatus not making any auth checks > > > Key: HADOOP-14845 > URL: https://issues.apache.org/jira/browse/HADOOP-14845 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, security >Affects Versions: 2.8.0, 2.7.4 >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Fix For: 2.9.0 > > Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, > HADOOP-14845.003.patch,
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192965#comment-16192965 ] Rushabh S Shah commented on HADOOP-14521: - thanks [~xiaochen] for the latest patch. +1 non-binding. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HADOOP-14521.09.patch, HADOOP-14521.11.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
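The missing retry logic this issue describes is the standard bounded-attempts pattern. The real patch wires the KMS client into Hadoop's IPC retry machinery; the standalone sketch below (names and the linear-backoff policy are illustrative, not the committed implementation) just retries an operation on IOException so that a briefly unreachable KMS does not immediately fail the caller:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch: bounded retries with linear backoff around an operation that
// may fail transiently (e.g. a KMS restart during an upgrade). This is a
// simplified stand-in for Hadoop's RetryPolicy machinery.
public class RetryingCaller {
    public static <T> T callWithRetries(Callable<T> op, int maxAttempts,
                                        long backoffMillis) throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (IOException e) {
                last = e;                                  // retriable failure
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMillis * attempt); // linear backoff
                }
            }
        }
        throw last;                                        // attempts exhausted
    }
}
```

A caller would wrap each KMS request, e.g. callWithRetries(() -> client.decryptEncryptedKey(edek), 5, 100L), so transient connection failures are absorbed instead of surfacing as task failures.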
[jira] [Updated] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14845: Status: Patch Available (was: Open) > Azure wasb: getFileStatus not making any auth checks > > > Key: HADOOP-14845 > URL: https://issues.apache.org/jira/browse/HADOOP-14845 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, security >Affects Versions: 2.7.4, 2.8.0 >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Fix For: 2.9.0 > > Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, > HADOOP-14845.003.patch, HADOOP-14845.004.patch, > HADOOP-14845-branch-2-001.patch.txt, HADOOP-14845-branch-2-002.patch, > HADOOP-14845-branch-2-003.patch, HADOOP-14845-branch-2-005.patch > > > The HDFS spec requires only traverse checks for any file accessed via > getFileStatus ... and since WASB does not support traverse checks, removing > this call effectively removed all protections for the getFileStatus call. The > reasoning at that time was that doing a performAuthCheck was the wrong thing > to do, since it was going against the spec, and that the correct fix to the > getFileStatus issue was to implement traverse checks rather than go against > the spec by calling performAuthCheck. The side-effects of such a change were > not fully clear at that time, but the thinking was that it was safer to > remain true to the spec, as far as possible. > The reasoning remains correct even today. But in view of the security hole > introduced by this change (that anyone can load up any other user's data in > hive), and keeping in mind that WASB does not intend to implement traverse > checks, we propose a compromise. > We propose (re)introducing a read-access check to getFileStatus(), that would > check the existing ancestor for read-access whenever invoked. 
Although not > perfect (in that it is a departure from the spec), we believe that it is a > good compromise between having no checks at all and implementing full-blown > traverse checks. > For scenarios that deal with intermediate folders like mkdirs, the call would > check for read access against an existing ancestor (when invoked from shell) > for intermediate non-existent folders – {{ mkdirs /foo/bar, where only "/" > exists, would result in read-checks against "/" for "/","/foo" and "/foo/bar" > }}. This can be thought of as being a close-enough substitute for the > traverse checks that hdfs does. > For other scenarios that don't deal with non-existent intermediate folders – > like read, delete etc, the check will happen against the parent. Once again, > we can think of the read-check against the parent as a substitute for the > traverse check, which can be customized for various users with ranger > policies.
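The proposed "existing ancestor" check above can be sketched in a simplified, hypothetical form: walk up from the requested path to the nearest existing ancestor, then require read access on it. The real WASB code works with Hadoop's FileSystem, Path, and authorizer types; the predicates here merely stand in for those calls.

```java
import java.nio.file.Path;
import java.util.function.Predicate;

/**
 * Hypothetical sketch of the HADOOP-14845 proposal: a read-access check
 * against the nearest existing ancestor of the requested path.
 * Not the actual NativeAzureFileSystem code.
 */
public class AncestorReadCheckSketch {

  /** Nearest existing ancestor of 'path' (the path itself, if it exists). */
  static Path existingAncestor(Path path, Predicate<Path> exists) {
    Path p = path;
    while (p.getParent() != null && !exists.test(p)) {
      p = p.getParent();          // walk up until something exists
    }
    return p;                     // the root is assumed to always exist
  }

  /** The proposed getFileStatus guard: read access on the existing ancestor. */
  static boolean isAccessAllowed(Path path, Predicate<Path> exists,
                                 Predicate<Path> canRead) {
    return canRead.test(existingAncestor(path, exists));
  }
}
```

For the {{mkdirs /foo/bar}} example, where only "/" exists, this resolves the existing ancestor to "/" and performs the read check there, mirroring the close-enough substitute for HDFS traverse checks that the description argues for.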
[jira] [Updated] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14845: Attachment: HADOOP-14845-branch-2-005.patch Patch branch-2-005; the changes to trunk's TestNativeAzureFileSystemAuthorization merged into branch-2 and, where needed, fixed up for Java 7. This is just from a diff of branch-2 and trunk, picking in the relevant changes; as such it is the final bit of the branch-2 patch, addressing the test conflict with HADOOP-14768. Tested {code} Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0 Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.182 sec - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0 Running org.apache.hadoop.fs.azure.TestNativeAzureFSAuthorizationCaching Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.257 sec - in org.apache.hadoop.fs.azure.TestNativeAzureFSAuthorizationCaching Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0 Running org.apache.hadoop.fs.azure.TestNativeAzureFSAuthWithBlobSpecificKeys Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.394 sec - in org.apache.hadoop.fs.azure.TestNativeAzureFSAuthWithBlobSpecificKeys {code} If yetus is happy, I'm going to pull this in as the final bit of the patch
[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192941#comment-16192941 ] Hudson commented on HADOOP-14845: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13032 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13032/]) HADOOP-14845. Azure wasb: getFileStatus not making any auth check. (stevel: rev 9288206cb3c1a39044a8e106436987185ef43ddf) * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestWasbRemoteCallHelper.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/metrics/ITestAzureFileSystemInstrumentation.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
[jira] [Updated] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14845: Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks
[ https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192917#comment-16192917 ] Steve Loughran commented on HADOOP-14845: - OK, testing on trunk and all is well; I was caught out by the fact that these tests are skipped in branch-2 unless you enable auth in your test azure-auth-keys.xml file. It seems to me that the tests could actually turn on auth rather than skip; they just need to make sure that a new FS instance is created just for this test suite. Which means: the branch-2 tests are broken, as the merge was incomplete. Anyway, I've applied patch 004 to trunk and rerun {{ITestNativeAzureFSAuth*}} as well as {{TestNativeAzureFileSystemAuthorization}}: all is well. +1 for trunk, committing as is, and about to build a patch for branch-2 which fixes the test runs
[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined
[ https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192916#comment-16192916 ] Nandor Kollar commented on HADOOP-14459: Thanks [~templedf] for reviewing and committing this patch! > SerializationFactory shouldn't throw a NullPointerException if the > serializations list is not defined > - > > Key: HADOOP-14459 > URL: https://issues.apache.org/jira/browse/HADOOP-14459 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Nandor Kollar >Assignee: Nandor Kollar >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, > HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, > HADOOP-14459.patch > > > The SerializationFactory throws an NPE if > CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.
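The bug class here is a config lookup that returns null when a key is absent and is then dereferenced. An illustrative sketch of the defensive pattern involved (not the actual HADOOP-14459 patch, which operates on Hadoop's Configuration object) is to treat a missing value as an empty list:

```java
/**
 * Illustrative sketch, assuming a raw String config value: treat a
 * missing or blank "io.serializations" setting as an empty list rather
 * than dereferencing null. Not the actual HADOOP-14459 patch.
 */
public class SerializationConfigSketch {

  /** Split a comma-separated config value, tolerating null or blank input. */
  static String[] serializations(String configValue) {
    if (configValue == null || configValue.trim().isEmpty()) {
      return new String[0];       // nothing configured: empty list, no NPE
    }
    return configValue.trim().split("\\s*,\\s*");
  }
}
```

A caller iterating the result then simply finds no serializations, which can be reported with a clear error message instead of a NullPointerException.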
[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192604#comment-16192604 ] Hadoop QA commented on HADOOP-14872: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}197m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.security.TestKDiag | | Timed out junit tests | org.apache.hadoop.crypto.TestCryptoStreamsNormal | | | org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec | | | org.apache.hadoop.crypto.TestCryptoStreams | | | org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890478/HADOOP-14872.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 222697aa287e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
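For context on the HADOOP-14872 change the report above is testing: Hadoop's CanUnbuffer interface lets a caller ask a stream to release buffered resources between reads. A minimal, hypothetical sketch of that pattern — not the actual CryptoInputStream patch — is:

```java
/**
 * Hypothetical sketch of the "unbuffer" pattern behind HADOOP-14872:
 * release buffers held between reads and lazily reallocate on the next
 * read. Hadoop's real hook for this is the CanUnbuffer interface; this
 * is not the actual CryptoInputStream change.
 */
public class UnbufferSketch {
  private byte[] buffer;          // decryption buffer; may be released

  /** Lazily (re)allocate the buffer before a read. */
  byte[] ensureBuffer() {
    if (buffer == null) {
      buffer = new byte[8192];
    }
    return buffer;
  }

  /** Drop memory held between reads, as CanUnbuffer.unbuffer() does. */
  void unbuffer() {
    buffer = null;
  }

  boolean isBuffered() {
    return buffer != null;
  }
}
```

The payoff is for long-lived clients (e.g. HBase region servers) that hold many open encrypted streams: calling unbuffer between bursts of reads keeps idle streams from pinning buffer memory.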