[jira] [Comment Edited] (HADOOP-10150) Hadoop cryptographic file system
[ https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966140#comment-13966140 ] Uma Maheswara Rao G edited comment on HADOOP-10150 at 4/11/14 6:03 AM: --- Todd, thanks for your comments. {quote}A few questions here... First, let me confirm my understanding of the key structure and storage: Client master key: this lives on the Key Management Server, and might be different from application to application. {quote} Yes. {quote}In many cases there may be just one per cluster, though in a multitenant cluster, perhaps we could have one per tenant.{quote} It depends on the KeyProvider implementation; these kinds of details can be encapsulated in the KeyProvider implementation, which could be pluggable in CFS. Thus, users can apply their own strategy and deploy one master key or multiple master keys, by application, by user group, etc. {quote}Data key: this is set per encrypted directory. This key is stored in the directory xattr on the NN, but encrypted by the client master key (which the NN doesn't know).{quote} Yes. {quote}So, when a client wants to read a file, the following is the process: 1) Notices that the file is in an encrypted directory. Fetches the encrypted data key from the NN's xattr on the directory. 2) Somehow associates this encrypted data key with the master key that was used to encrypt it (perhaps it's tagged with some identifier). Fetches the appropriate master key from the key store. 2a) The keystore somehow authenticates and authorizes the client's access to this key 3) The client decrypts the data key using the master key, and is now able to set up a decrypting stream for the file itself. (I've ignored the IV here, but assume it's also stored in an xattr) {quote} Yes. {quote}In terms of attack vectors: let's say that the NN disk is stolen. The thief now has access to a bunch of keys, but they're all encrypted by various master keys. So we're OK.{quote} Yes. 
{quote}let's say that a client is malicious. It can get whichever master keys it has access to from the KMS. If we only have one master key per cluster, then the combination of one malicious client plus stealing the fsimage will give up all the keys{quote} When a client gets access to both the master key and the fsimage, there is nothing we can do to protect that data. The separation of the data encryption key and the master key is for master key rotation, so that one does not need to decrypt every data file and re-encrypt it with the new key. {quote}let's say that a client has escalated to root access on one of the slave nodes in the cluster, or otherwise has malicious access to a NodeManager process. By looking at a running MR task, it could steal whatever credentials the task is using to access the KMS, and/or dump the memory of the client process in order to give up the master key above.{quote} When a client has root access, all information can be dumped from any process, right? I remember Nicholas asked a similar question on HDFS-6134. If a client has escalated to root access on slave nodes, how can we assume the namenode, standby namenode/secondary namenode are secure in the same cluster? On the other hand, as long as data keys remain in encrypted form in the process memory of the NameNode and DataNodes, and those daemons don't have access to the wrapping keys, then there is no attack vector there. {quote}How does the MR task in this context get the credentials to fetch keys from the KMS? If the KMS accepts the same authentication tokens as the NameNode, then is there any reason that this is more secure than having the NameNode supply the keys? Or is it just that decoupling the NameNode and the key server allows this approach to work for non-HDFS filesystems, at the expense of an additional daemon running a key distribution service?{quote} It is a good question. Securely distributing secrets among the cluster nodes, as you mentioned, will always be a hard problem to solve. 
Without adequate hardware support, it could be a weak point during operations like unwrapping keys. We want to leave options to the KeyProvider implementation to decouple the key protection mechanism from the data encryption mechanism, and to make the two work on top of any filesystem. It is possible to have a KeyProvider implementation which uses the NN as the KMS, as we already discussed, and this leaves room for other parties to plug in their own solutions. was (Author: hitliuyi): Todd, thanks for your comments.
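The wrap/unwrap flow agreed on above — a per-directory data key stored in the NN xattr, encrypted under a master key that stays in the KMS — can be sketched with the JDK's standard AES key-wrap cipher. This is an illustrative sketch only; the class and variable names are not CFS or KeyProvider APIs.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class EnvelopeSketch {
    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey masterKey = gen.generateKey(); // lives only in the KMS
        SecretKey dataKey = gen.generateKey();   // one per encrypted directory

        // Wrap: the wrapped bytes are what the NN can safely store in the
        // directory xattr -- useless without the master key.
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, masterKey);
        byte[] wrappedDataKey = wrap.wrap(dataKey);

        // Unwrap: what an authorized client does after fetching the
        // master key from the KMS, before opening a decrypting stream.
        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, masterKey);
        SecretKey recovered =
            (SecretKey) unwrap.unwrap(wrappedDataKey, "AES", Cipher.SECRET_KEY);

        if (!Arrays.equals(dataKey.getEncoded(), recovered.getEncoded())) {
            throw new AssertionError("unwrap did not recover the data key");
        }
        System.out.println("data key recovered after unwrap");
    }
}
```

Rotating the master key under this scheme only requires re-wrapping each small data key, not re-encrypting the file data itself.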
[jira] [Updated] (HADOOP-10229) DaemonFactory should not extend Daemon
[ https://issues.apache.org/jira/browse/HADOOP-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Avinash Kujur updated HADOOP-10229: --- Attachment: HADOOP-10229.patch made changes DaemonFactory should not extend Daemon -- Key: HADOOP-10229 URL: https://issues.apache.org/jira/browse/HADOOP-10229 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.2.0 Reporter: Hiroshi Ikeda Priority: Minor Attachments: HADOOP-10229.patch Original Estimate: 5m Remaining Estimate: 5m The static nested class org.apache.hadoop.util.Daemon.DaemonFactory unnecessarily extends its nesting class Daemon, though a thread factory is not required to be a thread. -- This message was sent by Atlassian JIRA (v6.2#6252)
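The shape of the fix described in the issue — a factory that implements {{ThreadFactory}} without itself being a {{Thread}} — might look like the following standalone sketch (illustrative names, not the attached patch):

```java
import java.util.concurrent.ThreadFactory;

// A thread factory only needs to implement ThreadFactory; there is no
// reason for the factory itself to extend Thread (or a Thread subclass
// such as Daemon).
public class DaemonThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setDaemon(true); // the produced threads are daemons; the factory is not
        return t;
    }
}
```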
[jira] [Commented] (HADOOP-10488) TestKeyProviderFactory fails randomly
[ https://issues.apache.org/jira/browse/HADOOP-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966407#comment-13966407 ] Hudson commented on HADOOP-10488: - SUCCESS: Integrated in Hadoop-Yarn-trunk #537 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/537/]) HADOOP-10488. TestKeyProviderFactory fails randomly. (tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586382) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java TestKeyProviderFactory fails randomly - Key: HADOOP-10488 URL: https://issues.apache.org/jira/browse/HADOOP-10488 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10488.patch This test fails randomly depending on the order of execution of the test methods; the reason is that the keystore used by the different test methods is the same. We should either delete it before/after each test, or use a different one for each run. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes
[ https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966408#comment-13966408 ] Hudson commented on HADOOP-10350: - SUCCESS: Integrated in Hadoop-Yarn-trunk #537 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/537/]) HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586433) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Reverse merged revision(s) 1586425 from hadoop/common/trunk: HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586432) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586425) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt BUILDING.txt should mention openssl dependency required for hadoop-pipes Key: HADOOP-10350 URL: https://issues.apache.org/jira/browse/HADOOP-10350 Project: Hadoop Common Issue Type: Bug Reporter: Vinayakumar B Assignee: Vinayakumar B Fix For: 2.5.0 Attachments: HADOOP-10350.patch, HADOOP-10350.patch BUILDING.txt should mention openssl dependency required for hadoop-pipes -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10490) TestMapFile and TestBloomMapFile leak file descriptors.
[ https://issues.apache.org/jira/browse/HADOOP-10490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966409#comment-13966409 ] Hudson commented on HADOOP-10490: - SUCCESS: Integrated in Hadoop-Yarn-trunk #537 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/537/]) HADOOP-10490. TestMapFile and TestBloomMapFile leak file descriptors. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586570) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestBloomMapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java TestMapFile and TestBloomMapFile leak file descriptors. --- Key: HADOOP-10490 URL: https://issues.apache.org/jira/browse/HADOOP-10490 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor Fix For: 3.0.0, 2.4.1 Attachments: HADOOP-10490.1.patch, HADOOP-10490.2.patch Multiple tests in {{TestMapFile}} and {{TestBloomMapFile}} open files but don't close them. On Windows, the leaked file descriptors cause subsequent tests to fail, because file locks are still held while trying to delete the test data directory. -- This message was sent by Atlassian JIRA (v6.2#6252)
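The leak pattern described here — tests opening files and never closing them — is typically fixed with try-with-resources, which releases the descriptor (and, on Windows, the file lock) even when the body throws. A generic sketch, not the actual HADOOP-10490 patch:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseOnExit {
    // try-with-resources guarantees close() runs even if the body throws,
    // so no descriptor (or Windows file lock) outlives the test method and
    // the test data directory can always be deleted afterwards.
    public static void writeAndClose(Path file, String text) throws IOException {
        try (Writer w = Files.newBufferedWriter(file)) {
            w.write(text);
        }
    }
}
```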
[jira] [Created] (HADOOP-10491) Add Collection of Labels to KeyProvider API
Larry McCay created HADOOP-10491: Summary: Add Collection of Labels to KeyProvider API Key: HADOOP-10491 URL: https://issues.apache.org/jira/browse/HADOOP-10491 Project: Hadoop Common Issue Type: Improvement Components: security Reporter: Larry McCay Assignee: Larry McCay A set of arbitrary labels would provide opportunity for interesting access policy decisions based on things like classification, etc. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10491) Add Collection of Labels to KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966459#comment-13966459 ] Larry McCay commented on HADOOP-10491: -- A property map that holds the set of labels under the key "labels" may be a good approach for extensibility. We could then just add to the property map for future metadata enhancements. Add Collection of Labels to KeyProvider API --- Key: HADOOP-10491 URL: https://issues.apache.org/jira/browse/HADOOP-10491 Project: Hadoop Common Issue Type: Improvement Components: security Reporter: Larry McCay Assignee: Larry McCay A set of arbitrary labels would provide opportunity for interesting access policy decisions based on things like classification, etc. -- This message was sent by Atlassian JIRA (v6.2#6252)
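The property-map idea — key metadata carrying a generic attributes map, with labels stored under one well-known key — could be shaped like this hypothetical sketch (not the eventual KeyProvider API):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyMetadataSketch {
    // Labels live under one well-known attribute key, so future metadata
    // (classification, rotation policy, ...) can be added to the same map
    // without changing the API surface.
    static final String LABELS_KEY = "labels";

    private final Map<String, String> attributes = new HashMap<>();

    public void setLabels(List<String> labels) {
        attributes.put(LABELS_KEY, String.join(",", labels));
    }

    public List<String> getLabels() {
        String v = attributes.getOrDefault(LABELS_KEY, "");
        return v.isEmpty() ? Arrays.<String>asList() : Arrays.asList(v.split(","));
    }
}
```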
[jira] [Commented] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors
[ https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966460#comment-13966460 ] Eric Payne commented on HADOOP-8087: [~daryn] and [~cmccabe] : I came across this issue as part of a 0.23 backlog review. Will this issue be resolved in 0.23 or 2.0? If not, can we remove the 0.23.3 and 2.0.0-alpha targets and leave this JIRA targeted for 3.0.0? Paths that start with a double slash cause No filesystem for scheme: null errors -- Key: HADOOP-8087 URL: https://issues.apache.org/jira/browse/HADOOP-8087 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Daryn Sharp Assignee: Colin Patrick McCabe Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way. While it should translate to the directory {{$fs.default.name/dir/path}}, it instead discards the {{//dir}} and returns {{$fs.default.name/path}}. The problem is that {{Path}} tries to parse an authority even when a scheme is not present. -- This message was sent by Atlassian JIRA (v6.2#6252)
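The misparse described in the issue is visible with {{java.net.URI}} itself: in a URI reference without a scheme, a leading {{//}} makes the next component an authority rather than part of the path, which is how {{//dir}} gets swallowed. A small demonstration:

```java
import java.net.URI;

public class DoubleSlashDemo {
    public static void main(String[] args) {
        URI u = URI.create("//dir/path");
        // Per RFC 3986, "//dir/path" is a network-path reference:
        // "dir" becomes the authority and only "/path" remains as the path.
        System.out.println("authority = " + u.getAuthority());
        System.out.println("path      = " + u.getPath());
    }
}
```

This matches the reported behavior: the "directory" is consumed as an authority, so the remaining path no longer contains it.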
[jira] [Created] (HADOOP-10492) Help Commands needs change after deprecation
Raja Nagendra Kumar created HADOOP-10492: Summary: Help Commands needs change after deprecation Key: HADOOP-10492 URL: https://issues.apache.org/jira/browse/HADOOP-10492 Project: Hadoop Common Issue Type: Bug Reporter: Raja Nagendra Kumar As hadoop fs is deprecated, the help should show usage with hdfs dfs; e.g. in the following command it still refers to Usage: hadoop fs [generic options] D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009>hdfs dfs Usage: hadoop fs [generic options] [-appendToFile localsrc ... dst] [-cat [-ignoreCrc] src ...] [-checksum src ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] localsrc ... dst] [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst] [-count [-q] path ...] [-cp [-f] [-p] src ... dst] [-createSnapshot snapshotDir [snapshotName]] [-deleteSnapshot snapshotDir snapshotName] [-df [-h] [path ...]] [-du [-s] [-h] path ...] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors
[ https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966466#comment-13966466 ] Hadoop QA commented on HADOOP-8087: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12592873/HADOOP-8087.002.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3786//console This message is automatically generated. Paths that start with a double slash cause No filesystem for scheme: null errors -- Key: HADOOP-8087 URL: https://issues.apache.org/jira/browse/HADOOP-8087 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Daryn Sharp Assignee: Colin Patrick McCabe Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way. While it should translate to the directory {{$fs.default.name/dir/path}}, it instead discards the {{//dir}} and returns {{$fs.default.name/path}}. The problem is that {{Path}} tries to parse an authority even when a scheme is not present. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8746) TestNativeIO fails when run with jdk7
[ https://issues.apache.org/jira/browse/HADOOP-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966471#comment-13966471 ] Thomas Graves commented on HADOOP-8746: --- If you are seeing the failure on 2.5 go ahead and move it. Otherwise go ahead and close it. TestNativeIO fails when run with jdk7 - Key: HADOOP-8746 URL: https://issues.apache.org/jira/browse/HADOOP-8746 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Thomas Graves Assignee: Thomas Graves Labels: java7 TestNativeIO fails when run with jdk7. Test set: org.apache.hadoop.io.nativeio.TestNativeIO --- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec FAILURE! testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO) Time elapsed: 0.166 sec ERROR! EINVAL: Invalid argument at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native Method) at org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes
[ https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966497#comment-13966497 ] Hudson commented on HADOOP-10350: - FAILURE: Integrated in Hadoop-Hdfs-trunk #1729 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1729/]) HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586433) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Reverse merged revision(s) 1586425 from hadoop/common/trunk: HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586432) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586425) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt BUILDING.txt should mention openssl dependency required for hadoop-pipes Key: HADOOP-10350 URL: https://issues.apache.org/jira/browse/HADOOP-10350 Project: Hadoop Common Issue Type: Bug Reporter: Vinayakumar B Assignee: Vinayakumar B Fix For: 2.5.0 Attachments: HADOOP-10350.patch, HADOOP-10350.patch BUILDING.txt should mention openssl dependency required for hadoop-pipes -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10490) TestMapFile and TestBloomMapFile leak file descriptors.
[ https://issues.apache.org/jira/browse/HADOOP-10490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966498#comment-13966498 ] Hudson commented on HADOOP-10490: - FAILURE: Integrated in Hadoop-Hdfs-trunk #1729 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1729/]) HADOOP-10490. TestMapFile and TestBloomMapFile leak file descriptors. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586570) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestBloomMapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java TestMapFile and TestBloomMapFile leak file descriptors. --- Key: HADOOP-10490 URL: https://issues.apache.org/jira/browse/HADOOP-10490 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor Fix For: 3.0.0, 2.4.1 Attachments: HADOOP-10490.1.patch, HADOOP-10490.2.patch Multiple tests in {{TestMapFile}} and {{TestBloomMapFile}} open files but don't close them. On Windows, the leaked file descriptors cause subsequent tests to fail, because file locks are still held while trying to delete the test data directory. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10488) TestKeyProviderFactory fails randomly
[ https://issues.apache.org/jira/browse/HADOOP-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966496#comment-13966496 ] Hudson commented on HADOOP-10488: - FAILURE: Integrated in Hadoop-Hdfs-trunk #1729 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1729/]) HADOOP-10488. TestKeyProviderFactory fails randomly. (tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586382) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java TestKeyProviderFactory fails randomly - Key: HADOOP-10488 URL: https://issues.apache.org/jira/browse/HADOOP-10488 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10488.patch This test fails randomly depending on the order of execution of the test methods; the reason is that the keystore used by the different test methods is the same. We should either delete it before/after each test, or use a different one for each run. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8746) TestNativeIO fails when run with jdk7
[ https://issues.apache.org/jira/browse/HADOOP-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966523#comment-13966523 ] Mit Desai commented on HADOOP-8746: --- I haven't seen this failing in 2.X. Closing it for now. TestNativeIO fails when run with jdk7 - Key: HADOOP-8746 URL: https://issues.apache.org/jira/browse/HADOOP-8746 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Thomas Graves Assignee: Thomas Graves Labels: java7 TestNativeIO fails when run with jdk7. Test set: org.apache.hadoop.io.nativeio.TestNativeIO --- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec FAILURE! testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO) Time elapsed: 0.166 sec ERROR! EINVAL: Invalid argument at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native Method) at org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-8746) TestNativeIO fails when run with jdk7
[ https://issues.apache.org/jira/browse/HADOOP-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mit Desai resolved HADOOP-8746. --- Resolution: Not a Problem Target Version/s: 2.0.2-alpha, 0.23.3, 3.0.0 (was: 0.23.3, 3.0.0, 2.0.2-alpha) TestNativeIO fails when run with jdk7 - Key: HADOOP-8746 URL: https://issues.apache.org/jira/browse/HADOOP-8746 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Thomas Graves Assignee: Thomas Graves Labels: java7 TestNativeIo fails when run with jdk7. Test set: org.apache.hadoop.io.nativeio.TestNativeIO --- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec FAILURE! testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO) Time elapsed: 0.166 sec ERROR! EINVAL: Invalid argument at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native Method) at org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors
[ https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966530#comment-13966530 ] Daryn Sharp commented on HADOOP-8087: - It's a simple change so I'd like it to remain targeted to at least 2.x. Paths that start with a double slash cause No filesystem for scheme: null errors -- Key: HADOOP-8087 URL: https://issues.apache.org/jira/browse/HADOOP-8087 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Daryn Sharp Assignee: Colin Patrick McCabe Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way. While it should translate to the directory {{$fs.default.name/dir/path}}, it instead discards the {{//dir}} and returns {{$fs.default.name/path}}. The problem is that {{Path}} tries to parse an authority even when a scheme is not present. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10492) Help Commands needs change after deprecation
[ https://issues.apache.org/jira/browse/HADOOP-10492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966587#comment-13966587 ] Daryn Sharp commented on HADOOP-10492: -- bq. As hadoop fs is deprecated, the help should show usage with HDFS I thought hdfs dfs is being deprecated, not hadoop fs? Both call FsShell, which is in hadoop-common because it works with any filesystem implementation. Requiring users to invoke hdfs dfs doesn't make sense for non-hdfs filesystems. Help Commands needs change after deprecation Key: HADOOP-10492 URL: https://issues.apache.org/jira/browse/HADOOP-10492 Project: Hadoop Common Issue Type: Bug Reporter: Raja Nagendra Kumar As hadoop fs is deprecated, the help should show usage with hdfs dfs; e.g. in the following command it still refers to Usage: hadoop fs [generic options] D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009>hdfs dfs Usage: hadoop fs [generic options] [-appendToFile localsrc ... dst] [-cat [-ignoreCrc] src ...] [-checksum src ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] localsrc ... dst] [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst] [-count [-q] path ...] [-cp [-f] [-p] src ... dst] [-createSnapshot snapshotDir [snapshotName]] [-deleteSnapshot snapshotDir snapshotName] [-df [-h] [path ...]] [-du [-s] [-h] path ...] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10322) Add ability to read principal names from a keytab
[ https://issues.apache.org/jira/browse/HADOOP-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966600#comment-13966600 ] Daryn Sharp commented on HADOOP-10322: -- Looks good, although my opinion is only {{getMatchingPrincipalNames}} should be exposed as public. Add ability to read principal names from a keytab - Key: HADOOP-10322 URL: https://issues.apache.org/jira/browse/HADOOP-10322 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 2.2.0 Reporter: Benoy Antony Assignee: Benoy Antony Attachments: HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, javadoc-warnings.txt It will be useful to have an ability to enumerate the principals stored in a keytab. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10488) TestKeyProviderFactory fails randomly
[ https://issues.apache.org/jira/browse/HADOOP-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966609#comment-13966609 ] Hudson commented on HADOOP-10488: - SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1754 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1754/]) HADOOP-10488. TestKeyProviderFactory fails randomly. (tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586382) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java TestKeyProviderFactory fails randomly - Key: HADOOP-10488 URL: https://issues.apache.org/jira/browse/HADOOP-10488 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10488.patch This test fails randomly depending on the order of execution of the test methods; the reason is that the keystore used by the different test methods is the same. We should either delete it before/after each test, or use a different one for each run. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10490) TestMapFile and TestBloomMapFile leak file descriptors.
[ https://issues.apache.org/jira/browse/HADOOP-10490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966611#comment-13966611 ] Hudson commented on HADOOP-10490: - SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1754 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1754/]) HADOOP-10490. TestMapFile and TestBloomMapFile leak file descriptors. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586570) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestBloomMapFile.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java TestMapFile and TestBloomMapFile leak file descriptors. --- Key: HADOOP-10490 URL: https://issues.apache.org/jira/browse/HADOOP-10490 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor Fix For: 3.0.0, 2.4.1 Attachments: HADOOP-10490.1.patch, HADOOP-10490.2.patch Multiple tests in {{TestMapFile}} and {{TestBloomMapFile}} open files but don't close them. On Windows, the leaked file descriptors cause subsequent tests to fail, because file locks are still held while trying to delete the test data directory. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes
[ https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966610#comment-13966610 ] Hudson commented on HADOOP-10350: - SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1754 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1754/]) HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586433) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Reverse merged revision(s) 1586425 from hadoop/common/trunk: HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586432) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt HADOOP-10350. BUILDING.txt should mention openssl dependency required for hadoop-pipes (Vinayakumar B) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586425) * /hadoop/common/trunk/BUILDING.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt BUILDING.txt should mention openssl dependency required for hadoop-pipes Key: HADOOP-10350 URL: https://issues.apache.org/jira/browse/HADOOP-10350 Project: Hadoop Common Issue Type: Bug Reporter: Vinayakumar B Assignee: Vinayakumar B Fix For: 2.5.0 Attachments: HADOOP-10350.patch, HADOOP-10350.patch BUILDING.txt should mention openssl dependency required for hadoop-pipes -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10487) Racy code in UserGroupInformation#ensureInitialized()
[ https://issues.apache.org/jira/browse/HADOOP-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966628#comment-13966628 ] Daryn Sharp commented on HADOOP-10487: -- s/need for a final/need for a volatile/ Racy code in UserGroupInformation#ensureInitialized() - Key: HADOOP-10487 URL: https://issues.apache.org/jira/browse/HADOOP-10487 Project: Hadoop Common Issue Type: Bug Reporter: Haohui Mai Assignee: Haohui Mai UserGroupInformation#ensureInitialized() uses the double-check-locking pattern to reduce the synchronization cost: {code} private static void ensureInitialized() { if (conf == null) { synchronized(UserGroupInformation.class) { if (conf == null) { // someone might have beat us initialize(new Configuration(), false); } } } } {code} As [~tlipcon] pointed out in the original jira (HADOOP-9748). This pattern is incorrect. Please see more details in http://en.wikipedia.org/wiki/Double-checked_locking and http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html This jira proposes to use the static class holder pattern to do it correctly. -- This message was sent by Atlassian JIRA (v6.2#6252)
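For illustration, a minimal stand-alone sketch of the static holder idiom HADOOP-10487 proposes (the class and field names here are hypothetical, not the actual UserGroupInformation code):

```java
// Hypothetical sketch of the static holder idiom: the JVM guarantees a
// class's static initializer runs exactly once and its result is safely
// published, so no volatile field or double-checked locking is needed.
public class HolderDemo {
    public static class Conf {
        public final String value = "initialized";
    }

    // ConfHolder is not loaded until first reference; the JVM serializes
    // class initialization, so INSTANCE is created exactly once.
    private static class ConfHolder {
        static final Conf INSTANCE = new Conf();
    }

    public static Conf getConf() {
        return ConfHolder.INSTANCE;
    }
}
```

Unlike the broken double-checked locking above, correctness here falls out of the class-initialization rules in the Java memory model rather than hand-rolled synchronization.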
[jira] [Commented] (HADOOP-8826) Docs still refer to 0.20.205 as stable line
[ https://issues.apache.org/jira/browse/HADOOP-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1395#comment-1395 ] Jonathan Eagles commented on HADOOP-8826: - +1. Thanks, Mit. Committing to branch-0.23, branch-2.4, branch-2, and trunk. Docs still refer to 0.20.205 as stable line --- Key: HADOOP-8826 URL: https://issues.apache.org/jira/browse/HADOOP-8826 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3 Reporter: Robert Joseph Evans Assignee: Mit Desai Priority: Minor Labels: documentation Attachments: HADOOP-8826-b23.patch, HADOOP-8826.patch The main docs page still refers to 0.20.205 as the stable line, 1.0 is the stable line now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8826) Docs still refer to 0.20.205 as stable line
[ https://issues.apache.org/jira/browse/HADOOP-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1396#comment-1396 ] Hudson commented on HADOOP-8826: SUCCESS: Integrated in Hadoop-trunk-Commit #5501 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5501/]) HADOOP-8826. Docs still refer to 0.20.205 as stable line (Mit Desai via jeagles) (jeagles: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1586685) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YARN.apt.vm Docs still refer to 0.20.205 as stable line --- Key: HADOOP-8826 URL: https://issues.apache.org/jira/browse/HADOOP-8826 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3 Reporter: Robert Joseph Evans Assignee: Mit Desai Priority: Minor Labels: documentation Attachments: HADOOP-8826-b23.patch, HADOOP-8826.patch The main docs page still refers to 0.20.205 as the stable line, 1.0 is the stable line now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-8826) Docs still refer to 0.20.205 as stable line
[ https://issues.apache.org/jira/browse/HADOOP-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-8826: Resolution: Fixed Fix Version/s: 2.4.1 2.5.0 0.23.11 3.0.0 Status: Resolved (was: Patch Available) Docs still refer to 0.20.205 as stable line --- Key: HADOOP-8826 URL: https://issues.apache.org/jira/browse/HADOOP-8826 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3 Reporter: Robert Joseph Evans Assignee: Mit Desai Priority: Minor Labels: documentation Fix For: 3.0.0, 0.23.11, 2.5.0, 2.4.1 Attachments: HADOOP-8826-b23.patch, HADOOP-8826.patch The main docs page still refers to 0.20.205 as the stable line, 1.0 is the stable line now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-9219) coverage fixing for org.apache.hadoop.tools.rumen
[ https://issues.apache.org/jira/browse/HADOOP-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles resolved HADOOP-9219. - Resolution: Fixed Duping this issue to MAPREDUCE-3860 as per Andrey's comment. coverage fixing for org.apache.hadoop.tools.rumen - Key: HADOOP-9219 URL: https://issues.apache.org/jira/browse/HADOOP-9219 Project: Hadoop Common Issue Type: Test Components: tools Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: HADOOP-9219-trunk-a.patch, HADOOP-9219-trunk-b.patch, HADOOP-9219-trunk.patch Original Estimate: 168h Remaining Estimate: 168h coverage fixing for org.apache.hadoop.tools.rumen HADOOP-9219-trunk.patch for trunk, branch-2 and branch-0.23 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10492) Help Commands needs change after deprecation
[ https://issues.apache.org/jira/browse/HADOOP-10492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Raja Nagendra Kumar updated HADOOP-10492: - Description: As hadoop dfs is deprecated, the help should show usage with HDFS e.g in the following command it still refers to Usage: hadoop fs [generic options] D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009hdfs dfs Usage: hadoop fs [generic options] [-appendToFile localsrc ... dst] [-cat [-ignoreCrc] src ...] [-checksum src ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] localsrc ... dst] [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst] [-count [-q] path ...] [-cp [-f] [-p] src ... dst] [-createSnapshot snapshotDir [snapshotName]] [-deleteSnapshot snapshotDir snapshotName] [-df [-h] [path ...]] [-du [-s] [-h] path ...] was: As hadoop fs is deprecated, the help should show usage with HDFS e.g in the following command it still refers to Usage: hadoop fs [generic options] D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009hdfs dfs Usage: hadoop fs [generic options] [-appendToFile localsrc ... dst] [-cat [-ignoreCrc] src ...] [-checksum src ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] localsrc ... dst] [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst] [-count [-q] path ...] [-cp [-f] [-p] src ... dst] [-createSnapshot snapshotDir [snapshotName]] [-deleteSnapshot snapshotDir snapshotName] [-df [-h] [path ...]] [-du [-s] [-h] path ...] 
Help Commands needs change after deprecation Key: HADOOP-10492 URL: https://issues.apache.org/jira/browse/HADOOP-10492 Project: Hadoop Common Issue Type: Bug Reporter: Raja Nagendra Kumar As hadoop dfs is deprecated, the help should show usage with HDFS e.g in the following command it still refers to Usage: hadoop fs [generic options] D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009hdfs dfs Usage: hadoop fs [generic options] [-appendToFile localsrc ... dst] [-cat [-ignoreCrc] src ...] [-checksum src ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] localsrc ... dst] [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst] [-count [-q] path ...] [-cp [-f] [-p] src ... dst] [-createSnapshot snapshotDir [snapshotName]] [-deleteSnapshot snapshotDir snapshotName] [-df [-h] [path ...]] [-du [-s] [-h] path ...] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors
[ https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966849#comment-13966849 ] Colin Patrick McCabe commented on HADOOP-8087: -- Wow, this is an oldie (and not a goodie). I agree that we should fix this in 2.x. It should be a compatible change. Paths that start with a double slash cause No filesystem for scheme: null errors -- Key: HADOOP-8087 URL: https://issues.apache.org/jira/browse/HADOOP-8087 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Daryn Sharp Assignee: Colin Patrick McCabe Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way. While it should translate to the directory {{$fs.default.name/dir/path}}, it instead discards the {{//dir}} and returns {{$fs.default.name/path}}. The problem is {{Path}} is trying to parse an authority even when a scheme is not present. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors
[ https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-8087: - Target Version/s: 2.5.0 (was: 0.23.3, 2.0.0-alpha, 3.0.0) Paths that start with a double slash cause No filesystem for scheme: null errors -- Key: HADOOP-8087 URL: https://issues.apache.org/jira/browse/HADOOP-8087 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Daryn Sharp Assignee: Colin Patrick McCabe Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way. While it should translate to the directory {{$fs.default.name/dir/path}}, it instead discards the {{//dir}} and returns {{$fs.default.name/path}}. The problem is {{Path}} is trying to parse an authority even when a scheme is not present. -- This message was sent by Atlassian JIRA (v6.2#6252)
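The surprising behavior described above is inherited from generic URI parsing: per RFC 3986, a reference beginning with {{//}} is a network-path reference, so the first segment is consumed as an authority even when no scheme is present. A small stand-alone illustration using {{java.net.URI}} (not Hadoop's Path class itself):

```java
import java.net.URI;

public class DoubleSlashDemo {
    public static void main(String[] args) {
        // With no scheme, "//dir/path" is parsed as authority "dir"
        // plus path "/path" -- the leading "//dir" is swallowed,
        // which is exactly the misparse that bites Hadoop's Path here.
        URI u = URI.create("//dir/path");
        System.out.println(u.getAuthority()); // dir
        System.out.println(u.getPath());      // /path
    }
}
```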
[jira] [Updated] (HADOOP-10430) KeyProvider Metadata should have an optional description, there should be a method to retrieve the metadata from all keys
[ https://issues.apache.org/jira/browse/HADOOP-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10430: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) committed to trunk. KeyProvider Metadata should have an optional description, there should be a method to retrieve the metadata from all keys - Key: HADOOP-10430 URL: https://issues.apache.org/jira/browse/HADOOP-10430 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10430.patch, HADOOP-10430.patch, HADOOP-10430.patch, HADOOP-10430.patch Being able to attach an optional description (and show it when displaying metadata) will enable giving some context on the keys. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10431) Change visibility of KeyStore.Options getter methods to public
[ https://issues.apache.org/jira/browse/HADOOP-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10431: Summary: Change visibility of KeyStore.Options getter methods to public (was: Change visibility of KeyStore KeyVersion/Metadata/Options constructor and methods to public) Change visibility of KeyStore.Options getter methods to public -- Key: HADOOP-10431 URL: https://issues.apache.org/jira/browse/HADOOP-10431 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: HADOOP-10431.patch, HADOOP-10431.patch, HADOOP-10431.patch Making KeyVersion/Metadata/Options constructor and methods public will facilitate {{KeyProvider}} implementations to use those classes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10431) Change visibility of KeyStore.Options getter methods to public
[ https://issues.apache.org/jira/browse/HADOOP-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10431: Description: Making Options getter methods public will enable {{KeyProvider}} implementations to use those classes. (was: Making KeyVersion/Metadata/Options constructor and methods public will facilitate {{KeyProvider}} implementations to use those classes.) Change visibility of KeyStore.Options getter methods to public -- Key: HADOOP-10431 URL: https://issues.apache.org/jira/browse/HADOOP-10431 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10431.patch, HADOOP-10431.patch, HADOOP-10431.patch Making Options getter methods public will enable {{KeyProvider}} implementations to use those classes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: HADOOP-10433.patch updated patch. Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: COMBO.patch, HADOOP-10433-v2.patch, HADOOP-10433-v3.patch, HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, KMS-ALL-PATCHES-v2.patch, KMS-ALL-PATCHES-v3.patch, KMS-ALL-PATCHES.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. 
Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/Load-Balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
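The TTL caching mentioned in the proposal could look roughly like the following sketch; the class and method names are illustrative assumptions, not the actual KMS implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative-only sketch of TTL-based key caching in front of a slower
// KeyProvider backend; names here are hypothetical, not real KMS classes.
public class TtlKeyCache {
    private static final class Entry {
        final byte[] material;
        final long expiresAt;
        Entry(byte[] material, long expiresAt) {
            this.material = material;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlKeyCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Serve key material from the cache; consult the underlying provider
    // (the loader) only when the entry is missing or past its TTL.
    public byte[] get(String keyName, Function<String, byte[]> loader) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(keyName);
        if (e == null || e.expiresAt <= now) {
            e = new Entry(loader.apply(keyName), now + ttlMillis);
            cache.put(keyName, e);
        }
        return e.material;
    }
}
```

Within the TTL, repeated lookups for the same key name never touch the backend, which is the load-shedding property the proposal is after.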
[jira] [Commented] (HADOOP-10431) Change visibility of KeyStore.Options getter methods to public
[ https://issues.apache.org/jira/browse/HADOOP-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966913#comment-13966913 ] Hudson commented on HADOOP-10431: - SUCCESS: Integrated in Hadoop-trunk-Commit #5504 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5504/]) HADOOP-10431. Change visibility of KeyStore.Options getter methods to public. (tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1586732) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java Change visibility of KeyStore.Options getter methods to public -- Key: HADOOP-10431 URL: https://issues.apache.org/jira/browse/HADOOP-10431 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10431.patch, HADOOP-10431.patch, HADOOP-10431.patch Making Options getter methods public will enable {{KeyProvider}} implementations to use those classes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10430) KeyProvider Metadata should have an optional description, there should be a method to retrieve the metadata from all keys
[ https://issues.apache.org/jira/browse/HADOOP-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966912#comment-13966912 ] Hudson commented on HADOOP-10430: - SUCCESS: Integrated in Hadoop-trunk-Commit #5504 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5504/]) HADOOP-10430. KeyProvider Metadata should have an optional description, there should be a method to retrieve the metadata from all keys. (tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1586730) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/UserProvider.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyShell.java KeyProvider Metadata should have an optional description, there should be a method to retrieve the metadata from all keys - Key: HADOOP-10430 URL: https://issues.apache.org/jira/browse/HADOOP-10430 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10430.patch, HADOOP-10430.patch, HADOOP-10430.patch, HADOOP-10430.patch Being able to attach an optional description (and show it when displaying metadata) will enable giving some context on the keys. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10431) Change visibility of KeyStore.Options getter methods to public
[ https://issues.apache.org/jira/browse/HADOOP-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved HADOOP-10431. - Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed committed to trunk. Change visibility of KeyStore.Options getter methods to public -- Key: HADOOP-10431 URL: https://issues.apache.org/jira/browse/HADOOP-10431 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 3.0.0 Attachments: HADOOP-10431.patch, HADOOP-10431.patch, HADOOP-10431.patch Making Options getter methods public will enable {{KeyProvider}} implementations to use those classes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966976#comment-13966976 ] Alejandro Abdelnur commented on HADOOP-10433: - [~lmccay] the KMS server is meant to be a proxy to any KeyProvider implementation with the following benefits: * Single entry point from Hadoop to the KeyProvider implementation * Supports Hadoop Kerberos Authentication, making secure integration easier * Provides caching, to support load from Hadoop services and jobs Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: COMBO.patch, HADOOP-10433-v2.patch, HADOOP-10433-v3.patch, HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, KMS-ALL-PATCHES-v2.patch, KMS-ALL-PATCHES-v3.patch, KMS-ALL-PATCHES.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. 
Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/Load-Balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966988#comment-13966988 ] Hadoop QA commented on HADOOP-10433: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12639842/HADOOP-10433.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files. {color:red}-1 javac{color}. The applied patch generated 1289 javac compiler warnings (more than the trunk's current 1288 warnings). {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms hadoop-dist. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3787//testReport/ Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/3787//artifact/trunk/patchprocess/diffJavacWarnings.txt Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3787//console This message is automatically generated. 
Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: COMBO.patch, HADOOP-10433-v2.patch, HADOOP-10433-v3.patch, HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, KMS-ALL-PATCHES-v2.patch, KMS-ALL-PATCHES-v3.patch, KMS-ALL-PATCHES.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. 
Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/Load-Balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967006#comment-13967006 ] Larry McCay commented on HADOOP-10433: -- [~tucu00] - that sounds like the right direction to me! Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: COMBO.patch, HADOOP-10433-v2.patch, HADOOP-10433-v3.patch, HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, KMS-ALL-PATCHES-v2.patch, KMS-ALL-PATCHES-v3.patch, KMS-ALL-PATCHES.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. 
Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/Load-Balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: KMS-ALL-PATCHES-v2.patch) Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: HADOOP-10433.patch, HADOOP-10433.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. 
Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/Load-Balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: COMBO.patch) Key Management Server based on KeyProvider API -- Key: HADOOP-10433 URL: https://issues.apache.org/jira/browse/HADOOP-10433 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Attachments: HADOOP-10433.patch, HADOOP-10433.patch, KMS-doc.pdf (from HDFS-6134 proposal) Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying KMS. It provides an interface that works with existing Hadoop security components (authentication, confidentiality). Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 and HADOOP-10177. Hadoop KMS will provide an additional implementation of the Hadoop KeyProvider class. This implementation will be a client-server implementation. The client-server protocol will be secure: * Kerberos HTTP SPNEGO (authentication) * HTTPS for transport (confidentiality and integrity) * Hadoop ACLs (authorization) The Hadoop KMS implementation will not provide additional ACL to access encrypted files. For sophisticated access control requirements, HDFS ACLs (HDFS-4685) should be used. Basic key administration will be supported by the Hadoop KMS via the, already available, Hadoop KeyShell command line tool. There are minor changes that must be done in Hadoop KeyProvider functionality: The KeyProvider contract, and the existing implementations, must be thread-safe. KeyProvider API should have an API to generate the key material internally. JavaKeyStoreProvider should use, if present, a password provided via configuration. KeyProvider Option and Metadata should include a label (for easier cross-referencing). To avoid overloading the underlying KeyProvider implementation, the Hadoop KMS will cache keys using a TTL policy. 
Scalability and High Availability of the Hadoop KMS can be achieved by running multiple instances behind a VIP/load balancer. For High Availability, the underlying KeyProvider implementation used by the Hadoop KMS must itself be Highly Available. -- This message was sent by Atlassian JIRA (v6.2#6252)
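The TTL-based key caching mentioned in the description can be sketched as follows. This is a minimal, hypothetical illustration (the `TtlKeyCache` class and its names are invented, not the actual Hadoop KMS code), showing how a server-side cache might avoid hitting the underlying KeyProvider on every request:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a TTL key cache; not the actual Hadoop KMS implementation.
public class TtlKeyCache {
    private static class Entry {
        final byte[] material;
        final long expiresAtMillis;
        Entry(byte[] material, long expiresAtMillis) {
            this.material = material;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlKeyCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Returns the cached key material, or loads it via the supplied loader
    // (standing in for a call to the underlying KeyProvider) when the entry
    // is absent or its TTL has expired.
    public byte[] get(String keyName, Function<String, byte[]> loader) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(keyName);
        if (e == null || e.expiresAtMillis <= now) {
            byte[] material = loader.apply(keyName);
            cache.put(keyName, new Entry(material, now + ttlMillis));
            return material;
        }
        return e.material;
    }
}
```

Within the TTL window, repeated requests for the same key name are served from memory, so the backing KeyProvider is consulted at most once per key per TTL period.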
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: HADOOP-10433.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: HADOOP-10433-v2.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: HADOOP-10433.patch patch fixing javac warning. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: HADOOP-10433-v3.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: KMS-ALL-PATCHES.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: HADOOP-10433.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-10433: Attachment: (was: KMS-ALL-PATCHES-v3.patch) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API
[ https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967103#comment-13967103 ] Hadoop QA commented on HADOOP-10433: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12639861/HADOOP-10433.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms hadoop-dist. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3788//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3788//console This message is automatically generated. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10322) Add ability to read principal names from a keytab
[ https://issues.apache.org/jira/browse/HADOOP-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benoy Antony updated HADOOP-10322: -- Attachment: HADOOP-10322.patch Attaching the patch with only getMatchingPrincipalNames as public. Add ability to read principal names from a keytab - Key: HADOOP-10322 URL: https://issues.apache.org/jira/browse/HADOOP-10322 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 2.2.0 Reporter: Benoy Antony Assignee: Benoy Antony Attachments: HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, HADOOP-10322.patch, javadoc-warnings.txt It will be useful to have an ability to enumerate the principals stored in a keytab. -- This message was sent by Atlassian JIRA (v6.2#6252)
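The patch above exposes a `getMatchingPrincipalNames` method for selecting principals from a keytab. The name-matching step it implies can be sketched as follows; this is a hypothetical illustration only (the `PrincipalMatcher` class is invented, and actual keytab parsing is omitted):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of the name-matching step behind an API like
// getMatchingPrincipalNames; real keytab reading is out of scope here.
public class PrincipalMatcher {
    // Selects the principal names that fully match a regex, e.g. "HTTP/.*"
    // to pick out all HTTP service principals.
    public static List<String> matchingPrincipals(List<String> principals, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> out = new ArrayList<>();
        for (String name : principals) {
            if (p.matcher(name).matches()) {
                out.add(name);
            }
        }
        return out;
    }
}
```

In the real feature, the input list would come from enumerating the entries of a keytab file rather than being supplied directly.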
[jira] [Commented] (HADOOP-10322) Add ability to read principal names from a keytab
[ https://issues.apache.org/jira/browse/HADOOP-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967180#comment-13967180 ] Hadoop QA commented on HADOOP-10322: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12639875/HADOOP-10322.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-auth. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3789//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3789//console This message is automatically generated. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10389) Native RPCv9 client
[ https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-10389: -- Attachment: HADOOP-10389.004.patch This patch updates the code to use libuv 0.12 rather than libuv 0.11; the APIs changed a bit. Native RPCv9 client --- Key: HADOOP-10389 URL: https://issues.apache.org/jira/browse/HADOOP-10389 Project: Hadoop Common Issue Type: Sub-task Affects Versions: HADOOP-10388 Reporter: Binglin Chang Assignee: Colin Patrick McCabe Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, HADOOP-10389.004.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-7601) Move common fs implementations to a hadoop-fs module
[ https://issues.apache.org/jira/browse/HADOOP-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967281#comment-13967281 ] jay vyas commented on HADOOP-7601: -- What is the status of this? I think it relates to our JIRA to move test implementations into fs/*/ packages (HADOOP-10461), so that test implementations are separated from the HCFS test utilities. Move common fs implementations to a hadoop-fs module Key: HADOOP-7601 URL: https://issues.apache.org/jira/browse/HADOOP-7601 Project: Hadoop Common Issue Type: Improvement Components: fs Reporter: Luke Lu Many of the hadoop-common dependencies come from the fs implementations. We have more fs implementations on the way (Ceph, LAFS, etc.). I propose that we move all the fs implementations to a hadoop-fs module under hadoop-common-project. -- This message was sent by Atlassian JIRA (v6.2#6252)