[jira] [Created] (HADOOP-16522) Encrypt buffered data on disk

2019-08-20 Thread Mike Yoder (Jira)
Mike Yoder created HADOOP-16522:
---

 Summary: Encrypt buffered data on disk
 Key: HADOOP-16522
 URL: https://issues.apache.org/jira/browse/HADOOP-16522
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Mike Yoder


This came out of discussions with [~ste...@apache.org], [~irashid] and 
[~vanzin].

Imran:
{quote}
Steve pointed out to me that the s3 libraries buffer data to disk.  This is 
pretty much arbitrary user data.
 
Spark has some settings to encrypt data that it writes to local disk (shuffle 
files etc.).  Spark never has control of what arbitrary libraries are doing 
with data, so it doesn't guarantee that nothing ever ends up on disk -- but to 
the end user, they'd view those s3 libraries as part of the same system.  So if 
a user is turning on Spark's local-disk encryption, they would be pretty 
surprised to find out that the data they're writing to S3 ends up on local 
disk, unencrypted.
{quote}

Me:
{quote}
... Regardless, this is still an s3a bug.
{quote}
 
Steve:
{quote}
I disagree

we need to save intermediate data "somewhere" - people get a choice of disk or 
memory.

encrypting data on disk was never considered necessary, on the basis that 
anyone malicious with read access under your home dir could lift the hadoop 
token file which YARN provides and so have full R/W access to all your data in 
the cluster filesystems until those tokens expire. If you don't have a good 
story there then the buffering of a few tens of MB of data during upload is a 
detail. 

There's also the extra complication that when uploading file blocks, we pass in 
the filename to the AWS SDK and let it do the uploads, rather than create the 
output stream; the SDK code has, in the past, been better at recovering from 
failures there than output stream + mark and reset. That was a while back; 
things may have changed. But it is why I'd prefer any encrypted temp store to 
be a new buffer option, rather than just silently changing the "disk" buffer 
option to encrypt.

Be interesting to see where else in the code this needs to be addressed; I'd 
recommend looking at all uses of org.apache.hadoop.fs.LocalDirAllocator and 
making sure that Spark YARN launch+execute didn't use this indirectly

JIRAs under HADOOP-15620 welcome; do look at the test policy in the hadoop-aws 
docs; we'd need a new subclass of AbstractSTestS3AHugeFiles for integration 
testing a different buffering option, plus whatever unit tests the encryption 
itself needed.
{quote}

Me:
{quote}
I get it. But ... there are a couple of subtleties here. One is that the tokens 
expire, while the data is still data. (This might or might not matter, 
depending on the threat...) Another is that customer policies in this area do 
not always align well with common sense. There are blanket policies like "data 
shall never be written to disk unencrypted", which we have come up against and 
would like to be able to honestly say we satisfy. We have encrypted MR shuffle 
as one historical example, and encrypted Impala memory spills as another.
{quote}
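
For illustration, here is a minimal sketch of what a separate encrypted disk 
buffer could look like: data is written through a javax.crypto stream to the 
temp file under a random per-block AES key that lives only in memory, so 
plaintext never reaches disk. The class name and wiring below are hypothetical, 
not existing s3a code.

{code:java}
// Hypothetical sketch only - not s3a code. Buffered bytes are encrypted
// with a random AES/CTR key that is never persisted; dropping the key and
// deleting the file renders the on-disk bytes unreadable.
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.*;
import java.nio.file.Files;
import java.security.SecureRandom;

public class EncryptedDiskBuffer implements Closeable {
  private final File file;
  private final SecretKey key;            // held in memory only
  private final byte[] iv = new byte[16];

  public EncryptedDiskBuffer(File dir) throws Exception {
    file = File.createTempFile("s3ablock", ".bin", dir);
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    key = kg.generateKey();
    new SecureRandom().nextBytes(iv);
  }

  /** Stream that encrypts on the way to disk. */
  public OutputStream openForWrite() throws Exception {
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
    return new CipherOutputStream(new FileOutputStream(file), c);
  }

  /** Stream that decrypts on the way back out, e.g. to feed an upload. */
  public InputStream openForRead() throws Exception {
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
    return new CipherInputStream(new FileInputStream(file), c);
  }

  @Override
  public void close() throws IOException {
    Files.deleteIfExists(file.toPath());
  }
}
{code}

Note how this interacts with Steve's point above: the SDK is currently handed 
the block file's name for upload, whereas a scheme like this has to upload via 
the decrypting InputStream - one more reason to surface it as a new buffer 
option rather than silently changing "disk".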







[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-28 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15844190#comment-15844190
 ] 

Mike Yoder commented on HADOOP-13433:
-

This is a fascinating issue. Nice work catching and fixing it. Has anyone 
reported this to Oracle?

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at 

[jira] [Updated] (HADOOP-13864) KMS should not require truststore password

2016-12-03 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13864:

Status: Patch Available  (was: Open)

> KMS should not require truststore password
> --
>
> Key: HADOOP-13864
> URL: https://issues.apache.org/jira/browse/HADOOP-13864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13864.000.patch
>
>
> Trust store passwords are actually not required for read operations. They're 
> only needed for writing to the trust store; in reads they serve as an 
> integrity check. Normal Hadoop ssl-client.xml files don't require the 
> truststore password, but when the KMS is used it's required. 
> If I don't specify a hadoop trust store password I get:
> {noformat}
> Failed to start namenode.
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
>   at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
>   at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
>   at 
> org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426)
>   ... 14 more
> {noformat}
> Note that this _does not_ happen to the namenode when the kms isn't in use.






[jira] [Updated] (HADOOP-13864) KMS should not require truststore password

2016-12-03 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13864:

Attachment: HADOOP-13864.000.patch

> KMS should not require truststore password
> --
>
> Key: HADOOP-13864
> URL: https://issues.apache.org/jira/browse/HADOOP-13864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13864.000.patch
>
>
> Trust store passwords are actually not required for read operations. They're 
> only needed for writing to the trust store; in reads they serve as an 
> integrity check. Normal Hadoop ssl-client.xml files don't require the 
> truststore password, but when the KMS is used it's required. 
> If I don't specify a hadoop trust store password I get:
> {noformat}
> Failed to start namenode.
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
>   at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
>   at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
>   at 
> org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426)
>   ... 14 more
> {noformat}
> Note that this _does not_ happen to the namenode when the kms isn't in use.






[jira] [Commented] (HADOOP-13864) KMS should not require truststore password

2016-12-03 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15719053#comment-15719053
 ] 

Mike Yoder commented on HADOOP-13864:
-

To the question of "you're not requiring a password now, isn't that a bad (less 
secure) thing?" I reply:

My first argument is one of symmetry. For C/C++/Python programs (anything 
using OpenSSL), a trust store is a plain-text file containing certificates. No 
password is required, and indeed there is not even a way to password-protect 
it. So this "protection" in Java has never been thought worthy of a feature in 
OpenSSL. Note that since all our certificates need to be in both PEM and JKS 
format, the passwordless trust stores will continue to exist in PEM format 
regardless of what we do in Java programs.

My second argument is that the truststore password is worthless anyway. It 
could in theory be useful in the limited world of keytool generating a 
truststore, but when you actually go to use that truststore it all falls 
apart. The reason is that Hadoop clients need the trust store in order to 
trust the server they're talking to. Since the client needs it, the client has 
to be able to fully use the trust store. If the trust store password is given 
out, then the client (anyone who connects to the Hadoop cluster, that is) 
knows the trust store password. There is no way around this: even if we try to 
encrypt that password, we would have to give the client the decryption key. 
Even if we tried to obfuscate that password, we'd have to unobfuscate it 
before using it.

The other thing to consider here is that customers frequently reuse the 
keystore password as the trust store password. This is dumb, but it happens, 
and now the password is spread far and wide. The "benefit" is that the 
integrity of the truststore is cryptographically verified. But since 
essentially anyone can learn that password, anyone could write to the 
truststore, so... who cares?

My third argument is that the global trust store on the system has a 
well-known password of "changeit" (even though changing it is pointless), and 
no software ever accesses the global trust store using this password - because 
doing so would provide no benefit.
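
As a quick demonstration of the read-side claim (a sketch; the truststore path 
below is hypothetical), the JDK will load a JKS trust store with a null 
password and serve up every certificate - the password only gates the 
integrity check:

{code:java}
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Enumeration;

public class TruststoreReadDemo {
  public static void main(String[] args) throws Exception {
    KeyStore ts = KeyStore.getInstance("JKS");
    // A null password skips the integrity check but still loads all entries.
    try (FileInputStream in = new FileInputStream("/path/to/truststore.jks")) {
      ts.load(in, null);
    }
    for (Enumeration<String> e = ts.aliases(); e.hasMoreElements(); ) {
      String alias = e.nextElement();
      Certificate cert = ts.getCertificate(alias);
      System.out.println(alias + " -> " + cert.getType());
    }
  }
}
{code}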


> KMS should not require truststore password
> --
>
> Key: HADOOP-13864
> URL: https://issues.apache.org/jira/browse/HADOOP-13864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>
> Trust store passwords are actually not required for read operations. They're 
> only needed for writing to the trust store; in reads they serve as an 
> integrity check. Normal Hadoop ssl-client.xml files don't require the 
> truststore password, but when the KMS is used it's required. 
> If I don't specify a hadoop trust store password I get:
> {noformat}
> Failed to start namenode.
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
>   at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
>   at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
>   at 
> org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.client.truststore.password' has not been set in the ssl configuration 
> file.
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
>   at 

[jira] [Created] (HADOOP-13864) KMS should not require truststore password

2016-12-03 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13864:
---

 Summary: KMS should not require truststore password
 Key: HADOOP-13864
 URL: https://issues.apache.org/jira/browse/HADOOP-13864
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Reporter: Mike Yoder
Assignee: Mike Yoder


Trust store passwords are actually not required for read operations. They're 
only needed for writing to the trust store; in reads they serve as an integrity 
check. Normal Hadoop ssl-client.xml files don't require the truststore password, 
but when the KMS is used it's required. 

If I don't specify a hadoop trust store password I get:

{noformat}
Failed to start namenode.
java.io.IOException: java.security.GeneralSecurityException: The property 
'ssl.client.truststore.password' has not been set in the ssl configuration file.
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
at 
org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
at 
org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
Caused by: java.security.GeneralSecurityException: The property 
'ssl.client.truststore.password' has not been set in the ssl configuration file.
at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426)
... 14 more
{noformat}

Note that this _does not_ happen to the namenode when the kms isn't in use.
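
The general shape of the fix, sketched under the assumption that an unset 
password is simply passed through as null when loading the store (the names 
here are illustrative - see the attached patch for the real change):

{code:java}
// Illustrative sketch of the idea behind the patch, not the patch itself.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

public final class TruststoreLoader {
  static KeyStore loadTruststore(String location, String password)
      throws Exception {
    KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
    // No password configured? Load with null: reads still work; we only
    // forgo the integrity check that the password would have provided.
    char[] pw = (password == null || password.isEmpty())
        ? null : password.toCharArray();
    try (InputStream in = Files.newInputStream(Paths.get(location))) {
      ts.load(in, pw);
    }
    return ts;
  }
}
{code}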







[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Attachment: HADOOP-13732.002.patch

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch, HADOOP-13732.002.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Status: Patch Available  (was: Open)

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch, HADOOP-13732.002.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Status: Open  (was: Patch Available)

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch, HADOOP-13732.002.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Commented] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586420#comment-15586420
 ] 

Mike Yoder commented on HADOOP-13732:
-

I'd have to make a dependency-check specific note in BUILDING.txt, which seems 
a little awkward. (The normal build isn't affected, of course.) I'll see what I 
can do. My only alternative idea is a comment around this plugin in pom.xml. I 
do agree it needs to be documented somewhere.

* I don't even think that maven is _available_ on RHEL 6.6
* My RHEL 7.2 machine looks like it would use version 3.0.5-16
* My Ubuntu 16.04 machine is using 3.3.9
* Looks like Ubuntu 14.04 uses 3.0.5-1

The maven release history page is at https://maven.apache.org/docs/history.html



> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Status: Patch Available  (was: Open)

Ping [~andrew.wang]

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Attachment: HADOOP-13732.001.patch

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.






[jira] [Created] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13732:
---

 Summary: Upgrade OWASP dependency-check plugin version
 Key: HADOOP-13732
 URL: https://issues.apache.org/jira/browse/HADOOP-13732
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Mike Yoder
Assignee: Mike Yoder
Priority: Minor


For reasons I don't fully understand, the current version (1.3.6) of the OWASP 
dependency-check plugin produces an essentially empty report on trunk (3.0.0).  
After some research, it appears that this plugin has undergone significant work 
in the latest version, 1.4.3. Upgrading to this version produces the expected 
full report.

The only gotcha is that a new-ish version of maven is required. I'm using 
3.2.2; I know that 3.0.x fails with a strange error.

This plugin was introduced in HADOOP-13198.
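
For reference, wiring the upgraded plugin into a pom looks roughly like this 
(a sketch: the coordinates are the plugin's published ones, but the exact 
placement and configuration in Hadoop's poms may differ - see the attached 
patch):

{code:xml}
<!-- Sketch of the version bump described above. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.4.3</version>
  <executions>
    <execution>
      <goals>
        <goal>aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}

With that in place, something like {{mvn dependency-check:aggregate}} produces 
the report - using Maven 3.2.x or later, per the note above.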






[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299176#comment-15299176
 ] 

Mike Yoder commented on HADOOP-13198:
-

Another thing to consider with a precommit hook is that the data that 
dependency-check uses for CVEs is, quite literally, the CVE database. If 
something pops up there, the results of dependency-check will change shortly 
thereafter - potentially blocking innocent submittals because things suddenly 
look worse.

To get serious about things, we'd want to somehow lock down the ability to add 
new dependencies. IIRC Solr does something with jar signing.

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin it's pretty easy to drop in.






[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299122#comment-15299122
 ] 

Mike Yoder commented on HADOOP-13198:
-

(pre|post)commit integration seems rather excessive to me; hopefully third 
party libraries change slowly.  Occasional runs (monthly? per release?) seem 
fine to me.

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin it's pretty easy to drop in.






[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13198:

Status: Patch Available  (was: Open)

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin it's pretty easy to drop in.






[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13198:

Description: 
OWASP's Dependency-Check is a utility that identifies project
dependencies and checks if there are any known, publicly disclosed,
vulnerabilities.

See https://www.owasp.org/index.php/OWASP_Dependency_Check

This is very useful to stay on top of known vulnerabilities in third party 
jars. Since it's a maven plugin it's pretty easy to drop in.

  was:
OWASP's Dependency-Check is a utility that identifies project
dependencies and checks if there are any known, publicly disclosed,
vulnerabilities.

See https://www.owasp.org/index.php/OWASP_Dependency_Check

This is very useful to stay on top of known vulnerabilities in third party 
jars. Since it's a maven plugin 


> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin it's pretty easy to drop in.






[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298911#comment-15298911
 ] 

Mike Yoder commented on HADOOP-13198:
-

Pinging [~andrew.wang], [~atm], and [~ste...@apache.org]

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin it's pretty easy to drop in.






[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13198:

Attachment: HADOOP-13198.001.patch

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13198.001.patch, 
> hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin 






[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13198:

Attachment: hadoop-all-dependency-check-report.html

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: hadoop-all-dependency-check-report.html
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed,
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a maven plugin 






[jira] [Created] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-24 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13198:
---

 Summary: Add support for OWASP's dependency-check
 Key: HADOOP-13198
 URL: https://issues.apache.org/jira/browse/HADOOP-13198
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Mike Yoder
Assignee: Mike Yoder
Priority: Minor


OWASP's Dependency-Check is a utility that identifies project
dependencies and checks if there are any known, publicly disclosed,
vulnerabilities.

See https://www.owasp.org/index.php/OWASP_Dependency_Check

This is very useful to stay on top of known vulnerabilities in third party 
jars. Since it's a maven plugin 






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-18 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.004.branch-2.patch

Attaching patch 4 for branch-2. Sorry for the trouble.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch, 
> HADOOP-13157.003.branch-2.8.patch, HADOOP-13157.004.branch-2.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.003.branch-2.8.patch

Attaching HADOOP-13157.003.branch-2.8.patch for branch-2.8. Looks like I ran 
into Robert's environment removal change.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch, 
> HADOOP-13157.003.branch-2.8.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287645#comment-15287645
 ] 

Mike Yoder commented on HADOOP-13157:
-

Added this in patch 2.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.002.patch

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Status: Patch Available  (was: Open)

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Status: Open  (was: Patch Available)

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287470#comment-15287470
 ] 

Mike Yoder commented on HADOOP-13157:
-

Failure was
{noformat}
TestIPC.testConnectionIdleTimeouts:941 expected:<7> but was:<4>
{noformat}
Hard to see how this code has anything to do with that...

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286884#comment-15286884
 ] 

Mike Yoder commented on HADOOP-13157:
-

[~andrew.wang] - I take it that there's something amiss with the build/test 
infrastructure? My failures are all:
{noformat}
Detected JDK Version: 1.7.0-95 is not in the allowed range [1.8,).
{noformat}


> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286877#comment-15286877
 ] 

Mike Yoder commented on HADOOP-13157:
-

OK, I agree your title is much better.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13157) Follow-on improvements to HADOOP-12942

2016-05-16 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Status: Patch Available  (was: Open)

{quote}
File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
Line 147:
Could this be a static helper?
{quote}
Changed. Now the env var and filename are passed in.

{quote}
Line 161: new
The javadoc says it returns null in this situation. This is also a difference 
from the implementation in the AbstractJKSP. Intentional?
{quote}
This line came in as a part of 
https://issues.apache.org/jira/browse/HADOOP-10224. With that work, the 
JavaKeyStoreProvider was given a more sophisticated old/new corruption 
prevention dance that the AbstractJKSP lacks. I'd lean towards leaving it alone 
and using this version for both.

{quote}
Line 175: private void locateKeystore() throws IOException {
static helper? for the construct*Path methods too?
{quote}
locateKeystore hits a bunch of member variables: password, path, keyStore, fs, 
permissions... so I'd rather not make it static. The construct*Path() methods - 
sure, changed; a sketch of the distinction follows.
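A minimal sketch of that distinction, with illustrative names only (the class 
and the "_OLD" suffix are placeholders, not the patch):

{code:java}
import java.net.URI;

public final class KeystorePathsSketch {
  private KeystorePathsSketch() {}

  // A pure function of its argument, so it can live as a static helper.
  public static String constructOldPath(URI uri) {
    return uri.getPath() + "_OLD";
  }

  // locateKeystore(), by contrast, reads and writes instance state
  // (password, path, keyStore, fs, permissions), so it stays a member.
}
{code}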

{quote}
File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
Line 50: @VisibleForTesting public static final String NO_VALID_PROVIDERS =
FYI for the future, our coding style is to put annotations on their own 
separate line.
File 
{quote}
Done. Can this rule be added to the checkstyle rules?
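For reference, the requested style, with a placeholder message string (the 
real constant's text may differ):

{code:java}
import com.google.common.annotations.VisibleForTesting;

public class KeyShellStyleExample {
  // Annotation on its own line, per the Hadoop convention noted above.
  @VisibleForTesting
  public static final String NO_VALID_PROVIDERS =
      "There are no valid (non-transient) providers configured.";
}
{code}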

{quote}
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
Line 326: private char[] locatePassword() throws IOException {
this method looks very similar to the one in JavaKeyStoreProvider, except the 
env var it looks for is different, is there potential for code reuse?
{quote}
Yes. Moved to ProviderUtils along with some other stuff.
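Roughly the shape of the shared helper after the move: the env var and 
password-file name become parameters, so both keystore providers can call the 
same code. A simplified sketch only; the real ProviderUtils code also resolves 
the password file through the Hadoop Configuration.

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class ProviderUtilsSketch {
  private ProviderUtilsSketch() {}

  // Each provider passes in its own env var and password-file name;
  // everything else is identical, which is the reuse being asked for.
  public static char[] locatePassword(String envVar, String pwFile)
      throws IOException {
    String fromEnv = System.getenv(envVar);
    if (fromEnv != null) {
      return fromEnv.toCharArray();
    }
    if (pwFile != null) {
      Path p = Paths.get(pwFile);
      if (Files.exists(p)) {
        return new String(Files.readAllBytes(p), StandardCharsets.UTF_8)
            .trim().toCharArray();
      }
    }
    return null;  // caller decides whether to fall back to a default
  }
}
{code}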

{quote}
Line 394: " o In the environment variable " +
Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
syntax.
{quote}
Fixed, both here and JavaKeyStoreProvider

{quote}
Line 399: "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
This link is not tied to a version, so could be inaccurate.
{quote}
Made generic without link.


> Follow-on improvements to HADOOP-12942
> --
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13157) Follow-on improvements to HADOOP-12942

2016-05-16 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.001.patch

> Follow-on improvements to HADOOP-12942
> --
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to HADOOP-12942

2016-05-16 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285289#comment-15285289
 ] 

Mike Yoder commented on HADOOP-13157:
-

Two questions:
# "locatePassword ... looks very similar to the one in JavaKeyStoreProvider ... 
potential for code reuse?"  Sure, and this crossed my mind, too. But where 
would such a function live?
# "This link is not tied to a version..." Is there a canonical way of referring 
to links that we can use?


> Follow-on improvements to HADOOP-12942
> --
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13157) Follow-on improvements to HADOOP-12942

2016-05-16 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13157:
---

 Summary: Follow-on improvements to HADOOP-12942
 Key: HADOOP-13157
 URL: https://issues.apache.org/jira/browse/HADOOP-13157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Mike Yoder
Assignee: Mike Yoder


[~andrew.wang] had some follow-up code review comments from HADOOP-12942. Hence 
this issue.

Ping [~lmccay] as well.  

The comments:

{quote}
Overall this looks okay, the only correctness question I have is about the 
difference in behavior when the pwfile doesn't exist.

The rest are all nits, would be nice to do these cleanups though.

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:

Line 147:
Could this be a static helper?

Line 161: new
The javadoc says it returns null in this situation. This is also a difference 
from the implementation in the AbstractJKSP. Intentional?

Line 175:   private void locateKeystore() throws IOException {
static helper? for the construct*Path methods too?

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:

Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
FYI for the future, our coding style is to put annotations on their own 
separate line.

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:

Line 326:   private char[] locatePassword() throws IOException {
this method looks very similar to the one in JavaKeyStoreProvider, except the 
env var it looks for is different, is there potential for code reuse?

Line 394:   "o In the environment variable " +
Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
syntax.

Line 399:   
"http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
This link is not tied to a version, so could be inaccurate.
{quote}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
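To make the proposal above concrete, a minimal sketch of the extra factory 
method and password-taking constructor; the class names echo the real ones, 
but every signature here is an assumption, not the committed Hadoop API:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch only: mirrors the proposal, not actual Hadoop code.
abstract class CredentialProviderSketch {
  abstract char[] getCredentialEntry(String alias) throws IOException;
}

class JceksProviderSketch extends CredentialProviderSketch {
  private final char[] keystorePassword;

  // New constructor: the password arrives explicitly instead of being
  // resolved internally, where it used to default to "none".
  JceksProviderSketch(URI uri, char[] password) {
    this.keystorePassword = password == null ? new char[0] : password.clone();
  }

  @Override
  char[] getCredentialEntry(String alias) {
    return null;  // keystore lookup elided
  }
}

final class CredentialProviderFactorySketch {
  // Existing-style factory method, kept for compatibility.
  static List<CredentialProviderSketch> getProviders(URI uri) {
    return getProviders(uri, null);
  }

  // Proposed overload: the shell prompts for the password once and hands
  // it down through the factory to every provider implementation.
  static List<CredentialProviderSketch> getProviders(URI uri,
      char[] password) {
    List<CredentialProviderSketch> providers = new ArrayList<>();
    providers.add(new JceksProviderSketch(uri, password));
    return providers;
  }
}
{code}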



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.008.patch

Hopefully fixing checkstyle and whitespace issues in patch 8. I would have 
thought they'd have been detected in patch 6, but... oh well.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.007.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13112) Change CredentialShell to use CommandShell base class

2016-05-09 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277309#comment-15277309
 ] 

Mike Yoder commented on HADOOP-13112:
-

Your analysis and suggestion seem completely reasonable. Either one of us can 
do it; it just depends on who makes it in first.

> Change CredentialShell to use CommandShell base class
> -
>
> Key: HADOOP-13112
> URL: https://issues.apache.org/jira/browse/HADOOP-13112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>Priority: Minor
> Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch
>
>
> org.apache.hadoop.tools.CommandShell is a base class created for use by 
> DtUtilShell.  It was inspired by CredentialShell and much of it was taken 
> verbatim.  It should be a simple change to get CredentialShell to use the 
> base class and simplify its code without changing its functionality.
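A minimal sketch of what such a refactor typically looks like; the hooks below 
are illustrative placeholders, not the actual org.apache.hadoop.tools.CommandShell 
API:

{code:java}
// The base class owns the run loop; subcommands fill in the hooks.
abstract class CommandShellSketch {
  public int run(String[] args) {
    if (!init(args)) {
      printUsage();
      return 1;
    }
    return execute() ? 0 : 1;
  }

  protected abstract boolean init(String[] args);
  protected abstract boolean execute();
  protected abstract void printUsage();
}

class CredentialShellSketch extends CommandShellSketch {
  @Override
  protected boolean init(String[] args) {
    return args.length > 0;  // parse create/delete/list here
  }

  @Override
  protected boolean execute() {
    return true;  // dispatch to the chosen subcommand
  }

  @Override
  protected void printUsage() {
    System.out.println("Usage: hadoop credential <subcommand> ...");
  }
}
{code}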



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13112) Change CredentialShell to use CommandShell base class

2016-05-09 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277101#comment-15277101
 ] 

Mike Yoder commented on HADOOP-13112:
-

Thanks for the heads up. So which one of us gets to go first? :-)

> Change CredentialShell to use CommandShell base class
> -
>
> Key: HADOOP-13112
> URL: https://issues.apache.org/jira/browse/HADOOP-13112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>Priority: Minor
> Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch
>
>
> org.apache.hadoop.tools.CommandShell is a base class created for use by 
> DtUtilShell.  It was inspired by CredentialShell and much of it was taken 
> verbatim.  It should be a simple change to get CredentialShell to use the 
> base class and simplify its code without changing its functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-09 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-09 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277098#comment-15277098
 ] 

Mike Yoder commented on HADOOP-12942:
-

Patch 6: now the warnings are shown only on the create command.
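The gist of that change, as a hedged sketch (the warning text and class shape 
are placeholders, not the patch itself):

{code:java}
// Only the create path emits the default-password warning; list and
// delete stay quiet.
class CreateCommandSketch {
  void execute(boolean usingDefaultPassword) {
    if (usingDefaultPassword) {
      System.err.println("WARNING: you are using the default provider "
          + "password of \"none\"; the credential store is not protected.");
    }
    // ... proceed to create the credential entry ...
  }
}
{code}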

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-09 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.006.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-09 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.005.patch

Patch 005 is identical to 004, but adds documentation in CommandsManual.md

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240196#comment-15240196
 ] 

Mike Yoder commented on HADOOP-12942:
-

So it's not just the absolute number of checkstyle violations; it knows which 
ones were yours. Ow!

Regarding the latest patch... it differs from the previous patch, which did 
pass the unit tests, by only 4 whitespace characters. The 
hadoop.security.ssl.TestReloadingX509TrustManager test passes for me; the 
failure looks unrelated.


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.004.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-12 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-12 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.003.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-12 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-08 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232519#comment-15232519
 ] 

Mike Yoder commented on HADOOP-12942:
-

OK, I'll take your last suggestion.  Will have the patch in a bit.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-06 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228718#comment-15228718
 ] 

Mike Yoder commented on HADOOP-12942:
-

Thanks for having a look.

{quote}
WARN messages (without strict flag): read too much like ERRORs [...] It is 
perfectly legitimate to use the static/hardcoded password.
{quote}
See, here's where we disagree. Using the CredentialProvider or KeyProvider 
indicates that the user cares about security.  Otherwise they wouldn't use the 
feature at all - for example just providing a cleartext password instead of 
getting it through the CredentialProvider.  So if the user cares about 
security, they are going to care that the provider is actually protecting the 
information. 

Or to come at this a different way - I can think of no other secure system 
involving a password where the use of a default hardcoded password is common.

So yeah, given my assumptions above, the WARN messages are pretty severe on 
purpose.  It's difficult for me to fathom a (security-conscious) user who, upon 
learning that they were using a static hardcoded password, would say "meh".

{quote}
a provider *requires* a password
{quote}
Well, it requires a password for an attempt at secure operation.

{quote}
 It would also be good to let the user know that when a custom password is 
being used that it must be available to the runtime consumers of it as well. 
The trick is communicating all of this without spitting out a book.
{quote}
Quite true.  How about the following two new lines:
{noformat}
WARNING: The provider cannot find a password in the expected locations.
Please supply a password using one of the following two mechanisms:
o In the environment variable ...
o In a file referred to by the configuration entry ...
Please note that when this provider is used in the future, the password must
also be available to it in the same manner.
Continuing with default provider password "none"
{noformat}

{quote}
I'm not sure that the hardcoded password needs to be emitted on the command 
line in order to satisfy "obviousness".
{quote}
My thinking was that the user might want to figure out what the default 
password is, and so if the information is public, I might as well be helpful 
right on the command line.

{quote}
 I think we should take this opportunity to revisit the 700 file permissions 
and change it to 600
{quote}
OK, makes me a little nervous to lump that in, but sure.

{quote}
the consolidation of some caught keystore exceptions
{quote}

There was one place I changed 
{noformat}
-} catch (NoSuchAlgorithmException e) {
-  throw new IOException("Can't load keystore " + getPathAsString(), e);
-} catch (CertificateException e) {
-  throw new IOException("Can't load keystore " + getPathAsString(), e);
-}
{noformat}
to this
{noformat}
+} catch (GeneralSecurityException e) {
+  throw new IOException("Can't load keystore " + getPathAsString(), e);
+}
{noformat}
just to collapse the two dups into one.
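
(The collapse works because both NoSuchAlgorithmException and
CertificateException extend java.security.GeneralSecurityException. A
self-contained sketch of the pattern, not the actual provider code:)
{noformat}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

class KeystoreLoading {
  // KeyStore.load() declares NoSuchAlgorithmException and
  // CertificateException; both are GeneralSecurityExceptions,
  // so a single catch block covers them.
  static KeyStore loadKeystore(Path path, char[] password) throws IOException {
    try (InputStream in = Files.newInputStream(path)) {
      KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
      ks.load(in, password);
      return ks;
    } catch (GeneralSecurityException e) {
      throw new IOException("Can't load keystore " + path, e);
    }
  }
}
{noformat}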



> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's 

[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-01 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.002.patch

New patch removes password-entry code and replaces it with warnings/errors 
about the password. A new "-strict" flag is introduced, which will cause the 
commands to fail without a password.
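
For illustration, the intended command-line behavior looks like this (the
commands are real, but the pass/fail behavior is as proposed in this patch,
not a transcript):
{noformat}
# Without -strict: warn that the default password "none" is being used,
# then proceed.
hadoop credential create foo -provider localjceks://file/bar.jceks

# With -strict: if no password has been provisioned, fail instead.
hadoop credential create foo -provider localjceks://file/bar.jceks -strict
{noformat}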

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-01 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212489#comment-15212489
 ] 

Mike Yoder commented on HADOOP-12942:
-

Oh, hey, I didn't see your second comment before posting.  We're getting 
closer...

You say
{quote}
provision a password to a file for your credential providers
{quote}

So that means that the config file would have to change so that the name of the 
file is provided... and the command can't do that itself, right?  This has to 
be an independent step taken by the user, I assume.
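
(Something like the following in the config, if I have the property name
right; worth double-checking against the credential provider docs:)
{noformat}
<property>
  <name>hadoop.security.credstore.java-keystore-provider.password-file</name>
  <value>creds.pw</value>
</property>
{noformat}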

{quote}
use the hadoop credential CLI to provision the actual credential required by MR 
jobs, etc
{quote}
By this you mean creating the file with the password, assuming that the config 
file mentions a file?

{quote}
I wouldn't be opposed to a -strict switch that doesn't allow the default 
password to be used either.
{quote}
Yeah that's a good idea.

{quote}
Prompting for a password that has not been provisioned yet will lead to runtime 
problems.
{quote}
Well, it does give the user the flexibility to set up the password in the file 
or use the environment variable at their leisure at a later date.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that all the implementations 
> gain an additional constructor that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212473#comment-15212473
 ] 

Mike Yoder commented on HADOOP-12942:
-

I guess I didn't explain my intent to prompt the user for a password very 
clearly. My (admittedly simplistic) thinking was: hey, there's no password, so 
we should make sure there's a password.

{quote}
If we are leveraging the password-in-a-file approach then why are we going to 
prompt the user for a password? It should be in the config if that is what is 
going to be used. 
{quote}
So if there is in fact a password in a file referred to in the config, it takes 
priority and the user will never be prompted for a password. That's why the 
providers' needsPassword() has to exist.  We aren't doing anything new with the 
password-in-a-file approach with this patch; it has been there and continues 
to be there.

{quote}
Additionally, how is the MR job going be assured to have the password to access 
the keystore by this?
{quote}
They aren't - but they never were assured of this in the first place. If you're 
reading from a file pointed to by the config, you're assuming that the same 
config will exist in the context in which it's later used (and that the file 
exists, too).  If you're using an environment variable, you're assuming the 
environment variable is going to exist in the future context in which it's 
later used. Neither of these is guaranteed.

{quote}
If you are setting a password without it being first provisioned in the file 
then you are setting them up for a credential store that can't be opened. 
{quote}
There is a higher probability of that with my patch, yes. I believe this to be 
better than setting the user up for unintentional insecure storage of secrets. 
I don't know how to handle this better, and I'm not sure that we can since we 
don't know how the cred store will be accessed in the future.  

{quote}
The current behavior should find the provisioned keystore password from the 
file and create the credential store appropriately with no need to prompt the 
user. This is the intended behavior by design and keeps the config aligned with 
the keystore password.
{quote}
I see what you're getting at, but I guess I have not felt that they are as 
"aligned" as you feel they are.  

So instead of prompting the user for a password, you would instead check for 
either the password-in-a-file or the environment variable, and if they don't 
exist, error out with a message stating that the provider couldn't find the 
password and here's how to provide it?

That would achieve the same sort of goal, but it seemed easier and a 
better interface to just ask the user for the password. I suppose my patch 
doesn't give the user any hints on how to set things up so future stuff could 
read the keystore, though, which isn't great.

{quote}
The current behavior isn't a bug "none" is a real password
{quote}
See if I agreed with this I never would have filed this jira. :-)  I feel that 
it is a bug to give the user the impression that a value is being securely 
stored when in fact it is not.  Hardcoded "none" provides no protection.

{quote}
I also see the backward compatibility issue as a non-starter
{quote}
I view the current _interface_ as having the bug - that interface being the 
non-obvious use of a password "none".  As such, the interface ought to change, 
and as such that means a backwards compatibility issue. But...ergh. If we must 
keep the interface safe for scripts and the like, how about the following 
algorithm (sketched in code below):
* if there are no new command line arguments
** if file or env var found
*** great, continue as before
** else
*** print a big WARNING that they are using a password of "none" and 
instructions on how to set it; continue as before
* else if "-password" or "-askMeTheProviderPassword" is found on the command 
line, obtain the provider password, and
** if the provider _already_ has a password via file or env var, print a 
WARNING that the file or env var exists, and that the user supplied password 
will be ignored
** else pass the given password into the provider.

This gives us backwards compatibility, notification to the user that they're 
doing something insecure, and a way to provide the password in the command 
itself.  Your thoughts?
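
To make the branches concrete, here is a rough sketch of that resolution order. 
The class and helper names are hypothetical, not from any attached patch; it 
assumes hadoop-common's Configuration on the classpath.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

// Sketch of the backward-compatible flow proposed above.
public class StorePasswordResolver {

  public static char[] resolve(Configuration conf, String cliPassword)
      throws IOException {
    char[] fromEnvOrFile = lookupFromEnvOrPasswordFile(conf);

    if (cliPassword == null) {              // no new command line arguments
      if (fromEnvOrFile != null) {
        return fromEnvOrFile;               // great, continue as before
      }
      System.err.println("WARNING: no keystore password configured; the "
          + "default password \"none\" will be used. Set "
          + "HADOOP_CREDSTORE_PASSWORD or a password file to avoid this.");
      return "none".toCharArray();          // continue as before
    }

    if (fromEnvOrFile != null) {            // provider already has a password
      System.err.println("WARNING: a password file or environment variable "
          + "is already set; the password supplied on the command line "
          + "will be ignored.");
      return fromEnvOrFile;
    }
    return cliPassword.toCharArray();       // pass the given password down
  }

  // Hypothetical helper: checks HADOOP_CREDSTORE_PASSWORD; the lookup of the
  // configured password file is elided here.
  private static char[] lookupFromEnvOrPasswordFile(Configuration conf)
      throws IOException {
    String env = System.getenv("HADOOP_CREDSTORE_PASSWORD");
    return env == null ? null : env.toCharArray();
  }
}
{code}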



[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212291#comment-15212291
 ] 

Mike Yoder commented on HADOOP-12942:
-

At last, a patch.  I fixed both the KeyShell and CredentialShell, since they 
have the same problem.  I also noticed that the CredentialShell threw an NPE 
with the "-help" commands, so I fixed that while I was in there.  The new code 
will prompt for the password for the provider if one is needed, and it will 
also accept "-password xxx" on the command line.  Note that there is a 
backwards compatibility issue here: the user has to give a password where none 
was required before. I don't see a way around this, however, since not having a 
real password was the root cause of this bug. I did set it up so that if the 
user just hits 'enter' (no password) when prompted, the default "none" is used 
instead, which is the prior behavior.
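
To illustrate the intended interaction (the prompt wording here is invented for 
illustration, not lifted from the patch):

{noformat}
# hadoop credential create foo -provider localjceks://file/bar.jceks
Enter keystore password:       (empty input falls back to "none", as before)
Enter password: 
Enter password again: 
foo has been successfully created.

# Or non-interactively, with the new command line argument:
# hadoop credential create foo -provider localjceks://file/bar.jceks -password secret
{noformat}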




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.001.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-25 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder reassigned HADOOP-12942:
---

Assignee: Mike Yoder




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207603#comment-15207603
 ] 

Mike Yoder commented on HADOOP-12942:
-

Oh goodness. When you expand it to the general paradigm of "a password in a 
file..." yeah, I do recognize most of those. I was just thinking of the concept 
as applied to the providers in the discussion so far. Let me start without the 
pwdfile command at all. On some level, an "echo asdf > file && chmod 400 file" 
isn't that hard. Or at least we need not implement it in the first pass - it's 
a separate problem from the rest.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207098#comment-15207098
 ] 

Mike Yoder commented on HADOOP-12942:
-

{quote}
I have heard reluctance from folks in the past for having commands prompt for 
passwords, and it would certainly break the scriptability of it. We would have to 
add a switch that enabled the prompting for a password - if we were to add it 
to the credential create subcommand.
{quote}

Agreed. Today as you know the credential create command prompts for a password 
but there is an undocumented "-value" argument that can be used.  I'd stick 
with the same scheme where either a prompt or command line argument were 
possible.

{quote}
This same password file is used in lots of scenarios though: KMS, javakeystore 
providers for key provider API, oozie, signing secret providers, etc. I wonder 
whether a separate command for it would make sense.
{quote}
Conceptually, yes, but aren't config values different?  I'm aware of two:
* alias/AbstractJavaKeyStoreProvider: 
hadoop.security.credstore.java-keystore-provider.password-file
* key/JavaKeyStoreProvider: 
hadoop.security.keystore.java-keystore-provider.password-file

{quote}
Keep in mind that we would need to do a number of things for this.
1. prompt for the password
2. persist it
3. set appropriate permissions on the file
4. somehow determine the filename to use (probably based on the password file 
name configuration) which would need to be provided by the user as well
5. allow for use of the same password file for multiple keystores or scenarios
6. allow for random-ish generated password without prompt
{quote}
I think it's even more complicated. :-) The user could want to use the 
environment variable when the credential is consumed, and so would want to 
provide it to the command but would not want to deal with anything 
file-related. 

Also it's conceivable that the user could have constructed the file themselves, 
although this doesn't seem particularly user friendly. 

So we have scenarios for hadoop credential create|list|etc that look like
# Here is the credstore password from a prompt
# Here is the credstore password on the command line
# The credstore password is already in a file in the "expected" location (set 
up either by hand or via your new pwdfile command).

Making a command to manage the password file makes sense. I think that we 
shouldn't ask the user to give it the property name though: you could modify 
KeyShell and CredentialShell to have a new subcommand of 'pwdfile', thusly:
* hadoop credential pwdfile \[args\]
* hadoop key pwdfile \[args\]

And they could share an implementation. This way the user does not have to 
remember "hadoop.security.credstore.java-keystore-provider.password-file" or 
the like. This also means that the provider selected needs a new interface to 
create said file, if applicable.

I like the auto-generate-password option for the file. I think the default 
would be to still prompt for the password, though.  So yeah, adding a pwdfile 
command seems like a good idea.

The thing about the existing design that I'm going back and forth on is that 
the CredentialShell is high-level, and selects a provider and then simply 
passes information to the provider. The password is implied and not passed 
directly, so the CredentialShell has no notion of whether or not the underlying 
provider actually has a password or not.

So, for example, it would be daft of CredentialShell to accept a password on 
the command line if one is provided in a file, and it would also be even more 
daft if no password was specified on the command line and the password wasn't in 
the password file either. Furthermore it would be silly to accept a password 
when the underlying provider does not need a password at all for proper 
operation (example: the UserProvider). There has to be some amount of 
communication between the CredentialShell and the provider in order to get the 
"is a password required" and "where precisely is the password" cases correct.  

To make this even more interesting, in the various providers with a key store, 
the keyStore is either created or opened in the constructor, requiring that all 
the information be presented up front - without scope for the back and forth of 
"do you need a password and where" from the provider.

So... one way to deal with this is to move the keyStore.load() call out of the 
constructor and defer it until the first get/set/delete credential entry call. 
Then expose interfaces along the lines of "does this provider already have the 
password somehow?" and "set the password directly". We'd have to add default 
behavior in CredentialProvider (and KeyProvider) and then implement in the ones 
that matter.
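
A minimal sketch of that deferral, assuming a JCEKS-backed provider; the class 
name, the openStoreInputStream() helper, and the exact method shapes are 
illustrative only, not the committed API:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

import org.apache.hadoop.security.alias.CredentialProvider;

// Illustrative only: keyStore.load() moves out of the constructor so the
// password can be supplied after construction.
public abstract class LazyKeyStoreProvider extends CredentialProvider {
  private KeyStore keyStore;   // deliberately not loaded in the constructor
  private char[] password;

  // "Does this provider already have the password somehow?"
  public boolean needsPassword() {
    return password == null;
  }

  // "Set the password directly."
  public void setPassword(char[] password) {
    this.password = password;
  }

  // The first get/set/delete credential entry call triggers the real load.
  protected synchronized KeyStore getKeyStore() throws IOException {
    if (keyStore == null) {
      try {
        keyStore = KeyStore.getInstance("jceks");
        try (InputStream in = openStoreInputStream()) {
          keyStore.load(in, password);
        }
      } catch (GeneralSecurityException e) {
        throw new IOException("Can't load keystore: " + e, e);
      }
    }
    return keyStore;
  }

  // Hypothetical helper supplied by each concrete implementation.
  protected abstract InputStream openStoreInputStream() throws IOException;
}
{code}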

The downside to this approach is that we move around a few error conditions. 
However everything can throw an IOException, so maybe this isn't a big deal. 
Seem reasonable?

[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-21 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205487#comment-15205487
 ] 

Mike Yoder commented on HADOOP-12942:
-

{quote}
We could:
{quote}
This is becoming bigger than the intended scope of this jira. :-)

{quote}
Add a command that provisions an encrypted master secret to a well-known 
location in HDFS
{quote}
We'd have to carefully think through which users would be able to perform this 
action, whether something like this could be automated instead, and where that 
"well-known location" might be - could it be configured? (I think we'd have to 
allow that.) And what about recursion issues if that location were inside an 
Encryption Zone? 

{quote}
Obviously, this approach would require KMS to be in use and a new manual step 
to provision a master secret.
{quote}
I think what you propose is workable, but these new requirements do concern me. 
We'd also have to think through which users could perform each action (both 
this one and making the key in the KMS). There are a lot of moving parts. Seems 
like a case for a credential server (or credential server functionality in the 
KMS).

Back to the issue in this jira - regardless of the difficulty of handling the 
credential store password throughout the entire workflow, I still believe that 
the credential shell should ask for that password. It's got to be better than 
silently using "none" everywhere. And given that the key store provider has the 
ability to get the password from a file, it seems like it would be possible to 
put the password into a file for basically all use cases.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-19 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200639#comment-15200639
 ] 

Mike Yoder commented on HADOOP-12942:
-

Need some advice with this one [~lmccay]. I'm going to attempt a patch for this.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-19 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202320#comment-15202320
 ] 

Mike Yoder commented on HADOOP-12942:
-

{quote}
Otherwise, we just keep moving the problem.
{quote}
Oh, I agree. It's turtles all the way down. And you're right - as part of this 
work I'm looking at where this is used (in the use case we saw, at least) and 
how we can protect the password. I'm not sure we will be able to solve that 
problem, though.

{quote}
more or less obfuscated
{quote}
I don't know if encrypting with the same hardcoded password even meets the 
level of "obfuscation". :-) Of course, you could probably level the same charge 
against using a password that's easy to find.

{quote}
I would love it if you have an idea for something else.
{quote}
Yeah, me too.

I think that one of the problems I want to call out here is that the command, 
as is, gives the user a false sense of security. Since there's no way to 
obviously specify the credential provider password, it's easy for the user to 
believe that whatever is going on behind the scenes is secure, because hey we 
must know what we're doing. If our position is that the security of that jceks 
file is no better than that of a plaintext file then I think we've done the 
user a disservice.

I mean, let's imagine that the command printed a warning saying "hey, that 
provider you just used encrypted the file with a hardcoded default password". 
Of course that will prompt the user to not be happy and demand a patch or 
something. But at least we'd be up front about the issue. :-)

Better, I think, to do the right thing from the perspective of this command, 
and then work on making the later consumers of the provider do "something". But 
you're right, we have to think hard about end to end security with the 
password. I don't know if we will have a really good answer, though.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-19 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-12942:
---

 Summary: hadoop credential commands non-obviously use password of 
"none"
 Key: HADOOP-12942
 URL: https://issues.apache.org/jira/browse/HADOOP-12942
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Mike Yoder


The "hadoop credential create" command, when using a jceks provider, defaults 
to using the value of "none" for the password that protects the jceks file.  
This is not obvious in the command or in documentation - to users or to other 
hadoop developers - and leads to jceks files that essentially are not protected.

In this example, I'm adding a credential entry with name of "foo" and a value 
specified by the password entered:

{noformat}
# hadoop credential create foo -provider localjceks://file/bar.jceks
Enter password: 
Enter password again: 
foo has been successfully created.
org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
{noformat}

However, the password that protects the file bar.jceks is "none", and there is 
no obvious way to change that. The practical way of supplying the password at 
this time is something akin to

{noformat}
HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
{noformat}

That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
command. 

This is more than a documentation issue. I believe that the password ought to 
be _required_.  We have three implementations at this point, the two 
JavaKeystore ones and the UserCredential. The latter is "transient" which does 
not make sense to use in this context. The former need some sort of password, 
and it's relatively easy to envision that any non-transient implementation 
would need a mechanism by which to protect the store that it's creating.  

The implementation gets interesting because the password in the 
AbstractJavaKeyStoreProvider is determined in the constructor, and changing it 
after the fact would get messy. So this probably means that the 
CredentialProviderFactory should have another factory method like the first 
that additionally takes the password, and that an additional constructor should 
exist in all the implementations that takes the password. 

Then we just ask for the password in getCredentialProvider() and that gets 
passed down via the factory to the implementation. The code does have logic 
in the factory to try multiple providers, but I don't really see how multiple 
providers would rationally be used in the command shell context.

This issue was brought to light when a user stored credentials for a Sqoop 
action in Oozie; upon trying to figure out where the password was coming from 
we discovered it to be the default value of "none".
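
For concreteness, a minimal sketch of the factory-method shape proposed above. 
The overload, the class name, and the createProvider helper are hypothetical 
(they stand in for the extra constructor each non-transient implementation 
would need); only the configuration key is the existing one.

{code}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

public class PasswordAwareProviderFactory {
  public static final String CREDENTIAL_PROVIDER_PATH =
      "hadoop.security.credential.provider.path";

  // Like the existing CredentialProviderFactory.getProviders(conf), but
  // threads a caller-supplied password down to each implementation.
  public static List<CredentialProvider> getProviders(Configuration conf,
      char[] password) throws IOException {
    List<CredentialProvider> result = new ArrayList<CredentialProvider>();
    for (String path : conf.getStringCollection(CREDENTIAL_PROVIDER_PATH)) {
      try {
        // createProvider stands in for the additional constructor each
        // non-transient implementation would need; it does not exist today.
        result.add(createProvider(new URI(path), conf, password));
      } catch (URISyntaxException e) {
        throw new IOException("Bad credential provider path: " + path, e);
      }
    }
    return result;
  }

  private static CredentialProvider createProvider(URI uri,
      Configuration conf, char[] password) throws IOException {
    throw new UnsupportedOperationException("illustrative sketch only");
  }
}
{code}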




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176640#comment-15176640
 ] 

Mike Yoder commented on HADOOP-12862:
-

Patch looks reasonable based on a quick skim.  One nit is that

{quote}
+File path to the SSL truststore that contains the SSL certificate of the
+LDAP server.
{quote}
Should be along the lines of "File path to the SSL truststore that contains 
the root certificate used to sign the LDAP server's certificate. Specify this 
if the LDAP server's certificate is not signed by a well-known certificate 
authority."


> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, Hadoop name node, as an LDAP client, talks to an LDAP server 
> to resolve the group mapping of a user. In the case of LDAP over SSL, a 
> typical scenario is to establish one-way authentication (the client verifies 
> the server's certificate is real) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to the 
> truststore the client uses to verify the server, the server also verifies that 
> the client's certificate is real, and the client stores its own certificate in 
> its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the system 
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176476#comment-15176476
 ] 

Mike Yoder commented on HADOOP-12862:
-

Traditionally we have focused only on the server side of the TLS connection 
with regards to the POODLE attack. Strictly speaking, the AD server is not our 
code, so primary responsibility is on the AD server side. But you do raise a 
good point about SSLv3 and weak ciphers - it would be excellent if we had one 
place in hadoop to specify which TLS versions were permissible and which 
ciphers to use (or not use...) and then have all TLS connections default to 
that.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176388#comment-15176388
 ] 

Mike Yoder commented on HADOOP-12862:
-

Makes perfect sense here. Thanks for the investigation; I actually thought the 
"keystore" was misnamed, but you're right that it really is being used as a 
keystore. I haven't heard of an AD server requiring client-side certs before... 
so you're right, we need new config parameters 
hadoop.security.group.mapping.ldap.ssl.truststore and friends.
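
For illustration, the wiring could be as small as the sketch below. The 
truststore key is the name proposed above; the password key is an assumed 
companion name, and neither exists in Hadoop yet. The javax.net.ssl.* system 
properties are the standard JSSE ones the issue description mentions.

{code}
import org.apache.hadoop.conf.Configuration;

public class LdapTruststoreWiring {
  public static void apply(Configuration conf) {
    String trustStore =
        conf.get("hadoop.security.group.mapping.ldap.ssl.truststore");
    // Assumed companion key for the password, named by analogy.
    String trustStorePassword = conf.get(
        "hadoop.security.group.mapping.ldap.ssl.truststore.password", "");
    if (trustStore != null) {
      // JSSE picks these up when LdapGroupsMapping opens its LDAPS connection.
      System.setProperty("javax.net.ssl.trustStore", trustStore);
      System.setProperty("javax.net.ssl.trustStorePassword",
          trustStorePassword);
    }
  }
}
{code}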





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12577) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014713#comment-15014713
 ] 

Mike Yoder commented on HADOOP-12577:
-

You may well be correct that there is no vulnerability in hadoop - but in some 
sense that almost does not matter. There are many corporate security 
departments that are going to raise red flags about the presence of this 
library in the classpath. Explaining to them why you think you're not 
vulnerable may or may not work, and it's hard to prove a negative. In my 
experience it's easiest to just do the upgrade.


> Bump up commons-collections version to 3.2.2 to address a security flaw
> ---
>
> Key: HADOOP-12577
> URL: https://issues.apache.org/jira/browse/HADOOP-12577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HADOOP-12577.001.patch
>
>
> Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
> vulnerability. Many other open source projects use 
> commons-collections and are also affected.
> Please see 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
>  for the discovery of the vulnerability.
> https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion 
> thread of the fix.
> https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
>  The ASF response to the security vulnerability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-08-14 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12324:

Attachment: HADOOP-12324.000.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-08-14 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12324:

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-08-14 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-12324:
---

 Summary: Better exception reporting in SaslPlainServer
 Key: HADOOP-12324
 URL: https://issues.apache.org/jira/browse/HADOOP-12324
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.8.0
Reporter: Mike Yoder
Assignee: Mike Yoder
Priority: Minor


This is a follow up from HADOOP-12318.  The review comment from 
[~ste...@apache.org]:

{quote}
-1. It's critical to use Exception.toString() and not .getMessage(), as some 
exceptions (NPE) don't have messages.
{quote}
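
For illustration of the difference (not from the patch): a bare NPE has a null 
message, so toString() is the safe choice.

{code}
public class ToStringVsGetMessage {
  public static void main(String[] args) {
    Exception e = new NullPointerException();
    // getMessage() is null for a bare NPE, so the text degenerates:
    System.out.println("PLAIN auth failed: " + e.getMessage());
    // -> PLAIN auth failed: null

    // toString() always names the exception class:
    System.out.println("PLAIN auth failed: " + e);
    // -> PLAIN auth failed: java.lang.NullPointerException
  }
}
{code}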

This is the promised follow-up Jira.

CC: [~atm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-12 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-12318:
---

 Summary: Expose underlying LDAP exceptions in SaslPlainServer
 Key: HADOOP-12318
 URL: https://issues.apache.org/jira/browse/HADOOP-12318
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Mike Yoder
Priority: Minor


In the code of class 
[SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
 the underlying exception is not included in the {{SaslException}}, which leads 
to the error message below in HiveServer2:
{noformat}
2015-07-22 11:50:28,433 DEBUG org.apache.thrift.transport.TSaslServerTransport: 
failed to open server transport
org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
validating LDAP user
at 
org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at 
org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at 
org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

This makes it very hard for COEs to understand what the real error is.

Can we change that line to:
{code}
} catch (Exception e) {
  throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
}
{code}
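
A hedged sketch (illustrative only, not the attached patch) of what passing 
the cause buys us: the underlying LDAP error surfaces as a "Caused by:" frame 
instead of being swallowed.
{code}
import javax.naming.NamingException;
import javax.security.sasl.SaslException;

public class CauseChainDemo {
  public static void main(String[] args) {
    try {
      // Stand-in for the real LDAP bind failure.
      throw new NamingException("Error validating LDAP user");
    } catch (Exception e) {
      // With the cause attached, the stack trace includes a
      // "Caused by: javax.naming.NamingException" entry.
      SaslException wrapped = new SaslException("PLAIN auth failed: " + e, e);
      wrapped.printStackTrace();
    }
  }
}
{code}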




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-12 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12318:

Attachment: HADOOP-12318.000.patch

 Expose underlying LDAP exceptions in SaslPlainServer
 

 Key: HADOOP-12318
 URL: https://issues.apache.org/jira/browse/HADOOP-12318
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Mike Yoder
Priority: Minor
 Attachments: HADOOP-12318.000.patch


 In the code of class 
 [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
  the underlying exception is not included in the {{SaslException}}, which 
 leads to the error message below in HiveServer2:
 {noformat}
 2015-07-22 11:50:28,433 DEBUG 
 org.apache.thrift.transport.TSaslServerTransport: failed to open server 
 transport
 org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
 validating LDAP user
   at 
 org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
   at 
 org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
   at 
 org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
   at 
 org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
   at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 This makes it very hard for COEs to understand what the real error is.
 Can we change that line to:
 {code}
 } catch (Exception e) {
   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-12 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12318:

Assignee: Mike Yoder
  Status: Patch Available  (was: Open)

 Expose underlying LDAP exceptions in SaslPlainServer
 

 Key: HADOOP-12318
 URL: https://issues.apache.org/jira/browse/HADOOP-12318
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Mike Yoder
Assignee: Mike Yoder
Priority: Minor
 Attachments: HADOOP-12318.000.patch


 In the code of class 
 [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
  the underlying exception is not included in the {{SaslException}}, which 
 leads to the error message below in HiveServer2:
 {noformat}
 2015-07-22 11:50:28,433 DEBUG 
 org.apache.thrift.transport.TSaslServerTransport: failed to open server 
 transport
 org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
 validating LDAP user
   at 
 org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
   at 
 org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
   at 
 org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
   at 
 org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
   at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 This makes it very hard for COEs to understand what the real error is.
 Can we change that line to:
 {code}
 } catch (Exception e) {
   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-14 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14543864#comment-14543864
 ] 

Mike Yoder commented on HADOOP-11934:
-

No more infinite loop on the namenode with the ldap bind user password set. 
Looking good. Although I would not consider what I did in any way an exhaustive 
test - I started the namenode and saw lots of messages saying that groups were 
all weird, as I expected they would be. Didn't see any "ldap is screwed 
up" exceptions. But please don't rely on me alone for testing. :-)

Had a look at the code. Just two comments:
* createPermissions() seems to violate the principle of least surprise when it 
silently converts input longer than three chars to "700". I would've expected 
it to throw an error of some form. (And why "700" instead of "600"?) And beyond 
that, all sorts of invalid input is silently ignored.
* There's a LOT of code to convert three characters (9 bits of information!) 
into a set of PosixFilePermissions. Can't you convert the three chars to one 
int and do some bit manipulation? (See the sketch below.)
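
A sketch of that bit-manipulation idea (hypothetical, not the patch): parse 
the three octal digits into an int and test nine bits.
{code}
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

public class OctalPerms {
  // "700" -> {OWNER_READ, OWNER_WRITE, OWNER_EXECUTE}; bad input is
  // rejected rather than silently coerced.
  static Set<PosixFilePermission> fromOctal(String mode) {
    if (mode == null || !mode.matches("[0-7]{3}")) {
      throw new IllegalArgumentException("expected three octal digits: " + mode);
    }
    int bits = Integer.parseInt(mode, 8);
    // values() is declared MSB-first: OWNER_READ .. OTHERS_EXECUTE.
    PosixFilePermission[] order = PosixFilePermission.values();
    Set<PosixFilePermission> perms = EnumSet.noneOf(PosixFilePermission.class);
    for (int i = 0; i < 9; i++) {
      if ((bits & (1 << (8 - i))) != 0) {
        perms.add(order[i]);
      }
    }
    return perms;
  }
}
{code}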

Thanks for addressing this bug so quickly!


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-13 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14542707#comment-14542707
 ] 

Mike Yoder commented on HADOOP-11934:
-

Sorry for the delay on my side. Had some unrelated cluster troubles and got 
distracted. Will get back to this soon.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-11 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538670#comment-14538670
 ] 

Mike Yoder commented on HADOOP-11934:
-

Thanks - I'll give this a try. Stay tuned...

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-07 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14532163#comment-14532163
 ] 

Mike Yoder commented on HADOOP-11934:
-

Yeah, the same sort of provider path works fine for the HS2 keystore password 
and the hadoop_ssl_server_keystore_(key)password.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay

 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-07 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14532932#comment-14532932
 ] 

Mike Yoder commented on HADOOP-11934:
-

Sorry, it's not in the log.  The log shows

{noformat}
STARTUP_MSG:   java = 1.7.0_67
/
2015-05-06 17:00:26,732 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
registered UNIX signal handlers for [TERM, HUP, INT]
2015-05-06 17:00:26,742 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
createNameNode []
2015-05-06 17:00:27,157 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2015-05-06 17:00:27,343 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2015-05-06 17:00:27,343 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
NameNode metrics system started
2015-05-06 17:00:27,348 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
fs.defaultFS is hdfs://mey-may-4.vpc.cloudera.com:8020
2015-05-06 17:00:27,348 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Clients are to use mey-may-4.vpc.cloudera.com:8020 to access this 
namenode/service.
2015-05-06 17:00:32,144 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.lang.StackOverflowError
at java.lang.String.indexOf(String.java:1698)
at java.net.URLStreamHandler.parseURL(URLStreamHandler.java:272)
at sun.net.www.protocol.file.Handler.parseURL(Handler.java:67)
at java.net.URL.init(URL.java:614)
at java.net.URL.init(URL.java:482)
at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:1057)
at sun.misc.URLClassPath$FileLoader.findResource(URLClassPath.java:1047)
at sun.misc.URLClassPath.findResource(URLClassPath.java:176)
at java.net.URLClassLoader$2.run(URLClassLoader.java:551)
at java.net.URLClassLoader$2.run(URLClassLoader.java:549)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findResource(URLClassLoader.java:548)
at java.lang.ClassLoader.getResource(ClassLoader.java:1147)
at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:227)
at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
at 
javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
at 
javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2425)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2402)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2319)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1146)
at 
org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:605)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
{noformat}

[... a lot of repetition ...]

{noformat}
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
at 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-07 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14532989#comment-14532989
 ] 

Mike Yoder commented on HADOOP-11934:
-

Yeah, if I remove the credential provider from the LdapGroupsMapping everything 
is fine.

A local keystore provider that's basically identical to the 
JavaKeyStoreProvider but only looks on the local file system fits my use case 
just fine.

Might there be a way, inside JavaKeyStoreProvider, to look at the URI before 
calling path.getFileSystem() and do something different if it's not in HDFS?
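
A minimal sketch of that idea (hypothetical, not the committed fix): load a 
local keystore with plain java.io so that no Hadoop FileSystem - and therefore 
none of the recursive UGI/group-mapping initialization above - is touched 
while configuration is still loading.
{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class LocalKeyStoreSketch {
  // Reads a JCEKS keystore straight off the local disk, bypassing
  // Path.getFileSystem() entirely.
  static KeyStore loadLocalKeyStore(File file, char[] password)
      throws Exception {
    KeyStore ks = KeyStore.getInstance("JCEKS");
    try (InputStream in = new FileInputStream(file)) {
      ks.load(in, password);
    }
    return ks;
  }
}
{code}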


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay

 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-07 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14533038#comment-14533038
 ] 

Mike Yoder commented on HADOOP-11934:
-

When above I said "do something different" I wasn't implying that we should 
ignore permission checks.  I see what you're saying about this very use case 
being problematic. What I was attempting to say above was that you could put 
some of the local keystore logic into the existing JavaKeyStoreProvider if 
the file is local.  But either way works.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay

 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-06 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531798#comment-14531798
 ] 

Mike Yoder commented on HADOOP-11934:
-

[~lmccay] [~brandonli] - mentioning you guys since your names are on 
HADOOP-10905. Thanks for having a peek at this issue.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder

 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 

[jira] [Created] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-06 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-11934:
---

 Summary: Use of JavaKeyStoreProvider in LdapGroupsMapping causes 
infinite loop
 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder


I was attempting to use the LdapGroupsMapping code and the JavaKeyStoreProvider 
at the same time, and hit a really interesting, yet fatal, issue.  The code 
goes into what ought to have been an infinite loop, were it not for it 
overflowing the stack and Java ending the loop.  Here is a snippet of the 
stack; my annotations are at the bottom.

{noformat}
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
at 
org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
at 
org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
at 
org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.security.Groups.init(Groups.java:70)
at org.apache.hadoop.security.Groups.init(Groups.java:66)
at 
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
at 
org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
at 
org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
at 
org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.security.Groups.init(Groups.java:70)
at org.apache.hadoop.security.Groups.init(Groups.java:66)
at 
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
at 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-06 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531974#comment-14531974
 ] 

Mike Yoder commented on HADOOP-11934:
-

It looks like:

{noformat}
jceks://file/full/path/to/creds.jceks
{noformat}
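
For reference, a hedged sketch (property name from CredentialProviderFactory; 
the alias here is made up) of how such a provider path is consumed in code:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public class ProviderPathDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same resolution path LdapGroupsMapping follows via getPassword().
    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
        "jceks://file/full/path/to/creds.jceks");
    char[] password = conf.getPassword("my.password.alias");
    System.out.println(password == null ? "alias not found"
        : "resolved " + password.length + " chars");
  }
}
{code}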



 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder

 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 

[jira] [Commented] (HADOOP-11343) Overflow is not properly handled in caclulating final iv for AES CTR

2014-12-03 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1427#comment-1427
 ] 

Mike Yoder commented on HADOOP-11343:
-

{quote}
Just described as above, combination of the current caculateIV and other Cipher 
counter increment will cause problem if these two are not consistent.
{quote}
Yeah, you're right.  This is a good catch.  Let me see if I can state this 
problem differently.

If the underlying java (or openssl) ctr calculation is different than 
calculateIV, there is a problem IF
- assume an initial IV of 00 00 00 00 00 00 00 00 ff ff ff ff ff ff ff ff 
- the file is 32 bytes
- File A is written, all 32 bytes at once (one call to calculateIV with counter 
of 0)
- File B is written, the first 16 bytes and then the second 16 bytes (two calls 
to calculateIV with counter of 0 and 1)
- Then the last 16 bytes of files A and B will be different

This actually isn't a problem *if* the files are read back _exactly_ as they 
are written.  But if you try to read file A in two steps, or read file B in one 
step, the second block will look corrupted.  It seems possible to construct a 
test case for this.
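
A small pure-Java demo (hypothetical, not Hadoop code) of the inconsistency: 
standard AES-CTR increments the whole 128-bit counter block, so the carry out 
of the low 64 bits propagates, while a calculateIV that adds the counter into 
the low 64 bits only drops that carry.
{code}
import java.math.BigInteger;

public class CtrCarryDemo {
  public static void main(String[] args) {
    // Initial IV from the example above: 8 zero bytes then 8 0xff bytes.
    BigInteger initIV = new BigInteger("0000000000000000ffffffffffffffff", 16);

    // Standard 128-bit increment: the carry reaches the high 64 bits.
    BigInteger standard =
        initIV.add(BigInteger.ONE).mod(BigInteger.ONE.shiftLeft(128));

    // Low-64-bit-only addition, as in calculateIV: the carry is dropped.
    BigInteger high = initIV.shiftRight(64).shiftLeft(64);
    BigInteger low =
        initIV.add(BigInteger.ONE).mod(BigInteger.ONE.shiftLeft(64));
    BigInteger lossy = high.or(low);

    System.out.printf("standard: %032x%n", standard); // ...0001 0000...0000
    System.out.printf("lossy:    %032x%n", lossy);    // ...0000 0000...0000
  }
}
{code}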

The code in the patch looks reasonable, although I haven't sat down with paper 
and pencil to work through the math.  The test cases are convincing.  Have you 
tested with both the openssl and java crypto implementations?

I do believe that you still need to provide an upgrade path.  This means 
defining a new crypto SUITE and making it the default.  Existing files will use 
the old SUITE; the upgrade path is to simply copy all the files in an EZ; when 
writing the new files the new SUITE will be used and everything will work out. 

 Overflow is not properly handled in caclulating final iv for AES CTR
 

 Key: HADOOP-11343
 URL: https://issues.apache.org/jira/browse/HADOOP-11343
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: trunk-win, 2.7.0
Reporter: Jerry Chen
Assignee: Jerry Chen
Priority: Blocker
 Attachments: HADOOP-11343.patch


 In the AesCtrCryptoCodec calculateIV, as the init IV is a random generated 16 
 bytes, 
 final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
   cc.generateSecureRandom(iv);
 Then the following calculation of iv and counter on 8 bytes (64bit) space 
 would easily cause overflow and this overflow gets lost.  The result would be 
 the 128 bit data block was encrypted with a wrong counter and cannot be 
 decrypted by standard aes-ctr.
 /**
  * The IV is produced by adding the initial IV to the counter. IV length
  * should be the same as {@link #AES_BLOCK_SIZE}
  */
 @Override
 public void calculateIV(byte[] initIV, long counter, byte[] IV) {
   Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
   Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);

   System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
   long l = 0;
   for (int i = 0; i < 8; i++) {
     l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
   }
   l += counter;
   IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
   IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
   IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
   IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
   IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
   IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
   IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
   IV[CTR_OFFSET + 7] = (byte) (l);
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11343) Overflow is not properly handled in caclulating final iv for AES CTR

2014-12-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14231868#comment-14231868
 ] 

Mike Yoder commented on HADOOP-11343:
-

What was the proposed solution?  I didn't see a difference in the code above 
vs. what's in git right now.  (I could've missed it...)

The overflow happens at the l += counter line, yes?  The statement above is 
"this overflow gets lost."  Well, it doesn't actually get lost; since this is 
integer arithmetic, MAX_LONG wraps around to MIN_LONG and keeps going up.  So 
the effects of the counter are constrained to bytes 8-15 of the IV, and bytes 
0-7 are fixed at the randomly generated value.  We are not concerned with 
re-using IVs by wrapping a long - a long holds 2^64 values, and the counter 
advances once per 16-byte (2^4) block, so in order to wrap we'd have to have a 
file that's 2^68 bytes long.  Not going to happen.
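
To make the wrap concrete, a tiny illustration (hypothetical values, nothing 
Hadoop-specific):

{code}
public class WrapDemo {
  public static void main(String[] args) {
    // The low 8 IV bytes interpreted as a long, at their maximum value:
    long low = 0xffffffffffffffffL;  // == -1 in two's complement
    for (long ctr = 0; ctr < 3; ctr++) {
      // Successive counters still yield distinct values across the wrap;
      // nothing is "lost": prints ffffffffffffffff, 0, 1
      System.out.println(Long.toHexString(low + ctr));
    }
  }
}
{code}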

There actually is no standard "aes-ctr".  From 
http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf,

{quote}
The Counter (CTR) mode is a confidentiality mode that features the application 
of the forward
cipher to a set of input blocks, called counters, to produce a sequence of 
output blocks that are
exclusive-ORed with the plaintext to produce the ciphertext, and vice versa. 
The sequence of
counters must have the property that each block in the sequence is different 
from every other
block. This condition is not restricted to a single message: across all of the 
messages that are
encrypted under the given key, all of the counters must be distinct. 
{quote}

And from Appendix B of that document:

{quote}
B.2 Choosing Initial Counter Blocks

The initial counter blocks, T1, for each message that is encrypted under the 
given key must be
chosen in a manner that ensures the uniqueness of all the counter blocks across 
all the messages.
Two examples of approaches to choosing the initial counter blocks are given in 
this section.

In the first approach, for a given key, all plaintext messages are encrypted 
sequentially. Within
the messages, the same fixed set of m bits of the counter block is incremented 
by the standard
incrementing function. The initial counter block for the initial plaintext 
message may be any
string of b bits. The initial counter block for any subsequent message can be 
obtained by
applying the standard incrementing function to the fixed set of m bits of the 
final counter block
of the previous message. In effect, all of the plaintext messages that are ever 
encrypted under the
given key are concatenated into a single message; consequently, the total 
number of plaintext
blocks must not exceed 2^m.  Procedures should be established to ensure the 
maintenance of the
state of the final counter block of the latest encrypted message, and to ensure 
the proper
sequencing of the messages.

A second approach to satisfying the uniqueness property across messages is to 
assign to each
message a unique string of b/2 bits (rounding up, if b is odd), in other words, 
a message nonce,
and to incorporate the message nonce into every counter block for the message. 
The leading b/2
bits (rounding up, if b is odd) of each counter block would be the message 
nonce, and the
standard incrementing function would be applied to the remaining m bits to 
provide an index to
the counter blocks for the message. Thus, if N is the message nonce for a given 
message, then
the jth counter block is given by Tj = N | [j], for j = 1…n. The number of 
blocks, n, in any
message must satisfy n ≤ 2^m.  A procedure should be established to ensure the 
uniqueness of the
message nonces.

This recommendation allows other methods and approaches for achieving the 
uniqueness
property. Validation that an implementation of the CTR mode conforms to this 
recommendation
will typically include an examination of the procedures for assuring the 
uniqueness of counter
blocks within messages and across all messages that are encrypted under a given 
key. 
{quote}

There are two recommendations, and what's implemented follows the second 
recommendation.  I believe that most CTR implementations (including Java) do 
something like this.  (As an aside, I've seen an openssl implementation that 
follows the first recommendation.)
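
For concreteness, here's roughly what the second recommendation looks like in 
code - a sketch with illustrative names, not the Hadoop implementation:

{code}
import java.nio.ByteBuffer;
import java.security.SecureRandom;

public class NonceCounterBlocks {
  // Counter block = 64-bit message nonce || 64-bit block index,
  // i.e. Tj = N | [j] with b = 128 and m = 64.
  static byte[] counterBlock(long nonce, long blockIndex) {
    return ByteBuffer.allocate(16)
        .putLong(nonce)       // leading b/2 bits: the message nonce
        .putLong(blockIndex)  // remaining m bits: the block index
        .array();
  }

  public static void main(String[] args) {
    long nonce = new SecureRandom().nextLong();  // unique per message
    byte[] t1 = counterBlock(nonce, 1);  // T1 = N | [1]
    byte[] t2 = counterBlock(nonce, 2);  // T2 = N | [2]
    System.out.println(t1.length + " " + t2.length);  // 16 16
  }
}
{code}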

Another note is that if this function is altered in any way, it will break the 
decryption of data that has already been encrypted.  I don't know if that's 
much at all at this point, but one would still have to tread with care.

I recommend that we leave this function alone... unless I misunderstood 
something about the problem.


 Overflow is not properly handled in calculating final iv for AES CTR
 

 Key: HADOOP-11343
 URL: https://issues.apache.org/jira/browse/HADOOP-11343
 Project: Hadoop Common
  Issue Type: Bug
  

[jira] [Commented] (HADOOP-11343) Overflow is not properly handled in calculating final iv for AES CTR

2014-12-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232434#comment-14232434
 ] 

Mike Yoder commented on HADOOP-11343:
-

Let's back up a bit.  I don't understand what the problem is that you are 
trying to address.  You keep mentioning overflow.  Yes, bytes 8-15, when 
treated as a long and added to the counter, can wrap around instead of carrying 
the results into bytes 0-7.  But what's the problem with that? The values of 
the IV will still be different for each block that's encrypted. No information 
is lost, and we still have to have a file larger than 2^68 bytes before we wrap 
the counter.

You mention the openssl source code for AES_ctr128_encrypt() - that was exactly 
the code I was referring to when I said "(As an aside, I've seen an openssl 
implementation that follows the first recommendation.)"  That's just one way to 
handle the IV.

So I just went digging around for Java code that does AES-CTR, and found
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/com/sun/crypto/provider/CounterMode.java#CounterMode

Look for increment().  It does the same thing as the openssl implementation 
(which is actually a surprise to me, I thought it worked like calculateIV).  I 
might be mistaken in this, but I _thought_ that calculateIV() was written the 
way it was written completely on purpose.  We should seek comment from 
[~tucu00] if he's still paying attention to emails that come from apache jira.
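
In the meantime, the behavioral difference is easy to see with the stock JCE 
cipher.  This is a hedged demo (zero key and plaintext just for illustration); 
wrappedIv is the value the calculateIV-style arithmetic would produce for 
counter 1 given a low half of all 0xff:

{code}
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CtrCarryDemo {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16];            // all-zero key, demo only
    byte[] iv = new byte[16];
    Arrays.fill(iv, 8, 16, (byte) 0xff);  // low half at its maximum

    // Encrypt two blocks in one call; the cipher's internal increment
    // carries into byte 7 for block 2 (like openssl / CounterMode).
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    byte[] twoBlocks = c.doFinal(new byte[32]);

    // calculateIV-style arithmetic for counter 1 wraps the low half to
    // zero and leaves bytes 0-7 untouched, i.e. an all-zero IV here.
    byte[] wrappedIv = new byte[16];
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(wrappedIv));
    byte[] secondAlone = c.doFinal(new byte[16]);

    // The second-block keystreams differ, hence the "corruption" on read:
    System.out.println(Arrays.equals(
        Arrays.copyOfRange(twoBlocks, 16, 32), secondAlone));  // false
  }
}
{code}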

You do need to address the upgrade question: changing this code will render 
useless any data encrypted with the current scheme, unless that data is first 
copied out of an EZ to clear text, the upgrade performed, and then the data 
copied back into the EZ.  This is a *very* heavy price to pay.

I'd also like to know what the use case is. You mention the Java Cipher - is 
your use case that you want to get the raw encrypted data and somehow decrypt 
it by hand outside of the normal path?  If so, how would you get the key to 
decrypt it with?  Why would you want to do this?


 Overflow is not properly handled in calculating final iv for AES CTR
 

 Key: HADOOP-11343
 URL: https://issues.apache.org/jira/browse/HADOOP-11343
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: trunk-win
Reporter: Jerry Chen

 In the AesCtrCryptoCodec calculateIV, as the init IV is a randomly generated 
 16 bytes, 
 final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
   cc.generateSecureRandom(iv);
 the following calculation of iv and counter on an 8-byte (64-bit) space 
 can easily overflow, and that overflow gets lost.  The result would be that 
 the 128-bit data block is encrypted with a wrong counter and cannot be 
 decrypted by standard AES-CTR.
 /**
   * The IV is produced by adding the initial IV to the counter. IV length 
   * should be the same as {@link #AES_BLOCK_SIZE}
   */
  @Override
  public void calculateIV(byte[] initIV, long counter, byte[] IV) {
    Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
    Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);

    System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
    long l = 0;
    for (int i = 0; i < 8; i++) {
      l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
    }
    l += counter;
    IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
    IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
    IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
    IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
    IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
    IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
    IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
    IV[CTR_OFFSET + 7] = (byte) (l);
  }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-04 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196368#comment-14196368
 ] 

Mike Yoder commented on HADOOP-11260:
-

Thanks, [~tucu00].  I tried writing a unit test - having the java client 
specify only SSLv3 as an enabled protocol - but it connected anyway.  There's 
evidently some java crypto behavior going on that I don't quite understand.  So 
I think the answer is yes, difficult, but I'll poke at it a little more this 
morning to see if anything turns up.

 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker
 Attachments: HADOOP-11260.001.patch, HADOOP-11260.002.patch


 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-04 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196454#comment-14196454
 ] 

Mike Yoder commented on HADOOP-11260:
-

Ah, well there's my answer.  The docs for SSLContext say

{quote}
Every implementation of the Java platform is required to support the following 
standard SSLContext protocol: TLSv1
{quote}

And all of the SSLContext algorithms at 
http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#SSLContext
 say they "may support other versions."

In SSLFactory's init(), if I explicitly set the enabled protocols to "SSLv3", 
the internal default client protocol list still has TLSv1 in it.  It looks like 
it's possible to remove SSLv3, but not possible to remove TLSv1.  So nope, no 
easy way to test. 
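
For reference, the kind of probe I was attempting looks roughly like this 
(illustrative only; the real experiment went through Hadoop's SSLFactory):

{code}
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;

public class ProtocolProbe {
  public static void main(String[] args) throws Exception {
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, null, null);  // default key/trust managers
    SSLSocket s = (SSLSocket) ctx.getSocketFactory().createSocket();
    System.out.println("default: " + Arrays.toString(s.getEnabledProtocols()));
    // Try to pin the client to SSLv3 only.  Per the SSLContext docs
    // quoted above, TLSv1 must always be supported, and on later JDKs
    // SSLv3 is disabled entirely, so this call may throw instead.
    s.setEnabledProtocols(new String[] {"SSLv3"});
    System.out.println("after:   " + Arrays.toString(s.getEnabledProtocols()));
  }
}
{code}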



 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker
 Attachments: HADOOP-11260.001.patch, HADOOP-11260.002.patch


 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-03 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-11260:

Attachment: HADOOP-11260.001.patch

 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker
 Attachments: HADOOP-11260.001.patch


 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-03 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-11260:

Attachment: (was: HADOOP-11260.001.patch)

 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker

 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >