[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823575#comment-15823575
 ] 

John Zhuge commented on HADOOP-13987:
-

[~lmccay] Thanks for correcting my wrong impression that 
{{ssl-MODE.xml}} can be an add-on resource to {{core-site.xml}}. All existing 
SSL loading code is consistent in that {{ssl-MODE.xml}} is loaded into an 
empty configuration:
* DFSUtil#loadSslConfiguration
* WebAppUtils#loadSslConfiguration
* SSLFactory#readSSLConfiguration

I made a mistake in my recent {{HttpServer2#loadSSLConfiguration}}. Filed 
HADOOP-13992 to fix it.
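
For reference, a minimal sketch of the shared pattern (a hand-written 
illustration, not the actual Hadoop source; only the {{Configuration}} calls 
are real APIs):

{code}
import org.apache.hadoop.conf.Configuration;

public class SslConfLoadingSketch {
  // ssl-MODE.xml is read into a brand-new Configuration, never layered onto
  // core-site.xml, so properties set only in core-site.xml (such as
  // hadoop.security.credential.provider.path) are not visible here.
  static Configuration loadSslConf(String resource) {
    Configuration sslConf = new Configuration(false); // false: no default resources
    sslConf.addResource(resource); // e.g. "ssl-client.xml" or "ssl-server.xml"
    return sslConf;
  }
}
{code}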

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: I populated the credentials file and 
> added "hadoop.security.credential.provider.path" to core-site.xml, but 
> "hadoop key list" failed due to an incorrect password. When I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml instead, 
> "hadoop key list" worked!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13992) Fix HttpServer2#loadSSLConfiguration

2017-01-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13992:
---

 Summary: Fix HttpServer2#loadSSLConfiguration
 Key: HADOOP-13992
 URL: https://issues.apache.org/jira/browse/HADOOP-13992
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge


HADOOP-13597 added {{HttpServer2#loadSSLConfiguration}}, which deviates from 
the existing way of loading SSL configuration. See these methods:
* DFSUtil#loadSslConfiguration
* WebAppUtils#loadSslConfiguration
* SSLFactory#readSSLConfiguration

Fix {{HttpServer2#loadSSLConfiguration}} and related code in {{KMSWebServer}} 
and {{MiniKMS}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin -getAllServiceState option to get the HA state of all the NameNodes/ResourceManagers

2017-01-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823455#comment-15823455
 ] 

Akira Ajisaka commented on HADOOP-13933:


I'll commit this tomorrow if there are no objections.

> Add haadmin -getAllServiceState option to get the HA state of all the 
> NameNodes/ResourceManagers
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HADOOP-13933.003.patch, HADOOP-13933.004.patch, HADOOP-13933.005.patch, 
> HADOOP-13933.006.patch, HDFS-9559.01.patch
>
>
> Currently we have one command to get the state of a NameNode.
> {code}
> ./hdfs haadmin -getServiceState <serviceId>
> {code}
> It would be good to have a command that gives the state of all the NameNodes.
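
For illustration, a hypothetical session with the proposed option (the exact 
output format is whatever the final patch implements; host names are made up):

{code}
$ hdfs haadmin -getAllServiceState
nn1.example.com:8020     active
nn2.example.com:8020     standby
{code}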



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin -getAllServiceState option to get the HA state of all the NameNodes/ResourceManagers

2017-01-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823447#comment-15823447
 ] 

Akira Ajisaka commented on HADOOP-13933:


LGTM, +1.

> Add haadmin -getAllServiceState option to get the HA state of all the 
> NameNodes/ResourceManagers
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HADOOP-13933.003.patch, HADOOP-13933.004.patch, HADOOP-13933.005.patch, 
> HADOOP-13933.006.patch, HDFS-9559.01.patch
>
>
> Currently we have one command to get the state of a NameNode.
> {code}
> ./hdfs haadmin -getServiceState <serviceId>
> {code}
> It would be good to have a command that gives the state of all the NameNodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12463) TestShell.testGetSignalKillCommand failing on windows

2017-01-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823403#comment-15823403
 ] 

Brahma Reddy Battula commented on HADOOP-12463:
---

HADOOP-10775 fixed this issue. If you all agree, I will close this as a 
duplicate of HADOOP-10775.

> TestShell.testGetSignalKillCommand failing on windows
> -
>
> Key: HADOOP-12463
> URL: https://issues.apache.org/jira/browse/HADOOP-12463
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-12463-001.patch, HADOOP-12463.1.branch-2.patch
>
>
> TestShell.testGetSignalKillCommand is failing on Windows; the command to 
> query a process isn't the one the test expects.
> Maybe we need a policy that nothing goes into Shell without being tested on 
> Windows first: it's where things meet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823189#comment-15823189
 ] 

Greg Senia commented on HADOOP-13988:
-

[~lmccay] I made the changes; sorry for the delay. I think the test error is 
not related to my patch. Can you verify as well:

testGracefulFailoverMultipleZKfcs(org.apache.hadoop.ha.TestZKFailoverController)  
Time elapsed: 70.289 sec  <<< ERROR!
org.apache.hadoop.ha.ServiceFailedException: Unable to become active. Local 
node did not get an opportunity to do so from ZooKeeper, or the local node took 
too long to transition to active.
at 
org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:693)
at org.apache.hado

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We propose the following fix in the getActualUgi method:
> {noformat}
>       }
>       // Use current user by default
>       UserGroupInformation actualUgi = currentUgi;
>       if (currentUgi.getRealUser() != null) {
>         // Use real user for proxy user
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using RealUser for proxyUser");
>         }
>         actualUgi = currentUgi.getRealUser();
>         if (getDoAsUser() != null) {
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("doAsUser exists");
>             LOG.debug("currentUGI realUser shortName: {}",
>                 currentUgi.getRealUser().getShortUserName());
>             LOG.debug("processUGI loginUser shortName: {}",
>                 UserGroupInformation.getLoginUser().getShortUserName());
>           }
>           // Compare short names with equals(); != is reference equality
>           if (!currentUgi.getRealUser().getShortUserName().equals(
>               UserGroupInformation.getLoginUser().getShortUserName())) {
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("currentUGI.realUser does not match UGI.processUser");
>             }
>             actualUgi = UserGroupInformation.getLoginUser();
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("LoginUser for Proxy: {}",
>                   UserGroupInformation.getLoginUser());
>             }
>           }
>         }
>       } else if (!currentUgiContainsKmsDt() &&
>           !currentUgi.hasKerberosCredentials()) {
>         // Use login user for user that does not have either
>         // Kerberos credential or KMS delegation token for KMS operations
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>       }
>       return actualUgi;
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13991) Retry management in NativeS3FileSystem to avoid file upload problem

2017-01-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823174#comment-15823174
 ] 

Steve Loughran commented on HADOOP-13991:
-

Musaddique, thank you for your post and the details on a fix.

I'm sorry to say we aren't going to take this. That's not because there's 
anything wrong with it, but because we've stopped doing any work on s3n other 
than emergency security work, putting all our effort into s3a. Leaving s3n 
alone means that we have a reference S3 connector that is pretty much 
guaranteed not to have any regressions, while in s3a we can do more 
leading-edge work.

S3a does have retry logic, much of it built into the Amazon S3 library itself, 
with some extra bits to deal with things that aren't retried that well (e.g. 
the final commit of a multipart upload).
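
A sketch of what tuning those retries can look like 
({{fs.s3a.attempts.maximum}} is the hadoop-aws property handed to the AWS 
client's retry policy; the wrapper class and the value are just examples):

{code}
import org.apache.hadoop.conf.Configuration;

public class S3ARetrySketch {
  static Configuration withMoreRetries() {
    Configuration conf = new Configuration();
    // Passed through to the AWS SDK retry policy by the s3a client.
    conf.setInt("fs.s3a.attempts.maximum", 10);
    return conf;
  }
}
{code}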

# Please switch to s3a as soon as you can. If you are using Hadoop 2.7.3, it's 
stable enough for use.
# If you want to improve s3a, please get involved in that code: ideally look at 
the work in HADOOP-11694 to see what to look forward to in Hadoop 2.8, and 
HADOOP-13204 for the todo list where help is really welcome, and that includes 
help testing.

thanks,

> Retry management in NativeS3FileSystem to avoid file upload problem
> ---
>
> Key: HADOOP-13991
> URL: https://issues.apache.org/jira/browse/HADOOP-13991
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Musaddique Hossain
>Priority: Minor
>
> NativeS3FileSystem does not support any retry management for failed uploads 
> to S3.
> If an upload to the S3 bucket fails due to a socket timeout or another 
> network exception, the upload is abandoned and the temporary file is deleted.
> java.net.SocketException: Connection reset
>   at java.net.SocketInputStream.read(SocketInputStream.java:196)
>   at java.net.SocketInputStream.read(SocketInputStream.java:122)
>   at org.jets3t.service.S3Service.putObject(S3Service.java:2265)
>   at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:122)
>   at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.fs.s3native.$Proxy8.storeFile(Unknown Source)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.close(NativeS3FileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
>   at 
> org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream.close(CBZip2OutputStream.java:737)
>   at 
> org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream.close(BZip2Codec.java:336)
>   at 
> org.apache.flume.sink.hdfs.HDFSCompressedDataStream.close(HDFSCompressedDataStream.java:155)
>   at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:312)
>   at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:308)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>   at 
> org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
>   at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
> This can be solved with asynchronous retry management.
> We have made the following modifications to NativeS3FileSystem to add retry 
> management; this has been working fine in our production system, without any 
> upload failures:
> {code:title=NativeS3FileSystem.java|borderStyle=solid}
> @@ -36,6 +36,7 @@
>  import java.util.Map;
>  import java.util.Set;
>  import java.util.TreeSet;
> +import java.util.concurrent.Callable;
>  import java.util.concurrent.TimeUnit;
>  import com.google.common.base.Preconditions;
> @@ -279,9 +280,19 @@
>    backupStream.close();
>    LOG.info("OutputStream for key '{}' closed. Now beginning upload", key);
> +  Callable<Void> task = new Callable<Void>() {
> +    private final byte[] md5Hash = digest == null ? null : digest.digest();
> +    public Void call() throws IOException {
> +      store.storeFile(key, backupFile, md5Hash);
> +      return null;
> +    }
> +  };
> +  RetriableTask<Void> r = new RetriableTask<Void>(task);
>    try {
> -    byte[] md5Hash = digest == null ? null : digest.digest();
> -    store.storeFile(key, backupFile, md5Hash);
> +    r.call();
> +  } catch (Exception e) {
> +    throw new IOException(e);
>    } finally {
>      if 
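
The patch references a {{RetriableTask}} helper that is not shown in the 
excerpt. A minimal sketch of what such a wrapper might look like (entirely 
hypothetical; the class name, retry count, and delay are assumptions, not 
part of the submitted patch):

{code}
import java.util.concurrent.Callable;

// Hypothetical retry wrapper: re-runs the wrapped Callable up to maxAttempts
// times, sleeping delayMs between attempts, and rethrows the last failure.
public class RetriableTask<V> implements Callable<V> {
  private final Callable<V> task;
  private final int maxAttempts;
  private final long delayMs;

  public RetriableTask(Callable<V> task) {
    this(task, 3, 1000L);
  }

  public RetriableTask(Callable<V> task, int maxAttempts, long delayMs) {
    this.task = task;
    this.maxAttempts = maxAttempts;
    this.delayMs = delayMs;
  }

  @Override
  public V call() throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return task.call();
      } catch (Exception e) {
        if (attempt >= maxAttempts) {
          throw e; // out of attempts: surface the last failure
        }
        Thread.sleep(delayMs);
      }
    }
  }
}
{code}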

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823161#comment-15823161
 ] 

Hadoop QA commented on HADOOP-13988:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847537/HADOOP-13988.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4e272a3cb982 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed09c14 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> 

[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Status: Patch Available  (was: Open)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3, 2.8.0
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We propose the following fix in the getActualUgi method:
> {noformat}
>       }
>       // Use current user by default
>       UserGroupInformation actualUgi = currentUgi;
>       if (currentUgi.getRealUser() != null) {
>         // Use real user for proxy user
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using RealUser for proxyUser");
>         }
>         actualUgi = currentUgi.getRealUser();
>         if (getDoAsUser() != null) {
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("doAsUser exists");
>             LOG.debug("currentUGI realUser shortName: {}",
>                 currentUgi.getRealUser().getShortUserName());
>             LOG.debug("processUGI loginUser shortName: {}",
>                 UserGroupInformation.getLoginUser().getShortUserName());
>           }
>           // Compare short names with equals(); != is reference equality
>           if (!currentUgi.getRealUser().getShortUserName().equals(
>               UserGroupInformation.getLoginUser().getShortUserName())) {
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("currentUGI.realUser does not match UGI.processUser");
>             }
>             actualUgi = UserGroupInformation.getLoginUser();
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("LoginUser for Proxy: {}",
>                   UserGroupInformation.getLoginUser());
>             }
>           }
>         }
>       } else if (!currentUgiContainsKmsDt() &&
>           !currentUgi.hasKerberosCredentials()) {
>         // Use login user for user that does not have either
>         // Kerberos credential or KMS delegation token for KMS operations
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>       }
>       return actualUgi;
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Attachment: HADOOP-13988.patch

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We propose the following fix in the getActualUgi method:
> {noformat}
>       }
>       // Use current user by default
>       UserGroupInformation actualUgi = currentUgi;
>       if (currentUgi.getRealUser() != null) {
>         // Use real user for proxy user
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using RealUser for proxyUser");
>         }
>         actualUgi = currentUgi.getRealUser();
>         if (getDoAsUser() != null) {
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("doAsUser exists");
>             LOG.debug("currentUGI realUser shortName: {}",
>                 currentUgi.getRealUser().getShortUserName());
>             LOG.debug("processUGI loginUser shortName: {}",
>                 UserGroupInformation.getLoginUser().getShortUserName());
>           }
>           // Compare short names with equals(); != is reference equality
>           if (!currentUgi.getRealUser().getShortUserName().equals(
>               UserGroupInformation.getLoginUser().getShortUserName())) {
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("currentUGI.realUser does not match UGI.processUser");
>             }
>             actualUgi = UserGroupInformation.getLoginUser();
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("LoginUser for Proxy: {}",
>                   UserGroupInformation.getLoginUser());
>             }
>           }
>         }
>       } else if (!currentUgiContainsKmsDt() &&
>           !currentUgi.hasKerberosCredentials()) {
>         // Use login user for user that does not have either
>         // Kerberos credential or KMS delegation token for KMS operations
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>       }
>       return actualUgi;
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Attachment: (was: HADOOP-13988.patch)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We propose the following fix in the getActualUgi method:
> {noformat}
>       }
>       // Use current user by default
>       UserGroupInformation actualUgi = currentUgi;
>       if (currentUgi.getRealUser() != null) {
>         // Use real user for proxy user
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using RealUser for proxyUser");
>         }
>         actualUgi = currentUgi.getRealUser();
>         if (getDoAsUser() != null) {
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("doAsUser exists");
>             LOG.debug("currentUGI realUser shortName: {}",
>                 currentUgi.getRealUser().getShortUserName());
>             LOG.debug("processUGI loginUser shortName: {}",
>                 UserGroupInformation.getLoginUser().getShortUserName());
>           }
>           // Compare short names with equals(); != is reference equality
>           if (!currentUgi.getRealUser().getShortUserName().equals(
>               UserGroupInformation.getLoginUser().getShortUserName())) {
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("currentUGI.realUser does not match UGI.processUser");
>             }
>             actualUgi = UserGroupInformation.getLoginUser();
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("LoginUser for Proxy: {}",
>                   UserGroupInformation.getLoginUser());
>             }
>           }
>         }
>       } else if (!currentUgiContainsKmsDt() &&
>           !currentUgi.hasKerberosCredentials()) {
>         // Use login user for user that does not have either
>         // Kerberos credential or KMS delegation token for KMS operations
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>       }
>       return actualUgi;
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-15 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Status: Open  (was: Patch Available)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3, 2.8.0
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We propose the following fix in the getActualUgi method:
> {noformat}
>       }
>       // Use current user by default
>       UserGroupInformation actualUgi = currentUgi;
>       if (currentUgi.getRealUser() != null) {
>         // Use real user for proxy user
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using RealUser for proxyUser");
>         }
>         actualUgi = currentUgi.getRealUser();
>         if (getDoAsUser() != null) {
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("doAsUser exists");
>             LOG.debug("currentUGI realUser shortName: {}",
>                 currentUgi.getRealUser().getShortUserName());
>             LOG.debug("processUGI loginUser shortName: {}",
>                 UserGroupInformation.getLoginUser().getShortUserName());
>           }
>           // Compare short names with equals(); != is reference equality
>           if (!currentUgi.getRealUser().getShortUserName().equals(
>               UserGroupInformation.getLoginUser().getShortUserName())) {
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("currentUGI.realUser does not match UGI.processUser");
>             }
>             actualUgi = UserGroupInformation.getLoginUser();
>             if (LOG.isDebugEnabled()) {
>               LOG.debug("LoginUser for Proxy: {}",
>                   UserGroupInformation.getLoginUser());
>             }
>           }
>         }
>       } else if (!currentUgiContainsKmsDt() &&
>           !currentUgi.hasKerberosCredentials()) {
>         // Use login user for user that does not have either
>         // Kerberos credential or KMS delegation token for KMS operations
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>       }
>       return actualUgi;
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823079#comment-15823079
 ] 

Hadoop QA commented on HADOOP-13805:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 94 unchanged - 2 fixed = 96 total (was 96) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestUGIWithMiniKdc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847530/HADOOP-13805.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc2685988b5a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed09c14 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11442/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11442/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11442/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11442/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823075#comment-15823075
 ] 

Yongjun Zhang commented on HADOOP-13805:


I have been looking at rev006 quite a bit and it looks good to me, except for 
two things:

1. The change to the constructor
{code}
 UserGroupInformation(Subject subject) {
this(subject, true);
  }
{code}
now changes the original behavior. Even though it is really fixing wrong 
behavior, it is an incompatible change, and other applications that use this 
API may break. Hi [~tucu00], thanks for reporting the issue and for the review 
so far. How do you think we should address that?

2. The test suggested by Alejandro at 
https://issues.apache.org/jira/browse/HADOOP-13805?focusedCommentId=15653489&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15653489
would be better included with the patch.

Thanks.







> UGI.getCurrentUser() fails if user does not have a keytab associated
> 
>
> Key: HADOOP-13805
> URL: https://issues.apache.org/jira/browse/HADOOP-13805
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Attachments: HADOOP-13805.006.patch, HADOOP-13805.01.patch, 
> HADOOP-13805.02.patch, HADOOP-13805.03.patch, HADOOP-13805.04.patch, 
> HADOOP-13805.05.patch
>
>
> HADOOP-13558's intention was to keep UGI from trying to renew the TGT when 
> the UGI is created from an existing Subject, as in that case the keytab is 
> not 'owned' by UGI but by the creator of the Subject.
> In HADOOP-13558 we introduced a new private UGI constructor 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}}, and 
> we use it with TRUE only when doing a {{UGI.loginUserFromSubject()}}.
> The problem is, when we call {{UGI.getCurrentUser()}} and the UGI was created 
> via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new 
> UserGroupInformation(subject)}}, which delegates to 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} with 
> externalKeyTab == *FALSE*.
> Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to log in 
> using a non-existing keytab if the TGT expired.
> This problem is experienced in {{KMSClientProvider}} when used by the HDFS 
> filesystem client accessing an encryption zone.
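
A self-contained mock of the delegation described above (not the Hadoop 
source; field and parameter types are simplified for illustration):

{code}
// Mock of the constructor chain: the one-arg constructor used on the
// UGI.getCurrentUser() path hard-codes externalKeyTab == false, so the
// resulting UGI believes it owns a keytab it can relogin from.
class MockUGI {
  final boolean externalKeyTab;

  MockUGI(Object subject) {
    this(subject, false); // getCurrentUser() path
  }

  MockUGI(Object subject, boolean externalKeyTab) {
    this.externalKeyTab = externalKeyTab; // loginUserFromSubject() passes true
  }
}
{code}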



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org