[jira] [Commented] (HADOOP-11867) Add a high-performance vectored read API.

2023-10-24 Thread Yuanbo Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779307#comment-17779307
 ] 

Yuanbo Liu commented on HADOOP-11867:
-

[~mthakur] [~ste...@apache.org] Thanks for shipping this feature, nice work.
When I went through the implementation of readVectored in S3AInputStream, it 
looks pretty much like pre-fetching with a thread pool. 
If an input stream already has a pre-fetching mechanism, then bringing the 
vectored read feature into that input stream will not add much benefit, 
right?
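
For context, a minimal sketch of how the shipped API is driven, assuming the 
Hadoop 3.3.5 {{PositionedReadable#readVectored}} signature; {{fs}}, {{path}}, 
and the offsets below are placeholders, not taken from this thread:
{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Build a list of ranges, hand them to the stream in one call, then await
// each range's future. The stream may coalesce nearby ranges and fetch them
// in parallel on its own thread pool.
static void vectoredReadDemo(FileSystem fs, Path path) throws Exception {
  List<FileRange> ranges = new ArrayList<>();
  ranges.add(FileRange.createFileRange(0, 4096));        // 4 KiB at offset 0
  ranges.add(FileRange.createFileRange(1 << 20, 8192));  // 8 KiB at 1 MiB
  try (FSDataInputStream in = fs.open(path)) {
    in.readVectored(ranges, ByteBuffer::allocate);       // schedules all reads
    for (FileRange r : ranges) {
      ByteBuffer data = r.getData().get();               // block per range
      System.out.println("read " + data.remaining() + " bytes");
    }
  }
}
{code}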

> Add a high-performance vectored read API.
> -
>
> Key: HADOOP-11867
> URL: https://issues.apache.org/jira/browse/HADOOP-11867
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3, hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: performance, pull-request-available
> Fix For: 3.3.5
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> The most significant way to read from a filesystem efficiently is to let the 
> FileSystem implementation handle the seek behaviour underneath the API so it 
> can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since it allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into 
> each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub this in as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override it in any 
> way.
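
A minimal sketch of that default base implementation, assuming {{stream}} is a 
seekable {{FSDataInputStream}} field of the enclosing class; illustrative 
only, not the committed code:
{code:java}
// Fallback: one blocking seek + readFully per requested range, so no
// FileSystem subclass is forced to override anything to stay correct.
public void readFully(long[] offsets, ByteBuffer[] chunks) throws IOException {
  for (int i = 0; i < chunks.length; i++) {
    byte[] buf = new byte[chunks[i].remaining()];
    stream.seek(offsets[i]);      // position at the next requested range
    stream.readFully(buf);        // fill chunk.remaining() bytes
    chunks[i].put(buf).flip();    // make the bytes readable by the caller
  }
}
{code}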






[jira] [Deleted] (HADOOP-18225) Consider attaching block location info from client when closing a completed file

2022-05-05 Thread Yuanbo Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu deleted HADOOP-18225:



> Consider attaching block location info from client when closing a completed 
> file
> 
>
> Key: HADOOP-18225
> URL: https://issues.apache.org/jira/browse/HADOOP-18225
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Priority: Major
>
> When a file is finished, the client will not close it until the DataNodes 
> send RECEIVED_BLOCK via an incremental block report (IBR) or the client 
> times out. We can always see this kind of log in the NameNode:
> {code:java}
> is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file{code}
> Since the client already has the last block's locations, it should not be 
> necessary to rely on the IBR from the DataNode when closing the file.






[jira] [Created] (HADOOP-18225) Consider attaching block location info from client when closing a completed file

2022-05-05 Thread Yuanbo Liu (Jira)
Yuanbo Liu created HADOOP-18225:
---

 Summary: Consider attaching block location info from client when 
closing a completed file
 Key: HADOOP-18225
 URL: https://issues.apache.org/jira/browse/HADOOP-18225
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yuanbo Liu


When a file is finished, the client will not close it until the DataNodes send 
RECEIVED_BLOCK via an incremental block report (IBR) or the client times out. 
We can always see this kind of log in the NameNode:


{code:java}
is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file{code}

Since the client already has the last block's locations, it should not be 
necessary to rely on the IBR from the DataNode when closing the file.
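
For background, a rough model of what the client-side close path does today, 
loosely modelled on DFSOutputStream#completeFile; the retry budget and backoff 
here are illustrative, not the real defaults:
{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

// Simplified model: the client polls complete() until the NameNode has seen
// enough replicas reported via IBR. The proposal is to pass the block
// locations the client already holds, so the NameNode need not wait.
void completeFile(ClientProtocol namenode, String src, String clientName,
    ExtendedBlock lastBlock, long fileId) throws IOException {
  int retries = 5;                       // illustrative retry budget
  long sleepMs = 400;                    // illustrative backoff
  while (!namenode.complete(src, clientName, lastBlock, fileId)) {
    if (retries-- <= 0) {
      throw new IOException("Unable to close file: the last block does not"
          + " yet have the minimum number of reported replicas");
    }
    try {
      Thread.sleep(sleepMs);             // back off, then poll again
    } catch (InterruptedException e) {
      throw new IOException(e);
    }
  }
}
{code}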






[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2022-03-16 Thread Yuanbo Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17507576#comment-17507576
 ] 

Yuanbo Liu commented on HADOOP-15864:
-

How about retrying to resolve the IP address instead of returning empty text?
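
A rough sketch of that retry idea, relative to the {{buildTokenService}} code 
quoted below; illustrative only, not a patch (constructing a new 
{{InetSocketAddress}} re-triggers DNS resolution):
{code:java}
// Retry resolution a few times before giving up, instead of failing fast
// on the first unresolved address.
InetSocketAddress resolved = addr;
for (int i = 0; i < 3 && resolved.isUnresolved(); i++) {
  resolved = new InetSocketAddress(addr.getHostName(), addr.getPort());
}
if (resolved.isUnresolved()) { // still no IP: fail as today
  throw new IllegalArgumentException(
      new UnknownHostException(addr.getHostName()));
}
host = resolved.getAddress().getHostAddress();
{code}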

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Critical
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.004.patch, HADOOP-15864.005.patch, 
> HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and task execution fail if the Standby NameNode domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security. Since in HDFS HA mode the UGI needs to include a separate 
> token for each NameNode in order to deal with the Active-Standby switch, the 
> two tokens' content is of course the same. 
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix it, but I don't think the two tickets 
> resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by: 

[jira] [Updated] (HADOOP-14327) KerberosAuthenticationHandler#authenticate throws meaningless exception when server principals set is empty

2017-04-24 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14327:

Status: Patch Available  (was: Open)

> KerberosAuthenticationHandler#authenticate throws meaningless exception when 
> server principals set is empty
> ---
>
> Key: HADOOP-14327
> URL: https://issues.apache.org/jira/browse/HADOOP-14327
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14327.001.patch
>
>
> If somehow KerberosAuthenticationHandler#authenticate gets an empty service 
> principal set, it throws a useless exception like the following:
> {noformat}
> 2017-04-19 10:11:39,812 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Authentication exception: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:452)
> at 
> org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(MultiSchemeAuthenticationHandler.java:193)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:400)
> at 
> org.apache.hadoop.security.token.delegation.web.MultiSchemeDelegationTokenAuthenticationHandler.authenticate(MultiSchemeDelegationTokenAuthenticationHandler.java:180)
> at 
> org.apache.solr.security.RequestContinuesRecorderAuthenticationHandler.authenticate(RequestContinuesRecorderAuthenticationHandler.java:69)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:532)
> {noformat}
> The following code has a logic error. If serverPrincipals is empty, token 
> remains null in the end, but lastException is also null, so throwing it is 
> meaningless. It should throw an exception with a more meaningful message.
> {code:title=KerberosAuthenticationHandler#authenticate}
> AuthenticationToken token = null;
> Exception lastException = null;
> for (String serverPrincipal : serverPrincipals) {
>   try {
>     token = runWithPrincipal(serverPrincipal, clientToken,
>         base64, response);
>   } catch (Exception ex) {
>     lastException = ex;
>     LOG.trace("Auth {} failed with {}", serverPrincipal, ex);
>   } finally {
>     if (token != null) {
>       LOG.trace("Auth {} successfully", serverPrincipal);
>       break;
>     }
>   }
> }
> if (token != null) {
>   return token;
> } else {
>   throw new AuthenticationException(lastException);
> }
> {code}
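
A hedged sketch of the kind of guard the patch is after; illustrative only, 
not the attached HADOOP-14327.001.patch (the messages are made up):
{code:java}
// Fail fast with a meaningful message when no principals are configured,
// instead of reaching the end with token == null and lastException == null.
if (serverPrincipals.isEmpty()) {
  throw new AuthenticationException(
      "No server principals configured, cannot authenticate client token");
}
// ... existing loop over serverPrincipals ...
if (token != null) {
  return token;
}
throw new AuthenticationException(
    "Authentication failed for all " + serverPrincipals.size()
        + " server principals", lastException);
{code}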






[jira] [Updated] (HADOOP-14327) KerberosAuthenticationHandler#authenticate throws meaningless exception when server principals set is empty

2017-04-24 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14327:

Attachment: HADOOP-14327.001.patch

Attaching the v1 patch for this JIRA.

> KerberosAuthenticationHandler#authenticate throws meaningless exception when 
> server principals set is empty
> ---
>
> Key: HADOOP-14327
> URL: https://issues.apache.org/jira/browse/HADOOP-14327
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14327.001.patch
>
>
> If somehow KerberosAuthenticationHandler#authenticate gets an empty service 
> principal set, it throws a useless exception like the following:
> {noformat}
> 2017-04-19 10:11:39,812 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Authentication exception: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:452)
> at 
> org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(MultiSchemeAuthenticationHandler.java:193)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:400)
> at 
> org.apache.hadoop.security.token.delegation.web.MultiSchemeDelegationTokenAuthenticationHandler.authenticate(MultiSchemeDelegationTokenAuthenticationHandler.java:180)
> at 
> org.apache.solr.security.RequestContinuesRecorderAuthenticationHandler.authenticate(RequestContinuesRecorderAuthenticationHandler.java:69)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:532)
> {noformat}
> The following code has a logic error. If serverPrincipals is empty, token 
> remains null in the end, but lastException is also null, so throwing it is 
> meaningless. It should throw an exception with a more meaningful message.
> {code:title=KerberosAuthenticationHandler#authenticate}
> AuthenticationToken token = null;
> Exception lastException = null;
> for (String serverPrincipal : serverPrincipals) {
>   try {
>     token = runWithPrincipal(serverPrincipal, clientToken,
>         base64, response);
>   } catch (Exception ex) {
>     lastException = ex;
>     LOG.trace("Auth {} failed with {}", serverPrincipal, ex);
>   } finally {
>     if (token != null) {
>       LOG.trace("Auth {} successfully", serverPrincipal);
>       break;
>     }
>   }
> }
> if (token != null) {
>   return token;
> } else {
>   throw new AuthenticationException(lastException);
> }
> {code}






[jira] [Assigned] (HADOOP-14327) KerberosAuthenticationHandler#authenticate throws meaningless exception when server principals set is empty

2017-04-20 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14327:
---

Assignee: Yuanbo Liu

> KerberosAuthenticationHandler#authenticate throws meaningless exception when 
> server principals set is empty
> ---
>
> Key: HADOOP-14327
> URL: https://issues.apache.org/jira/browse/HADOOP-14327
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>
> If somehow KerberosAuthenticationHandler#authenticate gets an empty service 
> principal set, it throws a useless exception like the following:
> {noformat}
> 2017-04-19 10:11:39,812 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Authentication exception: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException
> at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:452)
> at 
> org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(MultiSchemeAuthenticationHandler.java:193)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:400)
> at 
> org.apache.hadoop.security.token.delegation.web.MultiSchemeDelegationTokenAuthenticationHandler.authenticate(MultiSchemeDelegationTokenAuthenticationHandler.java:180)
> at 
> org.apache.solr.security.RequestContinuesRecorderAuthenticationHandler.authenticate(RequestContinuesRecorderAuthenticationHandler.java:69)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:532)
> {noformat}
> The following code has a logic error. If serverPrincipals is empty, token 
> remains null in the end, but lastException is also null, so throwing it is 
> meaningless. It should throw an exception with a more meaningful message.
> {code:title=KerberosAuthenticationHandler#authenticate}
> AuthenticationToken token = null;
> Exception lastException = null;
> for (String serverPrincipal : serverPrincipals) {
>   try {
>     token = runWithPrincipal(serverPrincipal, clientToken,
>         base64, response);
>   } catch (Exception ex) {
>     lastException = ex;
>     LOG.trace("Auth {} failed with {}", serverPrincipal, ex);
>   } finally {
>     if (token != null) {
>       LOG.trace("Auth {} successfully", serverPrincipal);
>       break;
>     }
>   }
> }
> if (token != null) {
>   return token;
> } else {
>   throw new AuthenticationException(lastException);
> }
> {code}






[jira] [Commented] (HADOOP-14314) The OpenSolaris taxonomy link is dead in InterfaceClassification.md

2017-04-17 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972026#comment-15972026
 ] 

Yuanbo Liu commented on HADOOP-14314:
-

[~templedf] Thanks for filing this JIRA.
Since we didn't find an official link to replace the old one, I suggest simply 
removing it. 

> The OpenSolaris taxonomy link is dead in InterfaceClassification.md
> ---
>
> Key: HADOOP-14314
> URL: https://issues.apache.org/jira/browse/HADOOP-14314
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>
> Unfortunately, Oracle took down opensolaris.org, so the link is dead.  The 
> only replacement I could find with a quick search was this PDF: 
> http://cuddletech.com/opensolaris/osdevref.pdf






[jira] [Updated] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2017-04-14 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14295:

Attachment: HADOOP-14295.004.patch

> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, 
> HADOOP-14295.003.patch, HADOOP-14295.004.patch
>
>
> When we turn on Hadoop UI Kerberos and try to access the Datanode /logs, the 
> proxy (Knox) gets an authorization failure and its host shows as 127.0.0.1 
> even though Knox wasn't local to the Datanode. Error message:
> {quote}
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> {quote}
> We were able to figure out that the Datanode has Jetty listening on localhost 
> and that Netty is used to serve requests to the DataNode; this was a measure 
> to improve performance because of Netty's async NIO design.
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.
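
A hedged sketch of the proposed check; illustrative only, not the attached 
patch (the helper name is made up):
{code:java}
import javax.servlet.http.HttpServletRequest;

// Prefer the X-Forwarded-For header injected by the proxy, falling back to
// getRemoteAddr() when the header is absent.
private String getRemoteAddress(HttpServletRequest request) {
  String forwarded = request.getHeader("X-Forwarded-For");
  if (forwarded != null && !forwarded.isEmpty()) {
    // The header may carry a comma-separated chain; the first entry
    // is the originating client.
    return forwarded.split(",")[0].trim();
  }
  return request.getRemoteAddr();
}
{code}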






[jira] [Commented] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2017-04-14 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968786#comment-15968786
 ] 

Yuanbo Liu commented on HADOOP-14295:
-

[~jojochuang] Thanks for your review.
Attach v4 patch for this JIRA.

> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, 
> HADOOP-14295.003.patch, HADOOP-14295.004.patch
>
>
> When we turn on Hadoop UI Kerberos and try to access the Datanode /logs, the 
> proxy (Knox) gets an authorization failure and its host shows as 127.0.0.1 
> even though Knox wasn't local to the Datanode. Error message:
> {quote}
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> {quote}
> We were able to figure out that the Datanode has Jetty listening on localhost 
> and that Netty is used to serve requests to the DataNode; this was a measure 
> to improve performance because of Netty's async NIO design.
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.






[jira] [Updated] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2017-04-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14295:

Description: 
When we turn on Hadoop UI Kerberos and try to access the Datanode /logs, the 
proxy (Knox) gets an authorization failure and its host shows as 127.0.0.1 
even though Knox wasn't local to the Datanode. Error message:
{quote}
"2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
(AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
{quote}
We were able to figure out that the Datanode has Jetty listening on localhost 
and that Netty is used to serve requests to the DataNode; this was a measure 
to improve performance because of Netty's async NIO design.

I propose to add a check for the x-forwarded-for header, since proxies usually 
inject that header, before we do a getRemoteAddr.




  was:
Many production environments use firewalls to protect network traffic. In the 
specific case of the DataNode UI and other Hadoop servers whose ports fall on 
the list of firewalled ports, 
org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses 
getRemoteAddr(HttpServletRequest), which may return the firewall host such as 
127.0.0.1.
This is unfortunately bad: if you are using a proxy in addition for perimeter 
protection, and you have added your proxy as a super user, then checking the 
proxy IP to authorize the user fails, since getRemoteAddr returns the IP of 
the firewall (127.0.0.1).

"2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
(AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"

I propose to add a check for the x-forwarded-for header, since proxies usually 
inject that header, before we do a getRemoteAddr.





> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, 
> HADOOP-14295.003.patch
>
>
> When we turn on Hadoop UI Kerberos and try to access the Datanode /logs, the 
> proxy (Knox) gets an authorization failure and its host shows as 127.0.0.1 
> even though Knox wasn't local to the Datanode. Error message:
> {quote}
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> {quote}
> We were able to figure out that the Datanode has Jetty listening on localhost 
> and that Netty is used to serve requests to the DataNode; this was a measure 
> to improve performance because of Netty's async NIO design.
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.






[jira] [Updated] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2017-04-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14295:

Summary: Authentication proxy filter may fail authorization because of 
getRemoteAddr  (was: Authentication proxy filter on firewall cluster may fail 
authorization because of getRemoteAddr)

> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, 
> HADOOP-14295.003.patch
>
>
> Many production environments use firewalls to protect network traffic. In 
> the specific case of the DataNode UI and other Hadoop servers whose ports 
> fall on the list of firewalled ports, 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses 
> getRemoteAddr(HttpServletRequest), which may return the firewall host such 
> as 127.0.0.1.
> This is unfortunately bad: if you are using a proxy in addition for 
> perimeter protection, and you have added your proxy as a super user, then 
> checking the proxy IP to authorize the user fails, since getRemoteAddr 
> returns the IP of the firewall (127.0.0.1).
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.






[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr

2017-04-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966980#comment-15966980
 ] 

Yuanbo Liu commented on HADOOP-14295:
-

[~jojochuang] Thanks for your review.
{quote}
could you fix the checkstyle warning
{quote}
Sure, I could do that.
{quote}
As you said this is for accessing...
{quote}
If we use a proxy server (Knox) to access the NameNode log locally, it doesn't 
print the warning log. If we access the NameNode log directly, then we should 
attach "x-forwarded-server"; otherwise the warning log is unavoidable. It has 
no impact on the RM/NM because they don't use 
{{AuthenticationWithProxyUserFilter.java}} when they construct their filter 
chains.
But I think the warning log is harmless, right? After all, it will ignore 
"x-forwarded-server" and fall back to getRemoteAddr if the value is empty.

> Authentication proxy filter on firewall cluster may fail authorization 
> because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch
>
>
> Many production environments use firewalls to protect network traffic. In 
> the specific case of the DataNode UI and other Hadoop servers whose ports 
> fall on the list of firewalled ports, 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses 
> getRemoteAddr(HttpServletRequest), which may return the firewall host such 
> as 127.0.0.1.
> This is unfortunately bad: if you are using a proxy in addition for 
> perimeter protection, and you have added your proxy as a super user, then 
> checking the proxy IP to authorize the user fails, since getRemoteAddr 
> returns the IP of the firewall (127.0.0.1).
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.






[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr

2017-04-11 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965343#comment-15965343
 ] 

Yuanbo Liu commented on HADOOP-14295:
-

[~jeffreyr97] Thanks for filing this JIRA and the good summary.
[~jojochuang] Thanks for looking into this JIRA.
Wei-Chiu, if you look into {{DatanodeHttpServer.java}}, you can find that it 
uses Netty to set up an internal proxy server. I also took a look at the HTTP 
server in the NameNode; there is no such proxy server there. So getRemoteAddr 
doesn't work as expected when users access some links on the Datanode. Hope 
this info helps you get the background of this JIRA.
The patch from Jeff looks nice, and we've tested it in our personal cluster. 
After Wei-Chiu's comments are addressed, I'm +1 (non-binding) on your patch.

> Authentication proxy filter on firewall cluster may fail authorization 
> because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch
>
>
> Many production environments use firewalls to protect network traffic. In 
> the specific case of the DataNode UI and other Hadoop servers whose ports 
> fall on the list of firewalled ports, 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses 
> getRemoteAddr(HttpServletRequest), which may return the firewall host such 
> as 127.0.0.1.
> This is unfortunately bad: if you are using a proxy in addition for 
> perimeter protection, and you have added your proxy as a super user, then 
> checking the proxy IP to authorize the user fails, since getRemoteAddr 
> returns the IP of the firewall (127.0.0.1).
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we do a getRemoteAddr.






[jira] [Assigned] (HADOOP-14287) Compiling trunk with -DskipShade fails

2017-04-06 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14287:
---

Assignee: Arun Suresh

> Compiling trunk with -DskipShade fails 
> ---
>
> Key: HADOOP-14287
> URL: https://issues.apache.org/jira/browse/HADOOP-14287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Arpit Agarwal
>Assignee: Arun Suresh
> Attachments: HADOOP-14287.001.patch
>
>
> I get the following errors when compiling trunk with -DskipShade. It 
> succeeds with shading.
> {code}
> [ERROR] COMPILATION ERROR :
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[41,30]
>  cannot find symbol
>   symbol:   class HdfsConfiguration
>   location: package org.apache.hadoop.hdfs
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[45,34]
>  cannot find symbol
>   symbol:   class WebHdfsConstants
>   location: package org.apache.hadoop.hdfs.web
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[71,36]
>  cannot find symbol
>   symbol:   class HdfsConfiguration
>   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[85,53]
>  cannot access org.apache.hadoop.hdfs.DistributedFileSystem
>   class file for org.apache.hadoop.hdfs.DistributedFileSystem not found
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[109,38]
>  cannot find symbol
>   symbol:   variable WebHdfsConstants
> {code}






[jira] [Commented] (HADOOP-14287) Compiling trunk with -DskipShade fails

2017-04-06 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960185#comment-15960185
 ] 

Yuanbo Liu commented on HADOOP-14287:
-

+1 (non-binding). Sorry for breaking the compile.

> Compiling trunk with -DskipShade fails 
> ---
>
> Key: HADOOP-14287
> URL: https://issues.apache.org/jira/browse/HADOOP-14287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Arpit Agarwal
> Attachments: HADOOP-14287.001.patch
>
>
> I get the following errors when compiling trunk with -DskipShade. It 
> succeeds with shading.
> {code}
> [ERROR] COMPILATION ERROR :
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[41,30]
>  cannot find symbol
>   symbol:   class HdfsConfiguration
>   location: package org.apache.hadoop.hdfs
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[45,34]
>  cannot find symbol
>   symbol:   class WebHdfsConstants
>   location: package org.apache.hadoop.hdfs.web
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[71,36]
>  cannot find symbol
>   symbol:   class HdfsConfiguration
>   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[85,53]
>  cannot access org.apache.hadoop.hdfs.DistributedFileSystem
>   class file for org.apache.hadoop.hdfs.DistributedFileSystem not found
> [ERROR] 
> /hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[109,38]
>  cannot find symbol
>   symbol:   variable WebHdfsConstants
> {code}






[jira] [Assigned] (HADOOP-14257) hadoop-auth and hadoop-annotations jars are in lib directory

2017-03-30 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14257:
---

Assignee: Yuanbo Liu

> hadoop-auth and hadoop-annotations jars are in lib directory
> 
>
> Key: HADOOP-14257
> URL: https://issues.apache.org/jira/browse/HADOOP-14257
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> Poking around in the 3.0.0-alpha2 tarball, I noticed that the auth and 
> annotations JARs seem to be in the wrong place (the lib dir):
> {noformat}
> ./share/hadoop/common/lib/hadoop-annotations-3.0.0-alpha2.jar
> ./share/hadoop/common/lib/hadoop-auth-3.0.0-alpha2.jar
> {noformat}






[jira] [Commented] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906823#comment-15906823
 ] 

Yuanbo Liu commented on HADOOP-14120:
-

[~ste...@apache.org] Thanks for your review.
I'm not sure whether I get your point; I assume you're asking why I didn't 
write a test case for my patch.
This patch is trying to get rid of {{setOptionalPutRequestParameters}} in 
{{S3ABlockOutputStream#putObject}}, because {{setOptionalPutRequestParameters}} 
is already called in {{S3AFileSystem#newPutObjectRequest}}; the duplicated call 
in {{putObject}} only confuses users.
If "fs.s3a.fast.upload" is true, {{S3ABlockOutputStream}} is used by 
{{S3AFileSystem#create}}, so when the output stream is closed the existing 
test cases cover my code change. I've seen a lot of create operations on 
{{S3AFileSystem}} in many test cases, so I believe there is no need to add a 
test case for {{S3ABlockOutputStream#putObject}}.

Triggering the S3 test cases in a local environment seems to need some 
configuration and an S3 account, right? 

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is only 
> superfluous and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Updated] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14111:

Status: Patch Available  (was: Open)

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14111.001.patch
>
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@Ignore}}. They aren't running, still have a maintenance cost, and appear 
> in test runs as skipped. 
> Proposed: cut them out entirely.






[jira] [Updated] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14120:

Attachment: HADOOP-14120.001.patch

Uploading the v1 patch for this JIRA.

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is only 
> superfluous and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Updated] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14120:

Status: Patch Available  (was: Open)

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is only 
> superfluous and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Assigned] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14120:
---

Assignee: Yuanbo Liu

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is only 
> superfluous and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Updated] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14111:

Attachment: HADOOP-14111.001.patch

Uploading the v1 patch.

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14111.001.patch
>
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@Ignore}}. They aren't running, still have a maintenance cost, and appear 
> in test runs as skipped. 
> Proposed: cut them out entirely.






[jira] [Assigned] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14111:
---

Assignee: Yuanbo Liu

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@Ignore}}. They aren't running, still have a maintenance cost, and appear 
> in test runs as skipped. 
> Proposed: cut them out entirely.






[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2017-03-08 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902624#comment-15902624
 ] 

Yuanbo Liu commented on HADOOP-13759:
-

[~andrew.wang] Thanks for your response.
{quote}
So we could work on getting rid of SSH fencing, and then doing this split to 
move out jsch
{quote}
I'm not sure about the background of SSH fencing, but when I google "hadoop 
ssh fencing", some fresh discussions of the topic still show up. If we need to 
get rid of this legacy code, I'd like to raise another JIRA to discuss it. Any 
thoughts?

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2017-03-08 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15901142#comment-15901142
 ] 

Yuanbo Liu commented on HADOOP-13759:
-

[~ste...@apache.org] Thanks for your response.
{quote}
The fact that nobody else has noticed is probably a metric of its use
{quote}
Probably the best way to get people to notice the FTP and SFTP file systems is 
just deleting them :)

Anyway, I will try to split them into hadoop-tools/hadoop-ftp since you agree 
to do so. But we should note that splitting them cannot really remove the jsch 
dependency from Hadoop Common, so both the title and description need to be 
changed if we want to keep this JIRA.

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2017-03-08 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900945#comment-15900945
 ] 

Yuanbo Liu commented on HADOOP-13759:
-

Sorry for the late response.
[~andrew.wang] We cannot get rid of jsch by splitting SFTP into a module under 
hadoop-tools, because {{SshFenceByTcpPort.java}} in {{hadoop-common}} also 
depends on jsch.
[~coleferrier] You're right, jsch was committed before SFTP.
But I like the idea of splitting SFTP and FTP out of {{hadoop-common}} into 
{{hadoop-tools}}. [~andrew.wang], if you agree, I'd like to close this JIRA 
and open another one to track the migration of SFTP and FTP.

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.






[jira] [Updated] (HADOOP-14060) KMS /logs servlet should have access control

2017-02-21 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14060:

Attachment: HADOOP-14060-tmp.001.patch

Adding a temp patch for your reference.
Note that this patch is not a complete solution for this defect.
KMS uses web.xml to define the filter chain for web paths; that works for 
{{WebAppContext}}, but {{/logs}} doesn't use {{WebAppContext}} to define its 
filter chain. So there are two ways to fix this defect:
* Change the context of {{/logs}} in {{HttpServer2.java}}, as in my temp patch
* Use a filter initializer to set up a global filter chain for all the web 
paths in {{HttpServer2.java}}, as sketched below

Since {{HttpServer2.java}} is widely used across Hadoop components, I'd prefer 
#2 to reduce compatibility issues. But #2 is a bit more complex and needs some 
time to turn into a patch.
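
A hedged sketch of option #2; illustrative only, not the eventual patch (it 
assumes the existing {{FilterInitializer}}/{{FilterContainer}} hooks in 
hadoop-common, and the filter name and parameters here are made up):
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;
import org.apache.hadoop.security.authentication.server.AuthenticationFilter;

// Registers an authentication filter globally, so /logs gets the same
// chain as /conf, /jmx, /logLevel and /stacks without touching web.xml.
public class GlobalAuthFilterInitializer extends FilterInitializer {
  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    Map<String, String> params = new HashMap<>();
    params.put(AuthenticationFilter.AUTH_TYPE, "kerberos"); // hypothetical config
    container.addGlobalFilter("globalAuthFilter",
        AuthenticationFilter.class.getName(), params);
  }
}
{code}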

> KMS /logs servlet should have access control
> 
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14060-tmp.001.patch
>
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf is quite similar. This makes me believe that 
> /logs should be subject to the same access control, as intended by the 
> original developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14060) KMS /logs servlet should have access control

2017-02-21 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875737#comment-15875737
 ] 

Yuanbo Liu commented on HADOOP-14060:
-

The filter chain of {{/logs}} is 
{code}
NoCacheFilter->safety->static_user_filter
{code}
while the filter chain of {{/jmx}} is
{code}
NoCacheFilter->safety->static_user_filter->authFilter->MDCFilter
{code}
The authFilter requires Kerberos authentication; that's why the responses of 
{{/jmx}} and {{/logs}} are different.

> KMS /logs servlet should have access control
> 
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf are quite similar. This makes me believe that 
> /logs should be subject to the same access control as intended by the original 
> developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14060) KMS /logs servlet should have access control

2017-02-21 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875708#comment-15875708
 ] 

Yuanbo Liu commented on HADOOP-14060:
-

[~jzhuge] Thanks for your details.
I can reproduce this defect with your steps.
From a quick look, I can tell that this defect is not related to access 
control; it is a filter chain issue. I'm trying to enable KMS debug mode and 
compare the filter chains of {{/jmx}} and {{/logs}} to find out the 
differences.

> KMS /logs servlet should have access control
> 
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf are quite similar. This makes me believe that 
> /logs should be subject to the same access control as intended by the original 
> developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14073) Document default HttpServer2 servlets

2017-02-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875375#comment-15875375
 ] 

Yuanbo Liu commented on HADOOP-14073:
-

[~jzhuge]
{{/logs}} requires authorization to access, while {{/jmx}} and {{/conf}} don't 
by default in {{HttpServer2.java}}. I think if KMS inherits from 
{{HttpServer2.java}} or takes advantage of it, the access behaviors in HDFS, 
YARN, and KMS should be consistent.

It would be great if you could provide some steps/test cases to reproduce the 
defect that {{/logs}} doesn't have access control in HADOOP-14060. If the 
defect exists, we should take care of it.

> Document default HttpServer2 servlets
> -
>
> Key: HADOOP-14073
> URL: https://issues.apache.org/jira/browse/HADOOP-14073
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: Yuanbo Liu
>Priority: Minor
>
> Since many components (NN Web UI, YARN RM/JH, KMS, HttpFS, etc) now use 
> HttpServer2 which provides default servlets /conf, /jmx, /logLevel, /stacks, 
> /logs, and /static, it'd be nice to have an independent markdown doc to describe 
> authentication and authorization of these servlets. The docs for the related 
> components can just link to this markdown doc.
> Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
> https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.
> I also made a poor attempt in 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm#L153-L197.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14073) Document default HttpServer2 servlets

2017-02-20 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14073:
---

Assignee: Yuanbo Liu

> Document default HttpServer2 servlets
> -
>
> Key: HADOOP-14073
> URL: https://issues.apache.org/jira/browse/HADOOP-14073
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: Yuanbo Liu
>Priority: Minor
>
> Since many components (NN Web UI, YARN RM/JH, KMS, HttpFS, etc) now use 
> HttpServer2 which provides default servlets /conf, /jmx, /logLevel, /stacks, 
> /logs, and /static, it'd be nice to have an independent markdown doc to describe 
> authentication and authorization of these servlets. The docs for the related 
> components can just link to this markdown doc.
> Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
> https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.
> I also made a poor attempt in 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm#L153-L197.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-18 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15873427#comment-15873427
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

[~eyang] Thanks for your review and commit.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch, 
> HADOOP-14077.003.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14037) client.handleSaslConnectionFailure needlessly wraps IOEs

2017-02-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14037:

Attachment: HADOOP-14037.001.patch

This is some refactoring work; initiating a v1 patch for this JIRA.
I don't have permission to create a wiki entry, so it would be great if someone 
could provide some guidance here. Thanks in advance.
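
For context, a minimal, self-contained sketch of the refactoring idea; the 
method name is illustrative, not the actual {{Client}} code:
{code}
import java.io.IOException;
import javax.security.sasl.SaslException;

public class RethrowSketch {
  // SaslException already is an IOException, so rethrow it as-is instead of
  // wrapping it in a fresh IOE, which buries the auth-related root cause.
  static void handleConnectionFailure(Exception ex) throws IOException {
    if (ex instanceof IOException) {
      throw (IOException) ex;                        // keep the original type
    }
    throw new IOException("Failed to connect", ex);  // wrap only non-IOEs
  }

  public static void main(String[] args) {
    try {
      handleConnectionFailure(new SaslException("GSS initiate failed"));
    } catch (IOException e) {
      // Prints "SaslException: GSS initiate failed", not a wrapped IOE.
      System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
    }
  }
}
{code}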

> client.handleSaslConnectionFailure needlessly wraps IOEs
> 
>
> Key: HADOOP-14037
> URL: https://issues.apache.org/jira/browse/HADOOP-14037
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14037.001.patch
>
>
> {{client.handleSaslConnectionFailure}} needlessly wraps IOEs, including
> SaslException as IOE, when SaslException is already an IOE.
> This complicates stack traces and hides the fact that a connect problem is 
> due to auth, not network



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14037) client.handleSaslConnectionFailure needlessly wraps IOEs

2017-02-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14037:

Status: Patch Available  (was: Open)

> client.handleSaslConnectionFailure needlessly wraps IOEs
> 
>
> Key: HADOOP-14037
> URL: https://issues.apache.org/jira/browse/HADOOP-14037
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14037.001.patch
>
>
> {{client.handleSaslConnectionFailure}} needlessly wraps IOEs, including
> SaslException as IOE, when SaslException is already an IOE.
> This complicates stack traces and hides the fact that a connect problem is 
> due to auth, not network



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14037) client.handleSaslConnectionFailure needlessly wraps IOEs

2017-02-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14037:
---

Assignee: Yuanbo Liu

> client.handleSaslConnectionFailure needlessly wraps IOEs
> 
>
> Key: HADOOP-14037
> URL: https://issues.apache.org/jira/browse/HADOOP-14037
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
>
> {{client.handleSaslConnectionFailure}} needlessly wraps IOEs, including
> SaslException as IOE, when SaslException is already an IOE.
> This complicates stack traces and hides the fact that a connect problem is 
> due to auth, not network



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869013#comment-15869013
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

The test failure is tracked by HADOOP-14030, so it is not related to my patch.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch, 
> HADOOP-14077.003.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Attachment: HADOOP-14077.003.patch

[~eyang] Thanks for your response.
The test failure seems unrelated.
Concerning the checkstyle failure: it says a Java method cannot exceed 150 
lines, so I refactored the method a bit.
Uploading v3 patch, please review it.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch, 
> HADOOP-14077.003.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865021#comment-15865021
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

[~eyang] Sorry to interrupt; would you mind reviewing the patch? Thanks in advance.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Attachment: HADOOP-14077.001.patch

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Attachment: HADOOP-14077.002.patch

Uploading v2 patch to address the findbugs issue.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HADOOP-14077:
---

 Summary: Improve the patch of HADOOP-13119
 Key: HADOOP-14077
 URL: https://issues.apache.org/jira/browse/HADOOP-14077
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu


For some links (such as "/jmx" and "/stack"), blocking access in the filter chain 
because of an impersonation issue is not friendly for users. For example, user 
"sam" is not allowed to be impersonated by user "knox", and the link "/jmx" 
doesn't need any user to do authorization by default; it only needs user "knox" 
to do authentication. In this case, it's not right to block the access in the 
SPNEGO filter. We intend to check the impersonation permission when the 
request's "getRemoteUser" method is used, so that such links ("/jmx", "/stack") 
would not be blocked by mistake.
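
A minimal sketch of that deferred check, assuming the standard servlet API and 
hadoop-common's {{ProxyUsers}}; the wrapper class and its wiring are 
illustrative, not the actual patch:
{code}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class DoAsRequestWrapper extends HttpServletRequestWrapper {
  private final String realUser;   // authenticated principal, e.g. "knox"
  private final String doAsUser;   // requested identity, e.g. "sam"

  public DoAsRequestWrapper(HttpServletRequest req,
                            String realUser, String doAsUser) {
    super(req);
    this.realUser = realUser;
    this.doAsUser = doAsUser;
  }

  @Override
  public String getRemoteUser() {
    if (doAsUser == null) {
      return realUser;                       // no impersonation requested
    }
    try {
      UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
          doAsUser, UserGroupInformation.createRemoteUser(realUser));
      // Fails unless hadoop.proxyuser.<realUser>.* permits the combination,
      // so "/jmx"-style pages that never call getRemoteUser stay reachable.
      ProxyUsers.authorize(proxyUgi, getRemoteAddr());
      return doAsUser;
    } catch (AuthorizationException e) {
      return null;                           // treat as unauthenticated
    }
  }
}
{code}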



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863150#comment-15863150
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

Also fixed some inappropriate handling of a null-pointer condition in the YARN 
app controller.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Status: Patch Available  (was: Open)

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch
>
>
> For some links(such as "/jmx, /stack"), blocking the links in filter chain 
> due to impersonation issue is not friendly for users. For example, user "sam" 
> is not allowed to be impersonated by user "knox", and the link "/jmx" doesn't 
> need any user to do authorization by default. It only needs user "knox" to do 
> authentication, in this case, it's not right to  block the access in SPNEGO 
> filter. We intend to check impersonation permission when the method 
> "getRemoteUser" of request is used, so that such kind of links("/jmx, 
> /stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863121#comment-15863121
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~aw] and [~eyang] Thanks for your response.
I will raise another JIRA to fix it. Thanks again!

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-10 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860906#comment-15860906
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

It would be great if a committer could help me revert my patch so that I can 
provide a new one for this issue. Thanks in advance!

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-09 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860844#comment-15860844
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 2/10/17 6:54 AM:
--

Reopening it. For some links (such as "/jmx" and "/stack"), blocking them in the 
filter chain because of an impersonation issue is not friendly for users. For 
example, user "sam" is not allowed to be impersonated by user "knox", the link 
"/jmx" doesn't need any user to do authorization by default, and it only needs 
user "knox" to do authentication; in this case, it's not right to block the 
access in the SPNEGO filter. We intend to verify the impersonation when the 
request's "getRemoteUser" method is used, so that such links would not be 
blocked by mistake. I will attach a new patch ASAP.


was (Author: yuanbo):
Reopen it. Because because for some links(such as "/jmx, /stack"), blocking the 
links in filter chain because of impersonation issue is not friendly for users. 
For example, user "sam" is not allowed to be impersonated by user "knox", the 
link "/jmx" doesn't need any user to do authorization by default, and it only 
needs user "knox" to do authentication, in this case, it's not right to  block 
the access in SPNEGO filter. We intend to verify the impersonation when the 
method "getRemoteUser" of request is used, so that such kind of links would not 
be blocked by mistake. I will attach a new patch ASAP.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reopened HADOOP-13119:
-

Reopening it, because for some links (such as "/jmx" and "/stack"), blocking them 
in the filter chain because of an impersonation issue is not friendly for users. 
For example, user "sam" is not allowed to be impersonated by user "knox", the 
link "/jmx" doesn't need any user to do authorization by default, and it only 
needs user "knox" to do authentication; in this case, it's not right to block 
the access in the SPNEGO filter. We intend to verify the impersonation when the 
request's "getRemoteUser" method is used, so that such links would not be 
blocked by mistake. I will attach a new patch ASAP.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-20 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.005.patch

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-19 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831324#comment-15831324
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~aw] Thanks for your response.
{quote}
I did a quick read through the...
{quote}
Sorry, the old discussion may be a little confusing here, so I'd like to 
clarify it first.
When security is enabled in Hadoop, Knox cannot access "/logs", and adding user 
"knox" to "dfs.cluster.administrators" seems to be the only way to let customers 
access the link through Knox. As you mentioned above, the users in this group 
should almost certainly not be proxiable accounts, and I agree. So we should 
extend the HTTP filter to support proxy users. That means when user "sam" wants 
to access "/logs" of secure Hadoop through Knox, we just need to add "sam" to 
"dfs.cluster.administrators" and let user "knox" impersonate "sam": user "knox" 
satisfies the authentication requirement, while user "sam" satisfies the 
authorization requirement. In the end, user "sam" can access the link "/logs".
{quote}
 allow anyone to run as any other user
{quote}
Absolutely not; that is not the purpose of this JIRA. I just want to extend the 
SPNEGO filter so that it supports impersonation.
{quote}
extremely limited circumstances why proxying might be necessary
{quote}
When I dug into it more, I found that the filter chains in different Hadoop 
components are quite variable, and of course we want to unify them. When it 
comes to YARN or the Job History Server, we want to use the SPNEGO filter 
instead of the delegation filter, which is clearly supported by Hadoop (the 
introduction is in the Hadoop docs); then proxying becomes quite common, 
because there are a lot of application users in YARN. From the security 
perspective, when Knox accesses YARN application links, we don't want to have 
only the single user "knox"; we need user "knox" to impersonate different 
users. So extending the SPNEGO filter is needed. A sketch of the flow is below.
Hope my reply answers your doubts. Any further comments will be appreciated. 
Thanks a lot!
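
To make the knox/sam flow above concrete, a minimal, self-contained sketch 
using hadoop-common's {{ProxyUsers}}; the property values and the address are 
examples only:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class KnoxProxySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Allow "knox" to impersonate "sam" from any host (example values only).
    conf.set("hadoop.proxyuser.knox.users", "sam");
    conf.set("hadoop.proxyuser.knox.hosts", "*");
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);

    UserGroupInformation knox = UserGroupInformation.createRemoteUser("knox");
    UserGroupInformation sam = UserGroupInformation.createProxyUser("sam", knox);
    try {
      // "knox" has already authenticated; this is the authorization check
      // that decides whether "sam" may be the effective user.
      ProxyUsers.authorize(sam, "10.0.0.1");
      System.out.println("knox may impersonate sam");
    } catch (AuthorizationException e) {
      System.out.println("denied: " + e.getMessage());
    }
  }
}
{code}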

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-17 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.005.patch

[~eyang] Thanks for reviewing. Uploading v5 patch.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-17 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.004.patch

Uploading v4 patch to address the failing test case.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-17 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.003.patch

Uploading v3 patch for this JIRA.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-16 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825531#comment-15825531
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~eyang] Thanks for your comments.
{quote}
there is still some tests to guard against breakage.
{quote}
Makes sense to me; I'll update my patch ASAP.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> screenshot-1.png
>
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2017-01-04 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800656#comment-15800656
 ] 

Yuanbo Liu commented on HADOOP-13933:
-

[~surendrasingh] FYI, you just uploaded another 003 patch, and the issue I 
mentioned before doesn't seem to be addressed.

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HADOOP-13933.003.patch, HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-26 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779422#comment-15779422
 ] 

Yuanbo Liu commented on HADOOP-13933:
-

[~surendrasingh] Thanks for the new patch.
There is a tiny mistake in your patch:
{code}
+| -getServiceState \ | Returns the state of all the services. |
{code}
After it's addressed, I'm +1 (non-binding) on your patch. Thanks for your work.


> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-24 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15775795#comment-15775795
 ] 

Yuanbo Liu commented on HADOOP-13933:
-

[~surendrasingh] Thanks for your new patch.
{code}
.put("-getAllServiceState",
new UsageInfo(null, "Returns the state of all the services"))
{code}
I'm wondering if we can use {{""}} instead of {{null}} so that we can get rid 
of some null-check code, but I'm cool with the {{null}} parameter if you want 
to keep it. A tiny sketch of the difference is below.
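
To illustrate, a tiny self-contained sketch; {{UsageInfo}} here is a stand-in 
for the helper in the patch, not the real class:
{code}
public class UsageInfoSketch {
  static final class UsageInfo {
    final String args;
    final String help;
    UsageInfo(String args, String help) { this.args = args; this.help = help; }
  }

  public static void main(String[] args) {
    // With "" there is no need for a null branch before formatting the usage.
    UsageInfo u = new UsageInfo("", "Returns the state of all the services");
    System.out.println(("-getAllServiceState " + u.args).trim() + ": " + u.help);
  }
}
{code}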

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-22 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769529#comment-15769529
 ] 

Yuanbo Liu edited comment on HADOOP-13933 at 12/22/16 9:00 AM:
---

[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
Here're my comments for your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
* Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
* The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.


was (Author: yuanbo):
[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
* Here're my comments for your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
* Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-22 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769529#comment-15769529
 ] 

Yuanbo Liu edited comment on HADOOP-13933 at 12/22/16 9:00 AM:
---

[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
Here're my comments for your patch.
* Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
* The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}



was (Author: yuanbo):
[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
Here're my comments for your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
* Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
* The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-22 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769529#comment-15769529
 ] 

Yuanbo Liu edited comment on HADOOP-13933 at 12/22/16 8:59 AM:
---

[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
* Here're my comments for your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
* Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.


was (Author: yuanbo):
[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command, it will be very useful for monitor scripts and 
auto test-case scripts.
Here're my comments for your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
The exception message in {{getAllServiceState}} is ignored, I think it's good 
to expose it.

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin command to get HA state of all the namenodes

2016-12-22 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769529#comment-15769529
 ] 

Yuanbo Liu commented on HADOOP-13933:
-

[~surendrasingh] Thanks for filing this JIRA.
I'm +1 for adding this command; it will be very useful for monitoring scripts 
and automated test scripts.
Here are my comments on your patch.
{code}
HAServiceProtocol proto = target.getProxy(getConf(), 5000);
{code}
Would you mind using "rpcTimeoutForChecks" instead of "5000" here?
{code}
catch (IOException e) {
out.println(String.format("%-50s %-10s", target.getAddress(),
"Failed to connect."));
  }
{code}
The exception message in {{getAllServiceState}} is ignored; I think it's good 
to expose it.
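
To show the second suggestion in runnable form, a minimal sketch; the names 
are illustrative, not the committed HAAdmin code:
{code}
import java.io.IOException;
import java.io.PrintStream;

public class StateReportSketch {
  interface StateSupplier { String state() throws IOException; }

  static void printState(PrintStream out, String address, StateSupplier s) {
    try {
      out.println(String.format("%-50s %-10s", address, s.state()));
    } catch (IOException e) {
      // Surface the cause so operators can tell a timeout from a refusal.
      out.println(String.format("%-50s %-10s", address,
          "Failed to connect: " + e.getMessage()));
    }
  }

  public static void main(String[] args) {
    printState(System.out, "nn1.example.com:8020", () -> "active");
    printState(System.out, "nn2.example.com:8020", () -> {
      throw new IOException("connection refused");
    });
  }
}
{code}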

> Add haadmin command to get HA state of all the namenodes
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-14 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13890:

Attachment: test_failure_1.txt

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, HADOOP-13890.04.patch, 
> test-failure.txt, test_failure_1.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/
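To make the description above concrete, a minimal sketch (illustrative only; 
EXAMPLE.COM stands in for the test KDC's realm):
{code}
// Incomplete SPNEGO principal used by the failing tests; it relies on the
// default realm being appended at authentication time:
String incompletePrincipal = "HTTP/localhost";

// Complete principal with an explicit realm, as this ticket proposes:
String completePrincipal = "HTTP/localhost@EXAMPLE.COM";
{code}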



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-14 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747616#comment-15747616
 ] 

Yuanbo Liu commented on HADOOP-13890:
-

[~xyao] Thanks for your new patch and explanation.
My Java version is Oracle 1.8; the details are here:
{code}
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

ll |grep java_sdk_1.8.0
lrwxrwxrwx. 1 root root 58 Sep  6 16:06 java_sdk_1.8.0 -> 
/usr/lib/jvm/java-1.8.0-oracle-1.8.0.101-1jpp.1.el7.x86_64
{code}
The exception from the IBM JDK does not seem to be the "Invalid SPNEGO sequence" 
exception.
FYI, I have attached a new log file: test_failure_1.txt.
Since [~jzhuge], you, and Apache Jenkins have all reported that these tests 
pass, I strongly suspect the failures are related to my laptop environment. 
Please go ahead if you're being held up by my comment.

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, HADOOP-13890.04.patch, 
> test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747144#comment-15747144
 ] 

Yuanbo Liu commented on HADOOP-13890:
-

[~xyao] Thanks for your response.
I've attached my test failure information; this is my git status output:
{code}
# Changes not staged for commit:
#   (use "git add ..." to update what will be committed)
#   (use "git checkout -- ..." to discard changes in working directory)
#
#   modified:   
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
#   modified:   
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
#   modified:   
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
#
no changes added to commit (use "git add" and/or "git commit -a")
{code}


> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13890:

Attachment: test-failure.txt

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747009#comment-15747009
 ] 

Yuanbo Liu commented on HADOOP-13890:
-

[~xyao] Thanks for working on this JIRA.
After applying your v3 patch to my local trunk branch, the "Invalid SPNEGO 
sequence" exception still exists in {{TestWebDelegationToken}}.
Have I missed something?

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13891) KerberosName#KerberosName cannot parse principal without realm

2016-12-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu resolved HADOOP-13891.
-
Resolution: Resolved

> KerberosName#KerberosName cannot parse principal without realm
> --
>
> Key: HADOOP-13891
> URL: https://issues.apache.org/jira/browse/HADOOP-13891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Xiaoyu Yao
> Attachments: testKerberosName.patch
>
>
> Given a principal string like "HTTP/localhost", the returned KerberosName 
> object contains a null hostname and a null realm name. The service name is 
> incorrectly parsed as the whole string "HTTP/localhost".
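A minimal sketch (illustrative only, not from the attached patch) reproducing 
the reported behavior with 
{{org.apache.hadoop.security.authentication.util.KerberosName}}:
{code}
KerberosName kn = new KerberosName("HTTP/localhost");
kn.getServiceName(); // returns the whole string "HTTP/localhost" (expected "HTTP")
kn.getHostName();    // returns null (expected "localhost")
kn.getRealm();       // returns null
{code}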



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13891) KerberosName#KerberosName cannot parse principal without realm

2016-12-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743917#comment-15743917
 ] 

Yuanbo Liu commented on HADOOP-13891:
-

[~xyao] I have gone through your patch in HADOOP-13890, and it seems better to 
address KerberosName's issue in that JIRA.
I will mark this JIRA as resolved shortly if you don't mind.
Looking forward to your patch in HADOOP-13890, since the test failures occur 
quite often.

> KerberosName#KerberosName cannot parse principal without realm
> --
>
> Key: HADOOP-13891
> URL: https://issues.apache.org/jira/browse/HADOOP-13891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Xiaoyu Yao
> Attachments: testKerberosName.patch
>
>
> Given a principal string like "HTTP/localhost", the returned KerberosName 
> object contains a null hostname and a null realm name. The service name is 
> incorrectly parsed as the whole string "HTTP/localhost".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-21 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685889#comment-15685889
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~eyang]/[~xiaochen]/[~liuml07] Sorry to interrupt. Would you mind taking a 
look at this issue and giving some thoughts? Thanks in advance!

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.002.patch

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15662636#comment-15662636
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 11/14/16 3:24 AM:
---

Deleting {{HttpServer2#initSpnego}} will cause some findbugs issues and test 
failures, so it's not worth doing in this JIRA. But I still recommend deleting 
{{HttpServer2#initSpnego}}; it's misleading and not working. Maybe I 
will file another JIRA to discuss it.

Uploaded a v2 patch to address the code style issue.


was (Author: yuanbo):
Deleting {{HttpServer2#initSpnego}} will cause some findbugs issues and test 
failures, so it's not worth doing in this JIRA. But I still recommend deleting 
{{HttpServer2#initSpnego}}; it's misleading and not working.

Uploaded a v2 patch to address the code style issue.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15662636#comment-15662636
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

Deleting {{HttpServer2#initSpnego}} will cause some findbugs issues and test 
failures, so it's not worth doing in this JIRA. But I still recommend deleting 
{{HttpServer2#initSpnego}}; it's misleading and not working.

Uploaded a v2 patch to address the code style issue.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-08 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Status: Patch Available  (was: Reopened)

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-08 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.001.patch

Uploaded the first patch for this issue. Any comments are welcome.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-06 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642960#comment-15642960
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 11/7/16 3:18 AM:
--

[~Wancy]
Thanks for your response.
I have two concerns about using the delegation token initializer:
* The delegation filter and the SPNEGO filter are different; using the delegation 
filter, which supports proxy users, will change the URL rules and the way you 
request those URLs. I believe it will bring a lot of code changes in Knox, since 
the current code is based on the SPNEGO filter, right?
* The delegation filter and the SPNEGO filter cannot coexist. If we replace the 
SPNEGO initializer with the delegation initializer, it will bring incompatibility 
issues in some downstream components because of this piece of code:
{code}
if (initializer.getName().equals(
    AuthenticationFilterInitializer.class.getName())) {
  hasHadoopAuthFilterInitializer = true;
}
{code}

Thus, I'd prefer extending the SPNEGO filter to make it support proxy users.
  



was (Author: yuanbo):
[~Wancy]
Thanks for your response.
I have two concerns about using the delegation token initializer:
* The delegation filter and the SPNEGO filter are different; using the delegation 
filter, which supports proxy users, will change the URL rules and the way you 
request those URLs. I believe it will bring a lot of code changes in Knox, since 
the current code is based on the SPNEGO filter, right?
* The delegation filter and the SPNEGO filter cannot coexist. If we replace the 
SPNEGO initializer with the delegation initializer, it will bring incompatibility 
issues in some downstream components because of this piece of code:
{code}
if (initializer.getName().equals(
    AuthenticationFilterInitializer.class.getName())) {
  hasHadoopAuthFilterInitializer = true;
}
{code}

Thus, I'd prefer extending the SPNEGO filter to make it support proxy users.
  


> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-06 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642960#comment-15642960
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~Wancy]
Thanks for your response.
I have two concerns about using the delegation token initializer:
* The delegation filter and the SPNEGO filter are different; using the delegation 
filter, which supports proxy users, will change the URL rules and the way you 
request those URLs. I believe it will bring a lot of code changes in Knox, since 
the current code is based on the SPNEGO filter, right?
* The delegation filter and the SPNEGO filter cannot coexist. If we replace the 
SPNEGO initializer with the delegation initializer, it will bring incompatibility 
issues in some downstream components because of this piece of code:
{code}
if (initializer.getName().equals(
    AuthenticationFilterInitializer.class.getName())) {
  hasHadoopAuthFilterInitializer = true;
}
{code}

Thus, I'd prefer extending the SPNEGO filter to make it support proxy users.
  


> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled, and
> access http://localhost:50070/logs/
> This gets 403 authorization errors; only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that authentication passes, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-03 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635132#comment-15635132
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 11/4/16 4:00 AM:
--

[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.
* Deleting the redundant NoCacheFilter (see the pic) from the WebAppContext and 
adding NoCacheFilter into the LogContext's filter chain.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.


was (Author: yuanbo):
[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.
* Deleting the redundant NoCacheFilter (see the pic) from the WebAppContext and 
adding NoCacheFilter into the LogContext's filter chain.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.

[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-03 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635132#comment-15635132
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 11/4/16 3:59 AM:
--

[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.
* Deleting the redundant NoCacheFilter (see the pic) from the WebAppContext and 
adding NoCacheFilter into the LogContext's filter chain.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.


was (Author: yuanbo):
[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.
* Deleting the redundant filter (NoCacheFilter) from the WebAppContext and 
adding NoCacheFilter into the LogContext's filter chain.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.

[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-03 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635132#comment-15635132
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 11/4/16 3:58 AM:
--

[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.
* Deleting the redundant filter (NoCacheFilter) from the WebAppContext and 
adding NoCacheFilter into the LogContext's filter chain.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.


was (Author: yuanbo):
[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some filters; 
here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the SPNEGO 
filter (the authentication filter in the filter chain is a SPNEGO filter).
{{HttpServer2#initSpnego}} is confusing, because this method does not work and 
is also not the way the SPNEGO filter is added. The right settings for enabling 
SPNEGO are:
{code}
hadoop.http.authentication.simple.anonymous.allowed   false
hadoop.http.authentication.signature.secret.file      /etc/security/http_secret
hadoop.http.authentication.type                       kerberos
hadoop.http.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal         HTTP/_h...@example.com
hadoop.http.filter.initializers                       org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain              EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by default, 
and authorization is controlled by the property *dfs.cluster.administrators*. 
The user knox succeeds in authentication but fails in authorization. Adding the 
user knox to dfs.cluster.administrators is the expected behavior, because this 
configuration controls who can access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize that I mixed up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} and making the 
SPNEGO filter support proxy users by default.

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you contribute 
a lot of security filters in Hadoop.
If you and people in the watching list have any thoughts about this JIRA, 
please let me know. Thanks in advance.

[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-03 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635132#comment-15635132
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

[~jeffreyr97]/[~eyang]
I've read through the implementation of {{HttpServer2.java}} and some 
filters; here is my investigation result.
!screenshot-1.png!
From the picture, we can see that /logs access is also controlled by the 
SPNEGO filter (the authentication filter in the filter chain is a SPNEGO 
filter).
{{HttpServer2#initSpnego}} is confusing because this method does not work and 
is not how the SPNEGO filter gets added. The right way to enable SPNEGO is:
{code}
hadoop.http.authentication.simple.anonymous.allowed  false
hadoop.http.authentication.signature.secret.file     /etc/security/http_secret
hadoop.http.authentication.type                      kerberos
hadoop.http.authentication.kerberos.keytab           /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal        HTTP/_h...@example.com
hadoop.http.filter.initializers                      org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain             EXAMPLE.COM
{code}
The SPNEGO filter is added by the method {{HttpServer2#addFilter}}.
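For reference, a filter initializer wires a filter into the HTTP server 
through {{FilterContainer#addFilter}}; here is a minimal sketch of that 
mechanism (the parameter values are illustrative, and this is not the exact 
{{AuthenticationFilterInitializer}} code):
{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;

public class SketchAuthFilterInitializer extends FilterInitializer {
  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    // Collect the hadoop.http.authentication.* settings (prefix stripped)
    // and hand them to the filter as init parameters.
    Map<String, String> params = new HashMap<String, String>();
    params.put("type", "kerberos");
    params.put("kerberos.principal", "HTTP/_HOST@EXAMPLE.COM");
    container.addFilter("authentication",
        "org.apache.hadoop.security.authentication.server.AuthenticationFilter",
        params);
  }
}
{code}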

[~jeffreyr97] The reason why you cannot access {{/logs}} is that {{/logs}} 
doesn't only require authentication but also requires authorization by 
default, and authorization is controlled by the property 
*dfs.cluster.administrators*. The user knox succeeds in authentication but 
fails in authorization. Adding the user knox to dfs.cluster.administrators is 
an expected behavior because this configuration is used to control who can 
access the default servlets.
On the other hand, I love the idea of making the SPNEGO filter support proxy 
users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should 
support it. By the way, I need to apologize for mixing up the concepts of 
proxy user and delegation filter in the internal discussion; they're quite 
different.

In conclusion, I propose:
* Erasing {{HttpServer2#initSpnego}}. The code is useless and misleading.
* Extending {{org.apache.hadoop.security.AuthenticationFilter}} so that the 
SPNEGO filter supports proxy users by default (see the configuration sketch 
below).
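For context, proxy-user rules elsewhere in Hadoop are declared with the 
standard {{hadoop.proxyuser.*}} properties; a minimal sketch for a Knox-style 
proxy (host and group values are illustrative, not taken from this JIRA):
{code}
hadoop.proxyuser.knox.hosts   knox-gateway.example.com
hadoop.proxyuser.knox.groups  *
{code}
Presumably the extended filter would honor the same rules when it sees a doAs 
parameter on the request.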

[~zjshen]/[~atm]/[~daryn]/[~vinodkv], I tag you guys here since you 
contributed a lot of the security filters in Hadoop.
If you and the people on the watch list have any thoughts about this JIRA, 
please let me know. Thanks in advance.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: screenshot-1.png
>
>
> User Hadoop in secure mode.
> Log in as a KDC user, kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> Would expect, as a user, to be able to access the web UI's logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links the user is unable to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. note that we pass authentication, so the issue is authorization to 
> /logs/
> Suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secured paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-03 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: screenshot-1.png

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: screenshot-1.png
>
>
> User Hadoop in secure mode.
> Log in as a KDC user, kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> Would expect, as a user, to be able to access the web UI's logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links the user is unable to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. note that we pass authentication, so the issue is authorization to 
> /logs/
> Suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secured paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624576#comment-15624576
 ] 

Yuanbo Liu commented on HADOOP-13773:
-

[~ferhui] Thanks for filing this jira.
{quote}
suggest uploading a patch file instead of github pull requests
{quote}
Agree with [~raviprak], please upload your patch.

A small suggestion about your code change:
{code}
if [ "$HADOOP_HEAPSIZE" == "" ];
{code}
please use "=" instead of "==" here; "==" is not defined by POSIX.
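For illustration, the portable comparison uses a single "=" (or "!="); a 
sketch of the surrounding bin/hadoop logic as I understand it (the 1000m 
default is illustrative):
{code}
# POSIX sh defines only "=" / "!=" for string equality inside [ ]
JAVA_HEAP_MAX=-Xmx1000m
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx${HADOOP_HEAPSIZE}m"
fi
{code}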

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>
> in conf/hadoop-env.sh:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM heap argument 
> does not take effect. In bin/hadoop I see:
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not work.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java 
> process is 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes 
> effect and -Xmx1024m is ignored.
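The last-one-wins behavior of duplicate -Xmx flags can be confirmed with a 
stock HotSpot JVM (illustrative; the PrintFlagsFinal output format varies by 
JDK version):
{code}
$ java -Xmx1024m -Xmx512m -XX:+PrintFlagsFinal -version | grep MaxHeapSize
# MaxHeapSize reflects the later -Xmx512m (536870912 bytes), not -Xmx1024m
{code}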



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614071#comment-15614071
 ] 

Yuanbo Liu commented on HADOOP-13765:
-

LGTM. [~byh0831] Thanks for filing this jira.
[~ste...@apache.org] I also checked the behavior of {{FTPFileSystem}}; it 
throws a runtime exception when an error occurs.
I'm not sure which behavior is more reasonable; looking forward to your 
thoughts.

> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
> Attachments: HADOOP-13765.001.patch
>
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but then hit an IOE in the finally 
> block, we will return a null result.
> Maybe we can simply ignore this IOE and just return the result we already 
> have.
> Related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
> ChannelSftp channel = null;
> try {
>   channel = connect();
>   Path homeDir = new Path(channel.pwd());
>   return homeDir;
> } catch (Exception ioe) {
>   return null;
> } finally {
>   try {
> disconnect(channel);
>   } catch (IOException ioe) {
> //Maybe we can just ignore this IOE and do not return null here.
> return null;
>   }
> }
>   }
> {code}
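A minimal sketch of the suggested change (names follow the quoted snippet; 
the disconnect failure no longer clobbers the already-resolved result):
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
    ChannelSftp channel = null;
    try {
      channel = connect();
      // This return value survives the finally block as long as the
      // finally block itself does not return.
      return new Path(channel.pwd());
    } catch (Exception e) {
      return null;
    } finally {
      try {
        disconnect(channel);
      } catch (IOException ioe) {
        // Ignore: the home directory has already been resolved.
      }
    }
  }
{code}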



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2016-10-26 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-13759:
---

Assignee: Yuanbo Liu

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-21 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597000#comment-15597000
 ] 

Yuanbo Liu commented on HADOOP-12082:
-

[~hgadre] Thanks for your response.
{quote}
The jira addresses the requirement where more tha
{quote}
Now I understand what issue this JIRA addresses. The delegation filter I'm 
looking for is similar to your idea: the first auth is SPNEGO auth, and the 
second is proxy auth.

{quote}
The authentication handler is configured as part of configuring Hadoop 
AuthenticationFilter. This is typically done via web.xml
{quote}
I have gone through the Oozie configuration and Configuration.md again. I 
guess it's hard for the NameNode's or ResourceManager's HTTP server to take 
advantage of your work, since that HTTP server is a thread inside the 
NameNode or ResourceManager and the webapp is packaged into a jar; web.xml 
cannot be changed unless the jar is replaced.
So I think it's designed for third-party projects which depend on 
Hadoop-Auth, right?

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8-002.patch, HADOOP-12082-branch-2.8.patch, 
> HADOOP-12082-branch-2.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that selects the 
> authentication mechanism based on the User-Agent HTTP header, which does 
> not conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based 

[jira] [Comment Edited] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593683#comment-15593683
 ] 

Yuanbo Liu edited comment on HADOOP-12082 at 10/21/16 3:28 AM:
---

[~hgadre] Thanks for your work.
This JIRA has been open a long time, and I haven't caught up on much of its 
context.
When you said LDAP-based authentication, did you mean an authentication 
filter which supports delegation?
If so, I'm looking forward to your work, because it would help proxy servers 
such as Knox deal with more HTTP requests which require a proxy user.

I'm also confused about Configuration.md. I was anticipating some description 
of the configuration work in core-site/hdfs-site, but there wasn't any. Could 
you elaborate on how to configure a real Hadoop cluster so that users can use 
your new handlers {{LdapAuthenticationHandler}} and 
{{MultiSchemeAuthenticationHandler}}? I can't work out the steps from the 
test cases.

Thanks again for your time; please let me know your thoughts.


was (Author: yuanbo):
[~hgadre] Thanks for your work.
This JIRA has been open a long time, and I haven't caught up on much of its 
context.
When you said LDAP-based authentication, did you mean an authentication 
filter which supports delegation?
If so, I'm looking forward to your work, because it would help proxy servers 
such as Knox deal with more HTTP requests which require a proxy user.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that selects the 
> authentication mechanism based on the User-Agent HTTP header, which does 
> not conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use 

[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593683#comment-15593683
 ] 

Yuanbo Liu commented on HADOOP-12082:
-

[~hgadre] Thanks for your work.
This JIRA has been open a long time, and I haven't caught up on much of its 
context.
When you said LDAP-based authentication, did you mean an authentication 
filter which supports delegation?
If so, I'm looking forward to your work, because it would help proxy servers 
such as Knox deal with more HTTP requests which require a proxy user.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that selects the 
> authentication mechanism based on the User-Agent HTTP header, which does 
> not conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
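For illustration, a server supporting both schemes would answer an 
unauthenticated request roughly like this (a sketch per RFC 2616/4559, not 
output from any patch here):
{code}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate
WWW-Authenticate: Basic realm="LdapRealm"
Content-Length: 0
{code}
A Kerberos-capable client then retries with "Authorization: Negotiate 
<token>", while a password-based client retries with "Authorization: Basic 
<base64 credentials>".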



--
This 

[jira] [Comment Edited] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593683#comment-15593683
 ] 

Yuanbo Liu edited comment on HADOOP-12082 at 10/21/16 2:06 AM:
---

[~hgadre] Thanks for your work.
This JIRA has been open a long time, and I haven't caught up on much of its 
context.
When you said LDAP-based authentication, did you mean an authentication 
filter which supports delegation?
If so, I'm looking forward to your work, because it would help proxy servers 
such as Knox deal with more HTTP requests which require a proxy user.


was (Author: yuanbo):
[~hgadre] Thanks for your work.
This JIRA has been open a long time, and I haven't caught up on much of its 
context.
When you said LDAP-based authentication, did you mean an authentication 
filter which supports delegation?
If so, I'm looking forward to your work, because it would help proxy servers 
such as Knox deal with more HTTP requests which require a proxy user.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that selects the 
> authentication mechanism based on the User-Agent HTTP header, which does 
> not conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache 

[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-10-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Summary: Web UI error accessing links which need authorization when 
Kerberos  (was: Web UI authorization error accessing /logs/ when Kerberos)

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
>
> User Hadoop on secure mode.
> login as kdc user, kinit.
> start firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> only hdfs user could access logs.
> Would expect as a user to be able to web interface logs link.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> so:
> 1. either don't show links if hdfs user  is able to access.
> 2. provide mechanism to add users to web application realm.
> 3. note that we are pass authentication so the issue is authorization to 
> /logs/
> suspect that /logs/ path is secure in webdescriptor so suspect users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-17 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582521#comment-15582521
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

[~eyang] Really sorry for not pointing out that the trunk patch and the 
branch-2.8/branch-2 patches are slightly different because of 
{{MetricsServlet.java}}. My patches in the attachments contain some changes 
to {{MetricsServlet.java}}. I hope my mistake won't bother you too much!

[~brahmareddy] Thanks a lot for the reminder!

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579120#comment-15579120
 ] 

Yuanbo Liu edited comment on HADOOP-13707 at 10/16/16 2:06 AM:
---

[~brahmareddy] I have no idea how to re-trigger the Jenkins job. I was 
using "Resume Progress" -> "Submit Patch", but it didn't work. It would be 
better if the dashboard contained something like a "Rerun Jenkins" button.


was (Author: yuanbo):
[~brahmareddy] I have no idea how to re-trigger the Jenkins job. I was 
using "Resume Progress" -> "Submit Patch", but it didn't work. It would be 
better if the dashboard contained something like a "Rerun Jenkins" button.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579120#comment-15579120
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

[~brahmareddy] I have no idea how to re-trigger the Jenkins job. I was 
using "Resume Progress" -> "Submit Patch", but it didn't work. It would be 
better if the dashboard contained something like a "Rerun Jenkins" button.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15578151#comment-15578151
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

[~eyang] Thanks for your commit.
[~brahmareddy] Thanks for your review.
I've prepared the branch-2 and branch-2.8 patches for this issue. Please see 
the attachments and review them. Thanks in advance!

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Attachment: HADOOP-13707-branch-2.8.patch

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Attachment: HADOOP-13707-branch-2.patch

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13718) There is no filterInitializer for initializing DelegationTokenAuthenticationFilter in Hadoop common

2016-10-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu resolved HADOOP-13718.
-
Resolution: Duplicate

> There is no filterInitializer for initializing 
> DelegationTokenAuthenticationFilter in Hadoop common
> ---
>
> Key: HADOOP-13718
> URL: https://issues.apache.org/jira/browse/HADOOP-13718
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: There is no filterInitializer for initializing 
> DelegationTokenFilter in Hadoop common. YARN implements its own 
> FilterInitializer, RMAuthenticationFilterInitializer, to add a 
> DelegationTokenFilter to support proxy-user and delegation-token 
> authentication.
>Reporter: Shi Wang
>
> There is no FilterInitializer for initializing a DelegationTokenFilter in 
> Hadoop common. 
> YARN implements its own FilterInitializer, 
> RMAuthenticationFilterInitializer, to add a DelegationTokenFilter to 
> support proxy-user and delegation-token authentication.
> It would be useful to use DelegationTokenAuthenticationFilter for Hadoop 
> web console authentication. Especially after KNOX-565, all quick links 
> could go through Knox, and we'll need DelegationTokenAuthenticationFilter 
> to do the "getDoAsUser" job and call the KerberosAuthenticationHandler for 
> us. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13718) There is no filterInitializer for initializing DelegationTokenAuthenticationFilter in Hadoop common

2016-10-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570441#comment-15570441
 ] 

Yuanbo Liu commented on HADOOP-13718:
-

[~Wancy] Thanks for filing this issue. I'd like to fix it in HADOOP-13119; 
Eric didn't want to lose the discussion there by filing another JIRA, so I'm 
going to mark this one as a duplicate. If you have any suggestions, please 
let me know. Thanks a lot!

> There is no filterInitializer for initializing 
> DelegationTokenAuthenticationFilter in Hadoop common
> ---
>
> Key: HADOOP-13718
> URL: https://issues.apache.org/jira/browse/HADOOP-13718
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: There is no filterInitializer for initializing 
> DelegationTokenFilter in Hadoop common. YARN implements its own 
> FilterInitializer, RMAuthenticationFilterInitializer, to add a 
> DelegationTokenFilter to support proxy-user and delegation-token 
> authentication.
>Reporter: Shi Wang
>
> There is no FilterInitializer for initializing a DelegationTokenFilter in 
> Hadoop common. 
> YARN implements its own FilterInitializer, 
> RMAuthenticationFilterInitializer, to add a DelegationTokenFilter to 
> support proxy-user and delegation-token authentication.
> It would be useful to use DelegationTokenAuthenticationFilter for Hadoop 
> web console authentication. Especially after KNOX-565, all quick links 
> could go through Knox, and we'll need DelegationTokenAuthenticationFilter 
> to do the "getDoAsUser" job and call the KerberosAuthenticationHandler for 
> us. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Attachment: HADOOP-13707.004.patch

Uploaded v4 patch to address the checkstyle issue.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch, HADOOP-13707.002.patch, 
> HADOOP-13707.003.patch, HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568870#comment-15568870
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

Adding the SPNEGO filter is part of the SPNEGO-enabling step.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch, HADOOP-13707.002.patch, 
> HADOOP-13707.003.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568753#comment-15568753
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

[~jojochuang] Thanks for your comments.
{quote}
I feel like a correct approach is to add a SPNEGO filter...
{quote}
Yes, you're right; actually I'm ready to add a SPNEGO filter with a 
delegation feature in HADOOP-13119. But as I said, enabling Kerberos and 
SPNEGO are two separate steps. If users enable Kerberos without SPNEGO, the 
HTTP server of the cluster is in a non-secure environment, and in that 
situation the static user's authorization shouldn't be checked.
In the very first installation of Hadoop, the HTTP server is likewise in a 
non-secure environment without any authorization check. So I think the 
behavior here should be consistent, and the "dr.who" issue should be avoided.
Thanks again for your comments; looking forward to your response. :)
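A minimal sketch of the check being described (hypothetical servlet-side 
code, not the committed patch):
{code}
// Only enforce the admin ACL when some authentication filter actually
// authenticated the request; otherwise the requester is the static
// dr.who user and the 403 is spurious.
if (request.getAuthType() != null
    && !HttpServer2.hasAdministratorAccess(getServletContext(),
        request, response)) {
  return; // hasAdministratorAccess has already sent the 403
}
{code}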

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch, HADOOP-13707.002.patch, 
> HADOOP-13707.003.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} means to get the authorization scheme of this request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


