[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506723#comment-16506723
 ] 

Aaron Fabbri commented on HADOOP-15525:
---

Actually, refreshing my memory here, [~ste...@apache.org] has already done most 
of this--though I'm proposing a more restrictive example.

The test case 
{{org.apache.hadoop.fs.s3a.auth.ITestAssumeRole#testRestrictedWriteSubdir}} 
([link|https://github.com/apache/hadoop/blob/fba1c42adc1c8ae57951e1865ec2ab05c8707bdf/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java#L389])
 is a good example.

I'm wondering if adding explicit create and delete permissions on parent 
directory keys (e.g. allowing deleteObject on "/a/", "/a/b/", and "/a/b/c/") 
would avoid the "cannot update directory markers" problem. Since there is no 
wildcard after these keys, they should only grant access to manipulate the 
actual empty directory markers, right?

Will try to play around with that when I get back from break.
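
That idea might look something like the following IAM policy fragment (a sketch 
only; the bucket name and paths are illustrative and I haven't tested this). 
Because each Resource ARN names an exact marker key with no trailing wildcard, 
the statement should grant create/delete on just those zero-byte directory 
markers and nothing beneath them:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DirMarkersOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket/a/",
        "arn:aws:s3:::example-bucket/a/b/",
        "arn:aws:s3:::example-bucket/a/b/c/"
      ]
    }
  ]
}
```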

> s3a: clarify / improve support for mixed ACL buckets
> 
>
> Key: HADOOP-15525
> URL: https://issues.apache.org/jira/browse/HADOOP-15525
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Major
>
> Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
> an S3 bucket.
> For example, assume Hadoop uses some IAM identity "hadoop", which they wish 
> to grant full permission to everything under the following path:
> s3a://bucket/a/b/c/hadoop-dir
> they don't want hadoop user to be able to read/list/delete anything outside 
> of the hadoop-dir "subdir"
> Problems: 
> To implement the "directory structure on flat key space" emulation logic we 
> use to present a Hadoop FS on top of a blob store, we need to create, delete, 
> and list ancestors of {{hadoop-dir}} (to maintain the invariants that (1) a 
> zero-byte object with a key ending in '/' exists iff an empty directory is 
> there, and (2) files cannot live beneath files, only directories).
> I'd like us to (1) document an example with IAM policies that achieves this 
> basic functionality, and consider (2) making improvements to make this 
> easier.
> We've discussed some of these issues before but I didn't see a dedicated JIRA.
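
For concreteness, the two invariants above can be modeled against a toy flat 
key space (a sorted map standing in for the bucket; class and method names are 
illustrative, and this is not S3A's actual code):

```java
import java.util.TreeMap;

// Toy model of the directory-emulation invariants on a flat key space.
// Invariant 1: a zero-byte object whose key ends in '/' exists iff that
// directory exists and is empty.
// Invariant 2: files live only beneath directories, never beneath files.
class FlatKeySpace {
    final TreeMap<String, byte[]> store = new TreeMap<>();

    void mkdirs(String dir) {
        store.put(dir + "/", new byte[0]); // zero-byte marker for the new dir
        deleteAncestorMarkers(dir);        // parents are no longer empty
    }

    void createFile(String file) {
        store.put(file, new byte[]{1});
        deleteAncestorMarkers(file);
    }

    // The step that needs deleteObject on "/a/", "/a/b/", "/a/b/c/", ...
    private void deleteAncestorMarkers(String path) {
        int slash;
        while ((slash = path.lastIndexOf('/')) > 0) {
            path = path.substring(0, slash);
            store.remove(path + "/");
        }
    }

    boolean isEmptyDir(String dir) {
        String marker = dir + "/";
        if (!store.containsKey(marker)) {
            return false;                  // no marker: absent or non-empty
        }
        String next = store.higherKey(marker);
        return next == null || !next.startsWith(marker);
    }
}
```

The point of the toy is that both mkdirs() and file creation must touch marker 
keys on every ancestor, which is exactly the access a subtree-scoped policy 
denies.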



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506712#comment-16506712
 ] 

Aaron Fabbri commented on HADOOP-15525:
---

The existing docs 
[here|https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/assumed_roles.html]
 give the basic permissions needed.  Will probably link to some additional 
examples from there.




[jira] [Commented] (HADOOP-15471) Hdfs recursive listing operation is very slow

2018-06-08 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506689#comment-16506689
 ] 

Jitendra Nath Pandey commented on HADOOP-15471:
---

[~ajaysachdev], please review the javac/findbugs issues and test failures. 
Please fix them if they are related to this patch.

> Hdfs recursive listing operation is very slow
> -
>
> Key: HADOOP-15471
> URL: https://issues.apache.org/jira/browse/HADOOP-15471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, HDFS-13398.002.patch, 
> HDFS-13398.003.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow on an 
> HCFS system. We have seen around 6 minutes for a 40K directory/file structure.
> The proposal is to use a multithreaded approach to speed up the recursive 
> list, du and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use Java Executor Service to improve performance 
> to run listing operation in multiple threads in parallel. This has 
> significantly reduced the time to 40 secs from 6 mins.
>  
>  
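
A minimal sketch of the ForkJoinPool approach described above, walking a local 
directory tree with java.nio.file purely for illustration (the actual patch 
applies the same idea to the Hadoop FileSystem API; names here are 
hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursive listing where each subdirectory is listed as a forked subtask.
class ParallelList {

    static class ListTask extends RecursiveTask<List<Path>> {
        private final Path dir;
        ListTask(Path dir) { this.dir = dir; }

        @Override
        protected List<Path> compute() {
            List<Path> results = new ArrayList<>();
            List<ListTask> subtasks = new ArrayList<>();
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path entry : stream) {
                    results.add(entry);
                    if (Files.isDirectory(entry)) {
                        ListTask task = new ListTask(entry);
                        task.fork();           // list subdirectory in parallel
                        subtasks.add(task);
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            for (ListTask task : subtasks) {
                results.addAll(task.join());   // gather children's listings
            }
            return results;
        }
    }

    static List<Path> listRecursive(Path root) {
        return ForkJoinPool.commonPool().invoke(new ListTask(root));
    }
}
```

The fork/join split is what turns the N sequential round trips of a deep tree 
into parallel ones, which is where the 6-minutes-to-40-seconds improvement 
would come from.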






[jira] [Updated] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-15525:
--
Description: 
Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
an S3 bucket.

For example, assume Hadoop uses some IAM identity "hadoop", which they wish to 
grant full permission to everything under the following path:

s3a://bucket/a/b/c/hadoop-dir

they don't want hadoop user to be able to read/list/delete anything outside of 
the hadoop-dir "subdir"

Problems: 

To implement the "directory structure on flat key space" emulation logic we use 
to present a Hadoop FS on top of a blob store, we need to create, delete, and 
list ancestors of {{hadoop-dir}} (to maintain the invariants that (1) a 
zero-byte object with a key ending in '/' exists iff an empty directory is 
there, and (2) files cannot live beneath files, only directories).

I'd like us to (1) document an example with IAM policies that achieves this 
basic functionality, and consider (2) making improvements to make this easier.

We've discussed some of these issues before but I didn't see a dedicated JIRA.

  was:
Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
an S3 bucket.

For example, assume Hadoop uses some IAM identity "hadoop", which they wish to 
grant full permission to everything under the following path:

s3a://bucket/a/b/c/hadoop-dir

they don't want hadoop user to be able to read/list/delete anything outside of 
the hadoop-dir "subdir"

Problems: 

To implement the "directory structure on flat key space" emulation logic we use 
to present a Hadoop FS on top of a blob store, we need to create / delete / 
list ancestors of {{hadoop-dir}}. (to maintain the invariants (1) zero-byte 
object with key ending in '/' exists iff empty directory is there and (2) files 
cannot live beneath files, only directories.)

I'd like us to either (1) document a workaround (example IAM ACLs) that gets 
this basic functionality, and/or (2) make improvements to make this less 
painful.

We've discussed some of these issues before but I didn't see a dedicated JIRA.





[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506673#comment-16506673
 ] 

Aaron Fabbri commented on HADOOP-15525:
---

Thanks [~poeppt]. We do have assume role support in S3A (HADOOP-15176). I'd 
like to add:
 # Documentation with IAM policy examples on how to achieve the scenario listed 
here.
 # Probably: some integration tests that confirm it works as expected–and keeps 
working in the future.
 # Along the way, any features we think we need to simplify usage, etc. can get 
new JIRAs.

This will greatly simplify things for end users who are trying to achieve this, 
because the way directories are emulated, and the implications for required 
permissions, are not obvious.




[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506669#comment-16506669
 ] 

Eric Yang commented on HADOOP-15518:


There seems to be a problem when multiple filters extend AuthenticationFilter: 
the token casts are incompatible.

Browser shows this error message:
{code}
HTTP ERROR 500

Problem accessing /proxy/application_1528498597648_0001/. Reason:

Server Error
Caused by:

java.lang.ClassCastException: 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter$1$1
 cannot be cast to 
org.apache.hadoop.security.authentication.server.AuthenticationToken
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:250)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:597)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:649)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:597)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1608)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
{code}

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: 

[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Thomas Poepping (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506657#comment-16506657
 ] 

Thomas Poepping commented on HADOOP-15525:
--

[~fabbri] FWIW AWS IAM can do this with IAM policies.


https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/


Which is to say, an IAM policy attached to (probably) an IAM role can restrict 
read/list/delete access to a specific keyspace in S3.

The solution for this in AWS EMR is to provide a mapping in configuration that 
EmrFS (EMR's internal Hadoop-compatible S3 filesystem) uses to assume those 
roles to get different credentials before making requests to S3.


https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-emrfs-iam-roles.html


On EMR, this is a little simpler because the service controls provisioning of 
the cluster. It is likely to be a more ambiguous problem to solve in open 
source, where anyone can deploy it anywhere.

I was responsible for the AWS EMR solution to this problem, so I would be happy 
to involve myself as much as I'm allowed in the preparation of this feature.
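
Following the pattern in that blog post, a subtree-restricted policy might look 
like the sketch below (the bucket name and prefix are illustrative, not from a 
tested configuration). Note that, as this issue describes, a policy scoped like 
this leaves S3A unable to maintain directory markers on the ancestors of the 
allowed prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "StringLike": {"s3:prefix": ["a/b/c/hadoop-dir/*"]}
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-bucket/a/b/c/hadoop-dir/*"
    }
  ]
}
```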




[jira] [Commented] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed

2018-06-08 Thread John Zhuge (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506606#comment-16506606
 ] 

John Zhuge commented on HADOOP-14435:
-

Saw it on 3.0.3 (RC0).

> TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
> --
>
> Key: HADOOP-14435
> URL: https://issues.apache.org/jira/browse/HADOOP-14435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/adl
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> Saw the following assertion failure in branch-2 and trunk:
> {noformat}
> Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
>   Time elapsed: 0.71 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<461> but was:<456>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:219)
>   at junit.framework.Assert.assertEquals(Assert.java:226)
>   at junit.framework.TestCase.assertEquals(TestCase.java:392)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59)
> Results :
> Failed tests:
>   
> TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242
>  expected:<461> but was:<456>
> {noformat}






[jira] [Updated] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed

2018-06-08 Thread John Zhuge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14435:

Issue Type: Bug  (was: Task)




[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506574#comment-16506574
 ] 

Aaron Fabbri commented on HADOOP-15525:
---

Assigning to me for now. I'd like to write a doc here to describe an example 
with actual IAM policies so we can talk concretely about it.  Coincidentally, 
I'm about to go on vacation for two weeks but will try to post something when I 
get back. Meanwhile, comments welcomed.




[jira] [Assigned] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-15525:
-

Assignee: Aaron Fabbri




[jira] [Updated] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-15525:
--
Description: 
Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
an S3 bucket.

For example, assume Hadoop uses some IAM identity "hadoop", which they wish to 
grant full permission to everything under the following path:

s3a://bucket/a/b/c/hadoop-dir

they don't want hadoop user to be able to read/list/delete anything outside of 
the hadoop-dir "subdir"

Problems: 

To implement the "directory structure on flat key space" emulation logic we use 
to present a Hadoop FS on top of a blob store, we need to create / delete / 
list ancestors of {{hadoop-dir}}. (to maintain the invariants (1) zero-byte 
object with key ending in '/' exists iff empty directory is there and (2) files 
cannot live beneath files, only directories.)

I'd like us to either (1) document a workaround (example IAM ACLs) that gets 
this basic functionality, and/or (2) make improvements to make this less 
painful.

We've discussed some of these issues before but I didn't see a dedicated JIRA.

  was:
Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
an S3 bucket.

For example, assume Hadoop uses some IAM identity "hadoop", which they wish to 
grant full permission to everything under the following path:

s3a://bucket/a/b/c/hadoop-dir

they don't want hadoop user to be able to read/list/delete anything outside of 
the hadoop-dir "subdir"

Problems: 

To implement the "directory structure on flat key space" emulation logic we use 
to present a Hadoop FS on top of a blob store, we need to create / delete / 
list ancestors of {{hadoop-dir}}. 

I'd like us to either (1) document a workaround (example IAM ACLs) that gets 
this basic functionality, and/or (2) make improvements to make this less 
painful.

We've discussed some of these issues before but I didn't see a dedicated JIRA.





[jira] [Created] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2018-06-08 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-15525:
-

 Summary: s3a: clarify / improve support for mixed ACL buckets
 Key: HADOOP-15525
 URL: https://issues.apache.org/jira/browse/HADOOP-15525
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: Aaron Fabbri


Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
an S3 bucket.

For example, assume Hadoop uses some IAM identity "hadoop" to which they wish 
to grant full permission on everything under the following path:

s3a://bucket/a/b/c/hadoop-dir

They don't want the hadoop user to be able to read/list/delete anything 
outside of the hadoop-dir "subdir".

Problems: 

To implement the "directory structure on flat key space" emulation logic we use 
to present a Hadoop FS on top of a blob store, we need to create / delete / 
list ancestors of {{hadoop-dir}}. 

I'd like us to either (1) document a workaround (example IAM ACLs) that 
provides this basic functionality, and/or (2) make improvements so this is 
less painful.

We've discussed some of these issues before but I didn't see a dedicated JIRA.






[jira] [Resolved] (HADOOP-11697) Use larger value for fs.s3a.connection.timeout.

2018-06-08 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri resolved HADOOP-11697.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0

> Use larger value for fs.s3a.connection.timeout.
> ---
>
> Key: HADOOP-11697
> URL: https://issues.apache.org/jira/browse/HADOOP-11697
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: s3
> Fix For: 3.0.0
>
> Attachments: HADOOP-11697.001.patch, HDFS-7908.000.patch
>
>
> The default value of {{fs.s3a.connection.timeout}} is {{5}} milliseconds. 
> It causes many {{SocketTimeoutException}} when uploading large files using 
> {{hadoop fs -put}}. 
> Also, the units for {{fs.s3a.connection.timeout}} and 
> {{fs.s3a.connection.establish.timeout}} are milliseconds. For S3 
> connections, I think it is not necessary to have sub-second timeout values. 
> Thus I suggest changing the time unit to seconds, to ease sys admins' jobs.
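The unit change suggested above can be sketched without breaking millisecond-based code, by accepting unit-suffixed values. This is a hedged illustration (a hypothetical helper, similar in spirit to Hadoop's Configuration.getTimeDuration, not the actual S3A code):

```java
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch: parse a timeout value with an optional unit
 * suffix, so admins can write whole seconds ("5s") while the code
 * keeps working in milliseconds internally.
 */
public class TimeoutParse {
  public static long toMillis(String value) {
    if (value.endsWith("ms")) {
      return Long.parseLong(value.substring(0, value.length() - 2));
    }
    if (value.endsWith("s")) {
      return TimeUnit.SECONDS.toMillis(
          Long.parseLong(value.substring(0, value.length() - 1)));
    }
    return Long.parseLong(value);  // bare number: treat as milliseconds
  }
}
```

Under this sketch, "5s" and "5000ms" resolve to the same internal value, so existing millisecond configurations keep working.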






[jira] [Commented] (HADOOP-11697) Use larger value for fs.s3a.connection.timeout.

2018-06-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506552#comment-16506552
 ] 

Aaron Fabbri commented on HADOOP-11697:
---

I think we can close this, since HADOOP-12346 bumped these values a while back.

> Use larger value for fs.s3a.connection.timeout.
> ---
>
> Key: HADOOP-11697
> URL: https://issues.apache.org/jira/browse/HADOOP-11697
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: s3
> Attachments: HADOOP-11697.001.patch, HDFS-7908.000.patch
>
>
> The default value of {{fs.s3a.connection.timeout}} is {{5}} milliseconds. 
> It causes many {{SocketTimeoutException}} when uploading large files using 
> {{hadoop fs -put}}. 
> Also, the units for {{fs.s3a.connection.timeout}} and 
> {{fs.s3a.connection.establish.timeout}} are milliseconds. For S3 
> connections, I think it is not necessary to have sub-second timeout values. 
> Thus I suggest changing the time unit to seconds, to ease sys admins' jobs.






[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506539#comment-16506539
 ] 

genericqa commented on HADOOP-15518:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 36m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 36m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
2s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926920/HADOOP-15518-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 61010ea62644 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a127244 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14748/testReport/ |
| Max. process+thread count | 291 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14748/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Authentication filter calling 

[jira] [Commented] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506517#comment-16506517
 ] 

genericqa commented on HADOOP-15520:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927080/HADOOP-15520.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 97eefd97f07c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a127244 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14747/testReport/ |
| Max. process+thread count | 1504 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14747/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
>   

[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506506#comment-16506506
 ] 

Eric Yang commented on HADOOP-15518:


One footnote about this change: if multiple AuthenticationFilters are 
configured with different service principal names, the TGS granted to the 
remote client carries the principal name of the first AuthenticationFilter 
that gets triggered.  This may look unexpected when auditing, via klist, 
where a user has been.  The ability to configure different HTTP principals 
on the same server port shouldn't exist, but developers should be aware of 
this API imperfection so they can avoid the pitfall.
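The guard this patch proposes can be simulated in a self-contained way. The sketch below (hypothetical names, not the hadoop-auth classes) shows a chain of filters where each one skips its authentication handler if an earlier filter has already set a principal, so stacking two Kerberos-configured filters costs one exchange instead of two:

```java
/**
 * Self-contained simulation of the proposed guard: a filter only runs
 * its (expensive) authentication handler when no earlier filter in the
 * chain has authenticated the request. Names are hypothetical, not the
 * actual hadoop-auth AuthenticationFilter.
 */
public class AuthFilterChainDemo {
  static class Request {
    String principal;      // null until some filter authenticates
    int authAttempts;      // how many filters actually ran their handler
  }

  static class AuthFilter {
    void doFilter(Request req) {
      if (req.principal != null) {
        return;            // already authenticated upstream: skip handler
      }
      req.authAttempts++;  // stands in for the Kerberos exchange
      req.principal = "user@EXAMPLE.COM";
    }
  }

  public static int attemptsThroughChain(int filterCount) {
    Request req = new Request();
    for (int i = 0; i < filterCount; i++) {
      new AuthFilter().doFilter(req);
    }
    return req.authAttempts;
  }

  public static void main(String[] args) {
    // Two stacked filters, but only one real authentication happens,
    // avoiding the double exchange that trips replay detection.
    System.out.println(attemptsThroughChain(2));
  }
}
```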

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanisms have been 
> configured.  For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  






[jira] [Created] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2018-06-08 Thread Joseph Smith (JIRA)
Joseph Smith created HADOOP-15524:
-

 Summary: BytesWritable causes OOME when array size reaches 
Integer.MAX_VALUE
 Key: HADOOP-15524
 URL: https://issues.apache.org/jira/browse/HADOOP-15524
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Joseph Smith


BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal array.  
In my environment, this causes an OOME:
{code:java}
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
exceeds VM limit
{code}
byte[Integer.MAX_VALUE-2] must be used to prevent this error.

Tested on OSX and CentOS 7 using Java version 1.8.0_131.

I noticed that java.util.ArrayList contains the following
{code:java}
/**
 * The maximum size of array to allocate.
 * Some VMs reserve some header words in an array.
 * Attempts to allocate larger arrays may result in
 * OutOfMemoryError: Requested array size exceeds VM limit
 */
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
{code}
 

BytesWritable.setSize should use something similar to prevent an OOME from 
occurring.
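One possible shape for "something similar" is sketched below. This is hypothetical code, not the actual BytesWritable implementation: it grows the backing array with long arithmetic and clamps requests to the JVM-safe maximum, as java.util.ArrayList does.

```java
/**
 * Hypothetical sketch of a setSize that refuses allocations above the
 * JVM-safe maximum, in the spirit of java.util.ArrayList. Not the
 * actual BytesWritable code.
 */
public class SafeBytes {
  // Some VMs reserve header words in an array; requests above this
  // limit throw "OutOfMemoryError: Requested array size exceeds VM limit".
  static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  private byte[] bytes = new byte[0];
  private int size;

  public void setSize(int newSize) {
    if (newSize < 0 || newSize > MAX_ARRAY_SIZE) {
      throw new IllegalArgumentException("unsupported size: " + newSize);
    }
    if (newSize > bytes.length) {
      // Grow by ~1.5x using long arithmetic so the multiply cannot
      // overflow, then clamp to the safe maximum before allocating.
      long target = Math.max((long) newSize, (long) bytes.length * 3 / 2);
      byte[] grown = new byte[(int) Math.min(target, MAX_ARRAY_SIZE)];
      System.arraycopy(bytes, 0, grown, 0, size);
      bytes = grown;
    }
    this.size = newSize;
  }

  public int getCapacity() {
    return bytes.length;
  }
}
```

With a guard like this, a request at Integer.MAX_VALUE fails fast with a clear exception instead of an OutOfMemoryError deep inside the allocator.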

 






[jira] [Commented] (HADOOP-8853) BytesWritable setsize unchecked

2018-06-08 Thread Joseph Smith (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506470#comment-16506470
 ] 

Joseph Smith commented on HADOOP-8853:
--

Looks like a duplicate of HADOOP-11901.

> BytesWritable setsize unchecked
> ---
>
> Key: HADOOP-8853
> URL: https://issues.apache.org/jira/browse/HADOOP-8853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.1-alpha
>Reporter: Sven Meys
>Priority: Major
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When setting an array of length 1183230720 (in my case), the method will 
> throw a NegativeArraySizeException.
> Cause is the following method.
> public void setSize(int size) {
> if (size > getCapacity()) {
>   setCapacity(size * 3 / 2);
> }
> this.size = size;
>   }
> size * 3 is evaluated first, which means that for any value greater than 
> 715,827,882 (682.6 MB) the result will overflow and become negative. Thus 
> this method is unsafe.
> It would be nice to have this hidden feature documented or have a failsafe in 
> place.
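The overflow described above can be reproduced directly. The following hedged sketch (hypothetical names, not the BytesWritable code) mirrors the unsafe "size * 3 / 2" growth alongside an overflow-safe variant that does the multiply in long and clamps the result:

```java
/**
 * Illustration of the int overflow in "size * 3 / 2" and one
 * overflow-safe alternative. Names are hypothetical, not Hadoop APIs.
 */
public class CapacityOverflow {
  /** Mirrors the unsafe growth: size * 3 wraps negative past 715,827,882. */
  public static int unsafeGrow(int size) {
    return size * 3 / 2;
  }

  /** Overflow-safe: multiply in long, then clamp into the int range. */
  public static int safeGrow(int size) {
    long grown = (long) size * 3 / 2;
    return (int) Math.min(grown, Integer.MAX_VALUE - 8L);
  }

  public static void main(String[] args) {
    int size = 1183230720;                 // the value reported above
    System.out.println(unsafeGrow(size));  // negative: int overflow
    System.out.println(safeGrow(size));    // positive, clamped growth
  }
}
```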






[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread Owen O'Malley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506425#comment-16506425
 ] 

Owen O'Malley commented on HADOOP-15518:


This looks good, [~kminder]. +1

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanisms have been 
> configured.  For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  






[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-15518:
---
Status: Patch Available  (was: Open)

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanisms have been 
> configured.  For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506411#comment-16506411
 ] 

genericqa commented on HADOOP-15307:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15307 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927067/HADOOP-15307.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f3f65520ca14 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c42dcc7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14744/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-nfs U: 
hadoop-common-project/hadoop-nfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14744/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve NFS error handling: 

[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506410#comment-16506410
 ] 

genericqa commented on HADOOP-15521:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 48s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 15s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 45s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-15521 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927073/HADOOP-15521-branch-2-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux a741aab307f0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / b991b38 |
| maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14745/testReport/ |
| Max. process+thread count | 181 (vs. ulimit of 1) |
| modules | C: hadoop-project hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14745/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread Esfandiar Manii (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506400#comment-16506400
 ] 

Esfandiar Manii commented on HADOOP-15521:
--

Only the Azure SDK. Initially I forgot to add "branch-2" to the patch name, so 
it caused a merge conflict with trunk.

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.10.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506398#comment-16506398
 ] 

genericqa commented on HADOOP-15307:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 39s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 22s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15307 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12926924/HADOOP-15307.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ecb8d8f5ec21 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c42dcc7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14743/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-nfs U: hadoop-common-project/hadoop-nfs |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14743/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve NFS error handling: 

[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506351#comment-16506351
 ] 

Steve Loughran commented on HADOOP-15521:
-

Does this change any of the dependencies?

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.10.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506341#comment-16506341
 ] 

Arash Nabili commented on HADOOP-15520:
---

Thank you for the feedback. I have addressed all of your suggestions, and made 
sure that the new tests still pass. I have attached the updated patch.

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: (was: HADOOP-15520.003.patch)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.003.patch
Status: Patch Available  (was: Open)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15520 stopped by Arash Nabili.
-
> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Status: In Progress  (was: Patch Available)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.003.patch
Status: Patch Available  (was: Open)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: (was: HADOOP-15520.002.patch)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.003.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Status: Open  (was: Patch Available)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-15521:
-
Attachment: HADOOP-15521-branch-2-001.patch

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.10.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-08 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506267#comment-16506267
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15522:
---

Tests were added in HADOOP-15516.

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15522-HADOOP-15461.v1.patch
>
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.
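As a rough illustration of the direction the issue describes, symlink resolution can be done with the Java 7+ NIO API instead of forking a `readlink` process. This is only a sketch: the class and method names (`ReadLinkDemo`, `readLink`) are illustrative, not the patch's actual code, and returning "" for non-links is an assumption about matching the shell-based behavior.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadLinkDemo {

    // Hypothetical Java-native replacement for a shell-based readlink:
    // resolve a symlink's target with Files.readSymbolicLink instead of
    // executing `readlink` in a subprocess.
    static String readLink(Path link) throws IOException {
        if (!Files.isSymbolicLink(link)) {
            // Assumption: mirror the shell variant by returning "" when
            // the path is not a symbolic link.
            return "";
        }
        return Files.readSymbolicLink(link).toString();
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("readlink-demo");
        Path target = Files.createFile(dir.resolve("target.txt"));
        Path link = dir.resolve("link.txt");
        Files.createSymbolicLink(link, target);
        System.out.println(readLink(link));   // prints the target path
        System.out.println(readLink(target)); // prints an empty line (not a symlink)
    }
}
```

Besides avoiding process-spawn overhead, the NIO call raises a typed `IOException` on failure instead of requiring exit-code parsing.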



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-15521:
-
Affects Version/s: 2.10.0

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.10.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: HADOOP-15307.005.patch

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch, HADOOP-15307.005.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug existed since 
> its inception
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.
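A simplified, standalone sketch of the fix direction described above: accept AUTH_SYS as a verifier flavor rather than falling through to `UnsupportedOperationException`. Everything here is illustrative, not the actual Hadoop code: `AuthFlavor` only mirrors `RpcAuthInfo.AuthFlavor`, the returned strings stand in for real Verifier subclasses, and mapping AUTH_SYS to an AUTH_NONE-style verifier is an assumption for the sake of the example.

```java
public class VerifierSketch {

    // Mirrors the three auth flavors the class comment in Verifier mentions.
    enum AuthFlavor { AUTH_NONE, AUTH_SYS, RPCSEC_GSS }

    // Hypothetical dispatch: before the fix, only AUTH_NONE and RPCSEC_GSS
    // were handled, so an AUTH_SYS verifier in a denied reply hit the throw.
    static String readFlavorAndVerifier(AuthFlavor flavor) {
        switch (flavor) {
            case AUTH_NONE:
                return "VerifierNone";
            case AUTH_SYS:
                // Assumption: treat an AUTH_SYS verifier like AUTH_NONE
                // so the reply can still be parsed instead of crashing.
                return "VerifierNone";
            case RPCSEC_GSS:
                return "VerifierGSS";
            default:
                throw new UnsupportedOperationException(
                    "Unsupported verifier flavor" + flavor);
        }
    }

    public static void main(String[] args) {
        // No longer throws for AUTH_SYS.
        System.out.println(readFlavorAndVerifier(AuthFlavor.AUTH_SYS));
    }
}
```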



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: (was: HADOOP-15307.005.patch)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch, HADOOP-15307.005.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug existed since 
> its inception
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: HADOOP-15307.005.patch

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch, HADOOP-15307.005.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS as well.






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506210#comment-16506210
 ] 

Gabor Bota commented on HADOOP-15307:
-

Thanks [~templedf], I've fixed it in v005.

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch, HADOOP-15307.005.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS as well.






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-08 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506202#comment-16506202
 ] 

Daniel Templeton commented on HADOOP-15307:
---

Thanks, [~gabor.bota].  Two more tiny issues, and I think we're good.  First, 
there should be a space before the paren in the _if_ statements.  I know there 
isn't one on the existing _if_ statements, but that's also wrong.  You can 
either just fix yours or fix them for the whole _if-else_.  Second, in the 
exception message, please add a colon after "flavor".
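For illustration only (a hypothetical stand-in, not the patch's actual code), the two requested fixes applied together look like:

```java
// Hypothetical example of the review's two style points:
// a space before the paren in the if statement, and a colon
// after "flavor" in the exception message.
public class StyleFixSketch {
    static String unsupported(int flavor) {
        if (flavor != 0) {
            return "Unsupported verifier flavor: " + flavor;
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(unsupported(1));
    }
}
```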

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS as well.






[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-06-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506123#comment-16506123
 ] 

Andras Bokor commented on HADOOP-14178:
---

[~ajisakaa],

# The one checkstyle warning makes sense: importing ContainerStatus in 
TestChildQueueOrder is no longer needed after the patch.
# TestTaskAttemptListenerImpl#testCheckpointIDTracking: the mockTask, mockJob, 
and clock objects became unused.

Other than these minor things I have no new comments.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.






[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506020#comment-16506020
 ] 

genericqa commented on HADOOP-15521:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15521 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926980/HADOOP-15521-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14742/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests






[jira] [Updated] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15521:

Status: Patch Available  (was: Open)

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch
>
>
> Upgraded Azure Storage Sdk to 7.0.0
> Fixed code issues and a couple of tests






[jira] [Assigned] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15520:
---

Assignee: Arash Nabili

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Assignee: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Commented] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505997#comment-16505997
 ] 

Steve Loughran commented on HADOOP-15520:
-

thanks for this

* I like the detail on assert failures, always good.
* We have an 80-character line width rule though; you are going to have to 
split things down. Sorry.
* In {{TestLimitInputStream}}, the inner class {{TestInputStream}} should have 
a different name (no Test* prefix) and be made static.
* And for strictness, use try-with-resources to trigger closing of the streams, 
even on an assert failure.
* If you want preformatted text in the javadocs, you'll need to use {{<pre>}} 
sections, or {{@code}} clauses.
* There are some requirements about test timeouts (mandatory) and preferences 
(naming threads). If your tests extend org.apache.hadoop.test.HadoopTestBase 
then you get these.
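The try-with-resources point can be sketched as below. LimitInputStream, HadoopTestBase, and JUnit live in the Hadoop tree, so this standalone sketch substitutes a hypothetical readLimited helper over plain java.io; only the resource-handling pattern is the point.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class LimitStreamSketch {
    // Hypothetical stand-in for the limited-read behavior under test:
    // read at most `limit` bytes and report how many were actually read.
    static int readLimited(InputStream in, int limit) throws IOException {
        int count = 0;
        while (count < limit && in.read() != -1) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello world".getBytes();
        // try-with-resources closes the stream even if the check below
        // throws, which is the strictness the review asks for.
        try (InputStream in = new ByteArrayInputStream(data)) {
            int n = readLimited(in, 5);
            if (n != 5) {
                throw new AssertionError("expected 5 bytes, read " + n);
            }
            System.out.println("read " + n + " bytes");
        }
    }
}
```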

Ordering of imports is trouble: it's not strictly enforced the way Spark does 
it, but we have some preferences and like them to be followed. Imports are 
often where merge conflicts arise, so once in, we can't easily reorder them 
(example: TestShell).

The order I have in my IDE is:

{code}
javax.*
java.*
\n
(other())
\n
org.apache.*
\n
all static imports, alpha sorted.
{code}

(+static imports can use .* if they want)

Before
{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.util.IntrusiveCollection.Element;
import org.junit.Test;
{code}

After

{code}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.junit.Test;

import org.apache.hadoop.util.IntrusiveCollection.Element;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
{code}

Other than those details, the tests themselves look good. Revised the title & 
component to reflect the area of coverage.

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15520:

Component/s: util

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test, util
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15520:

Component/s: test

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add tests for various org.apache.hadoop.util classes

2018-06-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15520:

Summary: Add tests for various org.apache.hadoop.util classes  (was: Add 
new JUnit test cases)

> Add tests for various org.apache.hadoop.util classes
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-08 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505924#comment-16505924
 ] 

Sunil Govindan commented on HADOOP-15518:
-

Thanks [~kminder].

I checked this change in the cases below:
 # Accessed the old RM UI and the new YARN UI from a kerberized browser when 
AuthenticationFilter was configured and the HTTP auth type was kerberos.
 # Both UIs were accessible when the HTTP auth type was configured as 
JWTRedirectAuthenticationHandler.

Before this patch, we were getting a replay-attack error for UI2 as multiple 
auth handlers were present. For UI2, I found that the initial /ui2 request gets 
a 401 from the auth handler, and a later request is then accepted as the proper 
cookie is present. But jetty does a subsequent redirect (not from the client 
side) to access /ui2/index.html, which does not carry this cookie; hence the 
GSS exception.

Since this patch checks for the principal as well, I think it looks fine. cc 
[~vinodkv] [~eyang]

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanisms have 
> been configured.  For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  
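A hedged sketch of the guard the description implies: only invoke the handler when the request carries no authenticated principal yet. The Request interface below is a stand-in for HttpServletRequest (whose getUserPrincipal returns a Principal, not a String); the real change lives in hadoop-auth's AuthenticationFilter.

```java
// Hypothetical sketch, not the actual HADOOP-15518 patch.
public class AuthFilterSketch {
    interface Request {
        // Stand-in for HttpServletRequest#getUserPrincipal:
        // null means the request is not yet authenticated.
        String getUserPrincipal();
    }

    static String filter(Request req) {
        if (req.getUserPrincipal() != null) {
            // Already authenticated earlier in the filter chain (e.g. by
            // another configured mechanism): skip the handler, since running
            // Kerberos a second time triggers replay-attack detection.
            return "pass-through";
        }
        return "authenticate";
    }

    public static void main(String[] args) {
        System.out.println(filter(() -> "alice")); // pass-through
        System.out.println(filter(() -> null));    // authenticate
    }
}
```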






[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-06-08 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505858#comment-16505858
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

I also created HDDS-157 for hadoop-ozone and hadoop-hdds.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, HADOOP-10783.5.patch, HADOOP-10783.6.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get the following in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
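To illustrate why commons-lang 2.6 fails here: SystemUtils.IS_OS_UNIX is derived from a fixed set of OS-name checks, and a FreeBSD check was only added in commons-lang3. The sketch below mirrors that idea; the prefix lists are an approximation for illustration, not the actual SystemUtils source.

```java
// Approximate reconstruction of the IS_OS_UNIX logic for illustration.
public class OsCheckSketch {
    static boolean isOsUnix(String osName, String[] unixPrefixes) {
        for (String prefix : unixPrefixes) {
            if (osName.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Roughly the 2.6-era list: no BSD variants.
        String[] lang2 = {"AIX", "HP-UX", "Irix", "Linux",
                          "Mac OS X", "Solaris", "SunOS"};
        // commons-lang3 adds FreeBSD (and other BSDs).
        String[] lang3 = {"AIX", "HP-UX", "Irix", "Linux",
                          "Mac OS X", "Solaris", "SunOS", "FreeBSD"};
        // On FreeBSD the old list yields false, which is what produces
        // "java.io.IOException: The OS is not UNIX." above.
        System.out.println(isOsUnix("FreeBSD", lang2)); // false
        System.out.println(isOsUnix("FreeBSD", lang3)); // true
    }
}
```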






[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-06-08 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Release Note: 


A new configuration, `hadoop.security.kms.client.token.use.uri.format`, is 
introduced in the KMS clients to control the service field of the delegation 
tokens fetched from the KMS. Historically, KMS delegation tokens have used 
ip:port as the service, so a client can only use a token to authenticate with 
one KMS server, even though the token is shared among all KMS servers on the 
server side. The default value of this configuration is false, for 
compatibility with the existing behavior.

When the configuration is set to true, KMS delegation tokens use the URI as 
their service, so clients can use a token to authenticate with all KMS servers.

Note that this should only be set to true if ALL clients and renewers are 
running software that contains HADOOP-14445. Clients running software without 
HADOOP-14445 will fail to authenticate if the token is in the URI format.

  was:


+Whether the KMS client provider should use uri format as delegation tokens'
+service field. Historically KMS tokens have ip:port as service, making
+KMS clients only able to use the token to authenticate with 1 KMS server,
+even though the token is shared among all KMS servers at server-side.
+With the tokens service in uri format, the clients can use it to
+authenticate with all KMS servers.
+Note that this should only be set to true if ALL clients are running
+software that contains HADOOP-14445. Clients running on software without
+HADOOP-14445 will fail to authenticate if the token is in uri format.
A new configuration, `hadoop.security.kms.client.token.use.uri.format`, is 
introduced in the KMS clients to control the service field of the delegation 
tokens fetched from the KMS. Historically KMS delegation tokens have ip:port as 
service, making KMS clients only able to use the token to authenticate with 1 
KMS server, even though the token is shared among all KMS servers at 
server-side. The default value of this configuration is false, to be compatible 
with existing behavior.

When the configuration is set to true, KMS delegation token will use uri as its 
service. This way, the clients can use it to authenticate with all KMS servers.

Note that this should only be set to true if ALL clients and renewers are 
running software that contains HADOOP-14445. Clients running on software 
without HADOOP-14445 will fail to authenticate if the token is in uri format.
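The service-field difference the release note describes can be sketched as follows. This is an illustrative stand-in, not Hadoop's actual `KMSClientProvider` code; the method names and the exact URI shape are assumptions for the example.

```java
// Hypothetical sketch of the two service-field formats for a KMS
// delegation token. Not the real Hadoop implementation.
public class KmsTokenServiceSketch {
    // Legacy format: the service is the single resolved host:port,
    // so a token fetched from one KMS instance will not match another.
    static String legacyService(String host, int port) {
        return host + ":" + port;
    }

    // URI format (HADOOP-14445): the service is the provider URI that
    // covers the whole load-balanced KMS group, so one token matches
    // every server behind it.
    static String uriService(String scheme, String authority, int port) {
        return scheme + "://" + authority + ":" + port + "/kms";
    }

    public static void main(String[] args) {
        System.out.println(legacyService("10.0.0.1", 9600));
        System.out.println(uriService("kms", "kms.example.com", 9600));
    }
}
```

With the legacy format, each of N KMS hosts yields a distinct service string; with the URI format, all N share one.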


> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (A client uses the KMS address/port as the key for 
> the delegation token.)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from 

[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-06-08 Thread Xiao Chen (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-14445:
---
Release Note: 


+Whether the KMS client provider should use uri format as delegation tokens'
+service field. Historically KMS tokens have ip:port as service, making
+KMS clients only able to use the token to authenticate with 1 KMS server,
+even though the token is shared among all KMS servers at server-side.
+With the tokens service in uri format, the clients can use it to
+authenticate with all KMS servers.
+Note that this should only be set to true if ALL clients are running
+software that contains HADOOP-14445. Clients running on software without
+HADOOP-14445 will fail to authenticate if the token is in uri format.
A new configuration, `hadoop.security.kms.client.token.use.uri.format`, is 
introduced in the KMS clients to control the service field of the delegation 
tokens fetched from the KMS. Historically, KMS delegation tokens have used 
ip:port as the service, so a client can only use a token to authenticate with 
one KMS server, even though the token is shared among all KMS servers on the 
server side. The default value of this configuration is false, for 
compatibility with the existing behavior.

When the configuration is set to true, KMS delegation tokens use the URI as 
their service, so clients can use a token to authenticate with all KMS servers.

Note that this should only be set to true if ALL clients and renewers are 
running software that contains HADOOP-14445. Clients running software without 
HADOOP-14445 will fail to authenticate if the token is in the URI format.

  was:


A new token kind, `KMS_DELEGATION_TOKEN`, is introduced for the delegation 
tokens issued by the KMS. This new token kind uses the full KMS URI as its 
service field, hence able to be aware of all the KMS servers that it is valid 
for. Legacy token kind, `kms-dt`, is deprecated.

Legacy token can still be used for authentication / renewal for backward 
compatibility.

By default, new KMS clients who get a `KMS_DELEGATION_TOKEN` will create an 
identical token of the legacy `kms-dt` kind, to support the hybrid of new 
clients and legacy clients during authentication. This behavior can be turned 
off by setting `hadoop.security.kms.client.copy.legacy.token` to false. It is 
recommended to turn this behavior off only after all of the following are 
upgraded to the new version: all KMS Servers, all KMS Clients, all KMS token 
renewers.
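The legacy-token copy behavior described in the earlier release note above can be sketched like this. The class and method names are illustrative, not Hadoop's `Credentials` API; only the token kinds (`KMS_DELEGATION_TOKEN`, `kms-dt`) come from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: a client receiving a new-kind KMS token also stores an
// identical token under the legacy kind, so older clients and renewers
// that look up "kms-dt" still find a usable token.
public class LegacyTokenCopySketch {
    // Stand-in for a credentials store keyed by token kind.
    static Map<String, String> credentials = new HashMap<>();

    static void receiveToken(String identifier, boolean copyLegacy) {
        credentials.put("KMS_DELEGATION_TOKEN", identifier);
        if (copyLegacy) {
            // Mirrors hadoop.security.kms.client.copy.legacy.token=true:
            // duplicate the token under the deprecated kind.
            credentials.put("kms-dt", identifier);
        }
    }

    public static void main(String[] args) {
        receiveToken("token-bytes", true);
        // A legacy client looking up "kms-dt" still authenticates.
        System.out.println(credentials.get("kms-dt"));
    }
}
```

Turning the copy off before every server, client, and renewer is upgraded would leave legacy lookups empty, which is why the note recommends disabling it only after a full upgrade.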


> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (A client uses the KMS address/port as the key for 
> the delegation token.)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-06-08 Thread Xiao Chen (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505761#comment-16505761 ]

Xiao Chen commented on HADOOP-14445:


Continuing on [this 
comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16464600&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16464600],
 I revived and modified patch 003. Attaching patch 14, ready for review.

Besides unit tests, I also manually tested this in a real cluster and covered 
the following matrix; all combinations passed. Additionally, I blacklisted the 
yarn user from accessing the KMS so that the Kerberos fallback could not work, 
and verified in the job's log aggregation that the token was indeed used for 
auth. (The renewer and worker nodes are separate, so I didn't test specific 
combinations of them; this simplifies the test matrix.)

||Submitter||Yarn NM(worker)||
|O|N|
|N|O|
|N|N|
|N,conf|N|

and 
||Submitter||Yarn RM(renewer)||
|O|N|
|N|O|
|N|N|
|N,conf|N|

O = without the patch, N = with the patch, N,conf = with the patch plus the 
configuration (i.e. -Dhadoop.security.kms.client.token.use.uri.format=true)

[~shahrs87], would you have cycles to review this? Thanks much

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (A client uses the KMS address/port as the key for 
> the delegation token.)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
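The lookup mismatch the quoted `openConnection` excerpt causes can be shown in isolation. `buildTokenService` here is a simplified stand-in for Hadoop's `SecurityUtil.buildTokenService`; the host names are made up for illustration.

```java
import java.net.InetSocketAddress;

// Sketch of why host:port-keyed tokens break under KMS HA: the service
// computed for the instance the token was fetched from never equals the
// service computed for a sibling instance.
public class TokenLookupSketch {
    // Simplified stand-in for SecurityUtil.buildTokenService(addr).
    static String buildTokenService(InetSocketAddress addr) {
        return addr.getHostString() + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        String fetchedFrom = buildTokenService(
            InetSocketAddress.createUnresolved("kms01.example.com", 9600));
        String lookupKey = buildTokenService(
            InetSocketAddress.createUnresolved("kms02.example.com", 9600));
        // Different keys: creds.getToken(service) misses for kms02, so
        // the client cannot reuse the token across instances.
        System.out.println(fetchedFrom.equals(lookupKey));
    }
}
```

This is exactly the gap between the code and the KMS documentation's claim that any instance can verify a token: the server side could verify it, but the client never presents it.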






[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-06-08 Thread Xiao Chen (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-14445:
---
Attachment: HADOOP-14445.14.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (A client uses the KMS address/port as the key for 
> the delegation token.)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-06-08 Thread Hudson (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505745#comment-16505745 ]

Hudson commented on HADOOP-15482:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14390 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14390/])
HADOOP-15482. Upgrade jackson-databind to version 2.9.5. Contributed by 
(jitendra: rev c42dcc7c47340d517563890269c6c112996e8897)
* (edit) hadoop-project/pom.xml


> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15482.001.patch, HADOOP-15482.002.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-06-08 Thread Jitendra Nath Pandey (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505734#comment-16505734 ]

Jitendra Nath Pandey commented on HADOOP-15482:
---

I have committed to trunk. Thanks [~ljain] for the contribution.

> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15482.001.patch, HADOOP-15482.002.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-06-08 Thread Jitendra Nath Pandey (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HADOOP-15482:
--
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15482.001.patch, HADOOP-15482.002.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5


