[jira] [Commented] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15054037#comment-15054037
 ] 

Hadoop QA commented on HADOOP-12628:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HADOOP-12628 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12777248/patch-for-hadoop-2.2.x.patch
 |
| JIRA Issue | HADOOP-12628 |
| Powered by | Apache Yetus 0.1.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8227/console |


This message was automatically generated.



> service level authorization check the combination of host and user (patch for 
> hadoop2.2.0)
> --
>
> Key: HADOOP-12628
> URL: https://issues.apache.org/jira/browse/HADOOP-12628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.2.0
> Environment: hadoop2.2.0
>Reporter: mai shurong
>Assignee: mai shurong
>  Labels: patch
> Attachments: patch-for-hadoop-2.2.x.patch
>
>
> Service level authorization in hadoop2.2.x can only check the user from the 
> client. Service level authorization in hadoop2.7.x adds the ability to check 
> the client's host (IP), but it can only check host and user independently; it 
> cannot check the combination of host and user.
> This patch adds the ability to check the combination of host and user. After 
> applying the patch, we can set the authorization for host-user pairs in 
> hadoop-policy.xml. Take security.client.protocol.acl for example:
> If we want only hadoop_user1 from 192.168.0.1 (IP) to be authorized, we can 
> set "hadoop_user1:192.168.0.1"; hadoop_user1 from any host other than 
> 192.168.0.1 is then not authorized. To authorize hadoop_user2 from 
> myhost.com.cn (hostname), we can set "hadoop_user2:myhost.com.cn"; to 
> authorize hadoop_user3 from any host, we just set "hadoop_user3" as before; 
> to authorize any user from the host 192.168.10.10, we can set 
> "*:192.168.10.10".
> Example:
> <property>
>   <name>security.client.protocol.acl</name>
>   <value>hadoop_user1:192.168.0.1,hadoop_user2:myhost.com.cn,hadoop_user3,*:192.168.10.10</value>
> </property>
> The same format also applies to the blocked access control list (hadoop2.6.0 
> and later):
> <property>
>   <name>security.client.protocol.acl.blocked</name>
>   <value>hadoop_user1:192.168.0.1,hadoop_user2:myhost.com.cn,hadoop_user3,*:192.168.10.10</value>
> </property>
> The format of the access control list is fully backward-compatible.
> The lists of users and groups are both comma-separated lists of names, and 
> the two lists are separated by a space.
> Add a blank at the beginning of the line if only a list of groups is to be 
> provided; equivalently, a comma-separated list of users followed by a space 
> or nothing implies only a set of given users. A special value of * implies 
> that all users from any host are allowed to access the service.
> Examples:
> user1,user2 group1,group2 (user1, user2, group1, group2 from any host are 
> authorized)
> user1:192.168.0.1,user2:myhost1.com.cn 
> group1:192.168.0.2,group2:myhost2.com.cn (user1 from 192.168.0.1, user2 from 
> myhost1.com.cn, group1 from 192.168.0.2, group2 from myhost2.com.cn are 
> authorized)
> \*:192.168.0.1,*:myhost1.com.cn (any user from 192.168.0.1 or from 
> myhost1.com.cn is authorized)
> \* (any user from any host is authorized)
> Example 1:
> <property>
>   <name>security.client.protocol.acl</name>
>   <value>*</value>
> </property>
> Example 2:
> <property>
>   <name>security.client.protocol.acl</name>
>   <value>user1,user2 group1,group2</value>
> </property>
> Example 3:
> <property>
>   <name>security.client.protocol.acl</name>
>   <value>*:192.168.0.1,*:myhost1.com.cn</value>
> </property>
> Example 4:
> <property>
>   <name>security.client.protocol.acl</name>
>   <value>user1:192.168.0.1,user2:myhost1.com.cn group1:192.168.0.2,group2:myhost2.com.cn</value>
> </property>
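The user:host matching described in the quoted description can be sketched as follows. This is an illustrative Python sketch, not the actual patch (which is Java, presumably inside Hadoop's ServiceAuthorizationManager); the function names are hypothetical, and the group list (the part after the space separator) is omitted for brevity.

```python
def parse_acl(acl):
    """Parse a comma-separated ACL into (user, host) pairs.

    Accepted entry forms, per the description: "user", "user:host",
    "*:host", and "*". An entry with no host part means "any host",
    matching the pre-patch behavior.
    """
    entries = []
    for item in acl.split(","):
        item = item.strip()
        if not item:
            continue
        if ":" in item:
            user, host = item.split(":", 1)
        else:
            user, host = item, "*"  # no host part: any host, as before
        entries.append((user, host))
    return entries


def is_authorized(acl, user, host):
    """True if some ACL entry matches both the user and the client host."""
    for acl_user, acl_host in parse_acl(acl):
        if (acl_user == "*" or acl_user == user) and \
           (acl_host == "*" or acl_host == host):
            return True
    return False
```

For instance, with the ACL from the description, `is_authorized("hadoop_user1:192.168.0.1,hadoop_user3", "hadoop_user1", "192.168.0.1")` is true, while the same user from any other host is rejected, and `hadoop_user3` is accepted from anywhere.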



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: (was: patch-for-hadoop-2.2.x.patch)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: patch-for-hadoop-2.2.x.patch



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: patch-for-hadoop-2.2.x.patch

patch for hadoop2.2.0



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: (was: patch-for-hadoop-2.2.x.patch)



[jira] [Commented] (HADOOP-12421) Add jitter to RetryInvocationHandler

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053970#comment-15053970
 ] 

Hadoop QA commented on HADOOP-12421:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12777210/HADOOP-12421-v4.patch 
|
| JIRA Issue | HADOOP-12421 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e10210dea2fa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality

[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Status: Patch Available  (was: Open)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Status: Open  (was: Patch Available)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: (was: patch-for-hadoop-2.6.x.patch)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Attachment: (was: patch-for-hadoop-2.5.x.patch)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Labels: patch  (was: improvement patch)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Hadoop Flags:   (was: Incompatible change)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Flags: Patch,Important  (was: Patch)



[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Environment: hadoop2.2.0



[jira] [Commented] (HADOOP-12421) Add jitter to RetryInvocationHandler

2015-12-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053891#comment-15053891
 ] 

Mingliang Liu commented on HADOOP-12421:


Thanks for working on this. One nit:
{code}
+long range = (long) (0.075 * retVal);
+
+// Only add jitter if there is some to add.
+if (retVal > 0 && range > 0) {
{code}
Can the _if_ condition be simplified as {{range > 0}}?
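For what it's worth, the simplification looks safe: {{range = (long) (0.075 * retVal)}} can only be positive when {{retVal}} is at least 14, so {{range > 0}} already implies {{retVal > 0}}. A small sketch (illustrative only; the symmetric jitter range is an assumption, not necessarily the patch's exact behavior):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the jitter logic discussed above; names mirror the quoted
// snippet, but this is an illustration rather than the patch itself.
class JitterSketch {
    // Returns retVal with up to +/-7.5% random jitter applied.
    static long addJitter(long retVal) {
        long range = (long) (0.075 * retVal);
        // range > 0 already implies retVal > 0: the long cast makes
        // range positive only when retVal >= 14, so the extra
        // "retVal > 0" test in the quoted condition is redundant.
        if (range > 0) {
            retVal += ThreadLocalRandom.current().nextLong(-range, range + 1);
        }
        return retVal;
    }
}
```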

> Add jitter to RetryInvocationHandler
> 
>
> Key: HADOOP-12421
> URL: https://issues.apache.org/jira/browse/HADOOP-12421
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12421-v1.patch, HADOOP-12421-v2.patch, 
> HADOOP-12421-v3.patch, HADOOP-12421-v4.patch
>
>
> Calls to NN can become synchronized across a cluster during NN failover. This 
> leads to a spike in requests until things recover. Making an already tricky 
> time worse.





[jira] [Updated] (HADOOP-12628) service level authorization check the combination of host and user (patch for hadoop2.2.0)

2015-12-11 Thread mai shurong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mai shurong updated HADOOP-12628:
-
Affects Version/s: (was: 2.6.2)
   (was: 2.6.1)
   (was: 2.5.2)
   (was: 2.5.1)
   (was: 2.6.0)
   (was: 2.4.1)
   (was: 2.5.0)
   (was: 2.4.0)
   (was: 2.3.0)
 Target Version/s:   (was: 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.5.1, 2.5.2, 
2.6.0, 2.6.1, 2.6.2)
  Summary: service level authorization check the combination of 
host and user (patch for hadoop2.2.0)  (was: service level authorization check 
the combination of host and user)



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053862#comment-15053862
 ] 

Hadoop QA commented on HADOOP-12563:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
46s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
27s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 2m 24s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 52s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
55s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 23m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 58m 49s 
{color} | {color:red} root-jdk1.7.0_91 with JDK v1.7.0_91 generated 4 new 
issues (was 729, now 729). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 48s 
{color} | {color:red} Patch generated 1 new checkstyle issues in root (total 
was 27, now 1). {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 58s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 9m 32s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66 with JDK 
v1.8.0_66 generated 2 new issues (was 7, now 9). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 15m 52s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91 with JDK 
v1.7.0_91 generated 2 new issues (was 7, now 9). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {c

[jira] [Updated] (HADOOP-12421) Add jitter to RetryInvocationHandler

2015-12-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12421:
---
Attachment: HADOOP-12421-v4.patch

Rebased on trunk



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2015-12-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053770#comment-15053770
 ] 

Allen Wittenauer commented on HADOOP-12563:
---

Keep in mind that we're also thinking about YARN, etc., so this won't be a 
filesystem-specific interface.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053724#comment-15053724
 ] 

Hadoop QA commented on HADOOP-12581:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 34, now 34). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 29s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 9s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12777165/HADOOP-12581.002.patch
 |
| JIRA Issue | HADOOP-12581 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| una

[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2015-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053664#comment-15053664
 ] 

Daryn Sharp commented on HADOOP-12563:
--

Skimmed the patch because it looks interesting!

Please don't use getServiceName and getDelegationToken as your interface.  It 
won't work for multi-token services.  There's a reason why the filesystem 
javadoc refers to using addDelegationTokens: a compound filesystem like ViewFs 
requires obtaining multiple tokens, and fetching an RM token typically also 
implicitly acquires a JHS or AHS token.

You also cannot assume to know the alias that will be used by a provider, which 
is actually impossible when n-many tokens may be returned.

It would be great if you had something like a -fs option so that every custom fs 
doesn't need to register its scheme; 
path.getFileSystem(conf).addDelegationTokens() would handle all scenarios.
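Daryn's point, that a compound filesystem must be able to contribute several tokens into a shared credential set rather than return a single token, can be illustrated with a toy model. This self-contained sketch only mirrors the shape of Hadoop's FileSystem.addDelegationTokens(renewer, credentials); every class name below is hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model: each service adds its token(s) into a shared map, so a compound
// filesystem (like ViewFs) can contribute one token per underlying service.
public class MultiTokenSketch {
    interface TokenSource {
        // Adds however many tokens this service needs into the shared credentials.
        void addDelegationTokens(String renewer, Map<String, String> credentials);
    }

    // A leaf service that needs exactly one token.
    static class SimpleFs implements TokenSource {
        final String service;
        SimpleFs(String service) { this.service = service; }
        public void addDelegationTokens(String renewer, Map<String, String> creds) {
            creds.put(service, "token-for-" + service + "-renewable-by-" + renewer);
        }
    }

    // A compound filesystem delegates to every mounted child, so a single
    // getDelegationToken() call could never cover all of them.
    static class CompoundFs implements TokenSource {
        final List<TokenSource> mounts = new ArrayList<>();
        CompoundFs(TokenSource... children) {
            for (TokenSource c : children) mounts.add(c);
        }
        public void addDelegationTokens(String renewer, Map<String, String> creds) {
            for (TokenSource m : mounts) m.addDelegationTokens(renewer, creds);
        }
    }

    public static Map<String, String> collect(TokenSource fs, String renewer) {
        Map<String, String> creds = new LinkedHashMap<>();
        fs.addDelegationTokens(renewer, creds);
        return creds;
    }

    public static void main(String[] args) {
        TokenSource viewFs = new CompoundFs(new SimpleFs("nn1"), new SimpleFs("nn2"));
        // The compound mount yields a token per child, not a single token.
        System.out.println(collect(viewFs, "yarn"));
    }
}
```

The design point is that the caller owns the credential set and the service decides how many entries to add, which also sidesteps guessing provider aliases up front.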

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12634) Change Lazy Rename Pending Operation Completion of WASB to address case of potential data loss due to partial copy

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053509#comment-15053509
 ] 

Hadoop QA commented on HADOOP-12634:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12777080/HADOOP-12634.01.patch 
|
| JIRA Issue | HADOOP-12634 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48df860487c7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 576b569 |
| findbugs | v3.0.0 |
| JDK v1.7.0_91  Test Results | 
https://build

[jira] [Updated] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12581:
---
Attachment: HADOOP-12581.002.patch

Now with added *BSD goodness :-)

> ShellBasedIdMapping needs support for Solaris
> 
>
> Key: HADOOP-12581
> URL: https://issues.apache.org/jira/browse/HADOOP-12581
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-12581.001.patch, HADOOP-12581.002.patch
>
>
> ShellBasedIdMapping only supports Linux and OSX, support for Solaris needs 
> adding.
> From looking at the Linux support in ShellBasedIdMapping, the same sequences 
> of shell commands should work for Solaris as well so all that's probably 
> needed is to change the implementation of checkSupportedPlatform() to treat 
> Linux and Solaris the same way, plus possibly some renaming of other methods 
> to make it more obvious they are not Linux-only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2015-12-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053412#comment-15053412
 ] 

Allen Wittenauer commented on HADOOP-12563:
---

Ping [~owen.omalley] to help review this. ;)

I haven't had a chance to apply and execute the patch, but here is some feedback 
based on visual inspection:

1) In 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java

{code}
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
{code}

Not part of this patch, but clearly wrong nonetheless, especially with YARN-4435 
in the pipeline.  We should update it to include YARN while we're here.

2)
writeLegacyTokenStorageFile, etc.

I think I'd rather see these called something with version 0 or java 
serialization or something else.  This way if there is ever a version 2 (we 
drop protobuf?), we're covered.  Bonus points if we could somehow tie the 
dtutil -format option to the methods and version.

3) TestDtUtilShell.java:
System.getProperty("test.build.data", "/tmp"), "TestDtUtilShell");

Let's set this to target/ instead of /tmp to be less racy when multiple unit 
tests run on the same machine.

Thanks for fixing the service name in the usage. :)


> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12634) Change Lazy Rename Pending Operation Completion of WASB to address case of potential data loss due to partial copy

2015-12-11 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12634:
---
Attachment: HADOOP-12634.01.patch

> Change Lazy Rename Pending Operation Completion of WASB to address case of 
> potential data loss due to partial copy
> --
>
> Key: HADOOP-12634
> URL: https://issues.apache.org/jira/browse/HADOOP-12634
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Critical
> Attachments: HADOOP-12634.01.patch
>
>
> HADOOP-12334 changed the mode of the copy operation of HBase WAL archiving to 
> bypass Azure Storage throttling after retries, via a client-side copy. 
> However, a process crash while the copy is partially done would result in a 
> scenario where the source and destination blobs have different contents, 
> which the lazy rename pending operation does not handle, causing data 
> loss. We need to fix the lazy rename pending operation to address this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12634) Change Lazy Rename Pending Operation Completion of WASB to address case of potential data loss due to partial copy

2015-12-11 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12634:
---
Status: Patch Available  (was: Open)

> Change Lazy Rename Pending Operation Completion of WASB to address case of 
> potential data loss due to partial copy
> --
>
> Key: HADOOP-12634
> URL: https://issues.apache.org/jira/browse/HADOOP-12634
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Critical
> Attachments: HADOOP-12634.01.patch
>
>
> HADOOP-12334 changed the mode of the copy operation of HBase WAL archiving to 
> bypass Azure Storage throttling after retries, via a client-side copy. 
> However, a process crash while the copy is partially done would result in a 
> scenario where the source and destination blobs have different contents, 
> which the lazy rename pending operation does not handle, causing data 
> loss. We need to fix the lazy rename pending operation to address this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053042#comment-15053042
 ] 

Hadoop QA commented on HADOOP-12581:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 34, now 36). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 20s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12777046/HADOOP-12581.001.patch
 |
| JIRA Issue | HADOOP-12581 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b0259cf2beb7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 20

[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2015-12-11 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053035#comment-15053035
 ] 

Sean Mackrory commented on HADOOP-12537:


The failures on hadoop-common were also occurring in recently committed patches, 
so I do not believe they are related to mine.

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property, nor the 
> required code to pass it through to the API (at least not that I can find), in 
> any of the S3 connectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12007) GzipCodec native CodecPool leaks memory

2015-12-11 Thread Arnaud Linz (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052987#comment-15052987
 ] 

Arnaud Linz commented on HADOOP-12007:
--

I have the same problem. YARN kills my container because my streaming app 
uses GzipCodec and creates a new off-heap buffer each time a new HDFS file is 
created.


> GzipCodec native CodecPool leaks memory
> ---
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Yejun Yang
>
> org/apache/hadoop/io/compress/GzipCodec.java calls 
> CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool. But 
> compressor objects are never actually returned to the pool, which causes a 
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor object 
> to the pool. But CompressionCodec.Util.createOutputStreamWithCodecPool actually 
> returns a CompressorStream which overrides close().
> This causes CodecPool.returnCompressor to never be called. In my log file I 
> can see lots of "Got brand-new compressor [.gz]" but no "Got recycled 
> compressor".
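The leak pattern described above, a pooled compressor that is only recycled if close() routes it back to the pool, can be modeled in a few lines. This is an illustrative sketch, not the actual Hadoop CodecPool or stream classes; all names here are stand-ins:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the pooling contract: a compressor borrowed from the pool must
// be returned in close(). A subclass that overrides close() without routing
// through the return path leaks one (in Hadoop, off-heap) buffer per file.
public class CodecPoolSketch {
    static final Deque<Object> POOL = new ArrayDeque<>();

    static Object getCompressor() {
        Object c = POOL.poll();
        return c != null ? c : new Object(); // "brand-new" when the pool is empty
    }

    static void returnCompressor(Object c) {
        POOL.push(c); // makes the compressor "recycled" for the next stream
    }

    // Correct pattern: close() gives the compressor back to the pool.
    static class PooledStream implements AutoCloseable {
        Object compressor = getCompressor();
        public void close() {
            returnCompressor(compressor);
        }
    }

    public static void main(String[] args) throws Exception {
        try (PooledStream s = new PooledStream()) {
            // write compressed data...
        }
        // After close(), the compressor is back in the pool for reuse.
        System.out.println("pooled after close: " + POOL.size());
    }
}
```

In the bug report, the overriding close() skips the equivalent of returnCompressor, so getCompressor allocates a brand-new compressor for every file.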



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052970#comment-15052970
 ] 

Alan Burlison commented on HADOOP-12581:


Ah, OK, I wasn't clear what you were saying :-) I'll make that change & update 
the patch.

> ShellBasedIdMapping needs support for Solaris
> 
>
> Key: HADOOP-12581
> URL: https://issues.apache.org/jira/browse/HADOOP-12581
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-12581.001.patch
>
>
> ShellBasedIdMapping only supports Linux and OSX, support for Solaris needs 
> adding.
> From looking at the Linux support in ShellBasedIdMapping, the same sequences 
> of shell commands should work for Solaris as well so all that's probably 
> needed is to change the implementation of checkSupportedPlatform() to treat 
> Linux and Solaris the same way, plus possibly some renaming of other methods 
> to make it more obvious they are not Linux-only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Dmitry Sivachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052962#comment-15052962
 ] 

Dmitry Sivachenko commented on HADOOP-12581:


I meant that getent usage should be the same across the *BSDs, so your proposed 
OS.contains("BSD") check should work for them all.
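The platform check being discussed in this thread, treating Linux, Solaris, macOS, and the *BSDs alike, might look like the following. This is a hypothetical stand-in for ShellBasedIdMapping.checkSupportedPlatform(), not the attached patch; note that Solaris reports its os.name as "SunOS":

```java
// Sketch of a supported-platform predicate driven by the JVM's os.name value.
// All platforms listed share the same getent-style shell command sequences.
public class PlatformCheckSketch {
    public static boolean isSupported(String osName) {
        return osName.equals("Linux")
            || osName.equals("SunOS")          // Solaris reports "SunOS"
            || osName.startsWith("Mac OS X")
            || osName.contains("BSD");         // FreeBSD, OpenBSD, NetBSD
    }

    public static void main(String[] args) {
        // Check the platform the JVM is actually running on.
        System.out.println(isSupported(System.getProperty("os.name")));
    }
}
```

The contains("BSD") clause is what makes the check cover all the BSD variants with one test, which is the point raised above.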

> ShellBasedIdMapping needs support for Solaris
> 
>
> Key: HADOOP-12581
> URL: https://issues.apache.org/jira/browse/HADOOP-12581
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-12581.001.patch
>
>
> ShellBasedIdMapping only supports Linux and OSX, support for Solaris needs 
> adding.
> From looking at the Linux support in ShellBasedIdMapping, the same sequences 
> of shell commands should work for Solaris as well so all that's probably 
> needed is to change the implementation of checkSupportedPlatform() to treat 
> Linux and Solaris the same way, plus possibly some renaming of other methods 
> to make it more obvious they are not Linux-only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052958#comment-15052958
 ] 

Alan Burlison commented on HADOOP-12581:


See also 
https://commons.apache.org/proper/commons-lang/apidocs/src-html/org/apache/commons/lang3/SystemUtils.html#line.1179

> ShellBasedIdMapping needs support for Solaris
> 
>
> Key: HADOOP-12581
> URL: https://issues.apache.org/jira/browse/HADOOP-12581
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-12581.001.patch
>
>
> ShellBasedIdMapping only supports Linux and OSX, support for Solaris needs 
> adding.
> From looking at the Linux support in ShellBasedIdMapping, the same sequences 
> of shell commands should work for Solaris as well so all that's probably 
> needed is to change the implementation of checkSupportedPlatform() to treat 
> Linux and Solaris the same way, plus possibly some renaming of other methods 
> to make it more obvious they are not Linux-only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052941#comment-15052941
 ] 

Alan Burlison commented on HADOOP-12581:


According to https://netbeans.org/bugzilla/show_bug.cgi?id=145462 "OpenBSD" is 
returned for OpenBSD, not "FreeBSD".






[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-11 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052880#comment-15052880
 ] 

Harsh J commented on HADOOP-12559:
--

That said, if it is possible for the patch not to be specific to re-login during decrypt calls only, it could also solve the NN TGT expiry issue.

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-12559.00.patch
>
>






[jira] [Commented] (HADOOP-12602) TestMetricsSystemImpl#testQSize occasionally fail

2015-12-11 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052866#comment-15052866
 ] 

Masatake Iwasaki commented on HADOOP-12602:
---

Thanks, [~ajisakaa]!

> TestMetricsSystemImpl#testQSize occasionally fail
> -
>
> Key: HADOOP-12602
> URL: https://issues.apache.org/jira/browse/HADOOP-12602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12602.001.patch
>
>
> I have seen this test failed a few times in the past.
> Error Message
> {noformat}
> metricsSink.putMetrics();
> Wanted 2 times:
> -> at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
> But was 1 time:
> -> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:183)
> {noformat}
> Stacktrace
> {noformat}
> org.mockito.exceptions.verification.TooLittleActualInvocations: 
> metricsSink.putMetrics();
> Wanted 2 times:
> -> at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
> But was 1 time:
> -> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:183)
>   at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
> {noformat}
> Standard Output
> {noformat}
> 2015-11-25 19:07:49,867 INFO  impl.MetricsConfig 
> (MetricsConfig.java:loadFirst(115)) - loaded properties from 
> hadoop-metrics2-test.properties
> 2015-11-25 19:07:49,932 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(374)) - Scheduled snapshot period at 10 
> second(s).
> 2015-11-25 19:07:49,932 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:start(192)) - Test metrics system started
> 2015-11-25 19:07:50,134 INFO  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:start(203)) - Sink slowSink started
> 2015-11-25 19:07:50,135 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:registerSink(301)) - Registered sink slowSink
> 2015-11-25 19:07:50,135 INFO  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:start(203)) - Sink dataSink started
> 2015-11-25 19:07:50,136 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:registerSink(301)) - Registered sink dataSink
> 2015-11-25 19:07:50,746 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:stop(211)) - Stopping Test metrics system...
> 2015-11-25 19:07:50,747 INFO  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:publishMetricsFromQueue(140)) - slowSink thread 
> interrupted.
> 2015-11-25 19:07:50,748 INFO  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:publishMetricsFromQueue(140)) - dataSink thread 
> interrupted.
> 2015-11-25 19:07:50,748 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:stop(217)) - Test metrics system stopped.
> {noformat}
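The failure above is the classic race between a test's verification and an asynchronous sink thread that has not yet consumed the second item. As a standalone illustration of the general technique (a hypothetical sketch, not the actual Mockito-based HADOOP-12602 patch; class and method names are invented), the test side can await delivery with a bounded latch before counting invocations:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a consumer thread drains a queue (standing in for
// MetricsSinkAdapter.consume()), and the caller awaits a latch instead of
// asserting immediately after queueing the items.
public class AsyncSinkTestSketch {
    public static int deliverAndCount(int items) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        CountDownLatch delivered = new CountDownLatch(items);
        AtomicInteger putMetricsCalls = new AtomicInteger();

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.take();                 // stands in for the async consume()
                    putMetricsCalls.incrementAndGet();
                    delivered.countDown();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < items; i++) {
            queue.put(i);                         // stands in for snapshots being queued
        }
        // The key step: wait (bounded) for the async side before asserting.
        if (!delivered.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("sink did not consume in time");
        }
        consumer.join();
        return putMetricsCalls.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(deliverAndCount(2)); // prints 2
    }
}
```

With Mockito specifically, the equivalent idea is verification that tolerates a delay rather than asserting an exact state at an arbitrary instant.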





[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs suport for Solaris

2015-12-11 Thread Dmitry Sivachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052836#comment-15052836
 ] 

Dmitry Sivachenko commented on HADOOP-12581:


According to the man pages, it should be the same for NetBSD and OpenBSD (and DragonflyBSD).






[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs suport for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052830#comment-15052830
 ] 

Alan Burlison commented on HADOOP-12581:


What about the other BSD variants? Would {{OS.contains("BSD")}} work for them all?






[jira] [Commented] (HADOOP-12581) ShellBasedIdMapping needs suport for Solaris

2015-12-11 Thread Dmitry Sivachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052799#comment-15052799
 ] 

Dmitry Sivachenko commented on HADOOP-12581:


The same patch will also be valid for FreeBSD; could we please add the corresponding {{OS.startsWith("FreeBSD")}} check as well?
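Purely as a hypothetical illustration of the prefix-based {{os.name}} check being discussed (class and method names are assumed; this is not the actual HADOOP-12581 patch), such a platform test might look like:

```java
// Hypothetical sketch of an os.name-based platform check.
// Names are illustrative only; the real patch may differ.
public class PlatformCheck {
    // Returns true when the os.name value identifies a platform whose
    // id/getent-style shell commands are expected to behave alike.
    public static boolean isSupported(String osName) {
        return osName.startsWith("Linux")
            || osName.startsWith("SunOS")      // Java on Solaris reports "SunOS"
            || osName.startsWith("Mac OS X")
            || osName.startsWith("FreeBSD")
            || osName.startsWith("NetBSD")
            || osName.startsWith("OpenBSD");   // OpenBSD reports "OpenBSD", not "FreeBSD"
    }

    public static void main(String[] args) {
        System.out.println(isSupported(System.getProperty("os.name")));
    }
}
```

A `startsWith` check per known platform avoids the ambiguity of a broad `contains("BSD")` match raised earlier in the thread.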






[jira] [Updated] (HADOOP-12581) ShellBasedIdMapping needs suport for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12581:
---
Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)






[jira] [Updated] (HADOOP-12581) ShellBasedIdMapping needs suport for Solaris

2015-12-11 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12581:
---
Attachment: HADOOP-12581.001.patch






[jira] [Updated] (HADOOP-12635) Adding Append API support for WASB

2015-12-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12635:

Target Version/s: 2.9.0  (was: 2.8.0)
 Component/s: azure

> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: Append API.docx
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  





[jira] [Commented] (HADOOP-12602) TestMetricsSystemImpl#testQSize occasionally fail

2015-12-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052666#comment-15052666
 ] 

Hudson commented on HADOOP-12602:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #685 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/685/])
HADOOP-12602. TestMetricsSystemImpl#testQSize occasionally fails. (aajisaka: 
rev eee0cf4611b02171e8a043f1cc5f7dbad47fc3b4)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* hadoop-common-project/hadoop-common/CHANGES.txt







[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-11 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052650#comment-15052650
 ] 

Harsh J commented on HADOOP-12559:
--

[~zhz] - I've isolated the issue to a problem with TGT lifetime vs. when the NN renews it, so that trace can be ignored in this JIRA. I'll log a separate JIRA for it and follow up here later. Thanks for taking a look!




