[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329919#comment-16329919
 ] 

Ajay Kumar commented on HADOOP-14788:
-

Fixed checkstyle issues in patch v6.

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, 
> HADOOP-14788.006.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.
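To make the report concrete, here is a minimal, self-contained illustration of the problem; the file name and wrapper message are made up, not the actual Credentials code:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

public class WrapLosesTypeDemo {
  // Stand-in for readTokenStorageFile: the original failure is an FNFE,
  // but the catch-and-wrap turns it into a plain IOException.
  static void readTokens(String filename) throws IOException {
    try {
      throw new FileNotFoundException(filename);
    } catch (IOException e) {
      throw new IOException("Exception reading " + filename, e);
    }
  }

  public static void main(String[] args) {
    try {
      readTokens("credentials.token");
    } catch (FileNotFoundException e) {
      System.out.println("FNFE caught");        // never reached: type was lost
    } catch (IOException e) {
      System.out.println("generic IOE: " + e.getMessage());
    }
  }
}
{code}

Callers that catch FileNotFoundException (or other specific IOE subclasses) stop matching once the wrap happens, which is what this issue is about.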



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Attachment: HADOOP-14788.006.patch

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, 
> HADOOP-14788.006.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: (was: HADOOP-15170.001.patch)

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.
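A minimal sketch of what such symlink handling could look like, assuming commons-compress for the tar stream; this is illustrative only, not the actual FileUtil patch:

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class UnTarSymlinkDemo {
  public static void main(String[] args) throws IOException {
    Path outputDir = Paths.get(args[1]);
    try (TarArchiveInputStream tar =
        new TarArchiveInputStream(new FileInputStream(args[0]))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        if (entry.isSymbolicLink()) {
          // Recreate the link entry instead of skipping it.
          Files.createSymbolicLink(outputDir.resolve(entry.getName()),
              Paths.get(entry.getLinkName()));
        }
        // Regular files and directories omitted for brevity.
      }
    }
  }
}
{code}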



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Status: Patch Available  (was: Open)

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.001.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.001.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329898#comment-16329898
 ] 

genericqa commented on HADOOP-12751:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
58s{color} | {color:red} Docker failed to build yetus/hadoop:67e87c9. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12751 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906513/HADOOP-12751-branch-2.7.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13987/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.
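Roughly, the check being objected to behaves like the following (illustrative only, not the exact KerberosName source): any resolved short name containing '@' or '/' is rejected as non-simple, even when the OS itself can resolve it.

{code}
import java.util.regex.Pattern;

public class NonSimpleNameDemo {
  // Names containing '@' or '/' are treated as "non-simple" and rejected.
  private static final Pattern NON_SIMPLE = Pattern.compile("[/@]");

  static boolean isSimple(String shortName) {
    return !NON_SIMPLE.matcher(shortName).find();
  }

  public static void main(String[] args) {
    System.out.println(isSimple("alice"));          // true
    System.out.println(isSimple("user@ad.local"));  // false: rejected by the check
  }
}
{code}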



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329736#comment-16329736
 ] 

genericqa commented on HADOOP-12751:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
36s{color} | {color:red} Docker failed to build yetus/hadoop:67e87c9. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12751 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906513/HADOOP-12751-branch-2.7.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13986/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329726#comment-16329726
 ] 

Konstantin Shvachko edited comment on HADOOP-12751 at 1/18/18 12:09 AM:


Attached a patch for branch-2.7. Cherry-picked all changes from the 2.8 commit 
except {{KDiag}}, because the diagnostic class is not in 2.7.

LMK if that works.


was (Author: shv):
Attached a patch for branch-2.7. Cherry picked all changes from 2.8 commit 
except {{KDiag}}, because diagnostic class is not in 2.7.

LMK of that works.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-12751:
-
Status: Patch Available  (was: Reopened)

Attached a patch for branch-2.7. Cherry-picked all changes from the 2.8 commit 
except {{KDiag}}, because the diagnostic class is not in 2.7.

LMK if that works.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 3.0.0-alpha1, 2.8.0
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-12751:
-
Attachment: HADOOP-12751-branch-2.7.009.patch

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-17 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reopened HADOOP-12751:
--

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch
>
>
> In the scenario of a trust between two directory services, e.g. FreeIPA 
> (ipa.local) and Active Directory (ad.local), users can be made available on 
> the OS level by something like sssd. The trusted users will be of the form 
> 'user@ad.local' while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that users whose names contain '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator for whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13974) S3Guard CLI to support list/purge of pending multipart commits

2018-01-17 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329685#comment-16329685
 ] 

Aaron Fabbri commented on HADOOP-13974:
---

Hey [~ste...@apache.org], I just noticed that some unrelated changes that were 
not in my patch seem to have been committed with this:

[https://github.com/apache/hadoop/commit/35ad9b1dd279b769381ea1625d9bf776c309c5cb]

Can you please take a look and let me know if we need to revert the extra bits?

> S3Guard CLI to support list/purge of pending multipart commits
> --
>
> Key: HADOOP-13974
> URL: https://issues.apache.org/jira/browse/HADOOP-13974
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, 
> HADOOP-13974.003.patch, HADOOP-13974.004.patch, HADOOP-13974.005.patch, 
> HADOOP-13974.006.patch, HADOOP-13974.007.patch
>
>
> The S3A CLI will need to be able to list and delete pending multipart 
> commits. 
> We can do the cleanup already via fs.s3a properties. The CLI will let scripts 
> check for outstanding data (via a different exit code) and permit batch jobs 
> to explicitly trigger cleanups.
> This will become critical with the multipart committer, as there's a 
> significantly higher likelihood of commits remaining outstanding.
> We may also want to be able to enumerate/cancel all pending commits in the FS 
> tree.
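For reference, a hedged sketch of the underlying AWS SDK (v1) calls such a list/purge command would sit on top of; the bucket name comes from the command line and pagination is ignored for brevity:

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;

public class PendingUploadsDemo {
  public static void main(String[] args) {
    String bucket = args[0];
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    for (MultipartUpload u
        : s3.listMultipartUploads(new ListMultipartUploadsRequest(bucket))
            .getMultipartUploads()) {
      // List the outstanding upload...
      System.out.println(u.getKey() + "\t" + u.getUploadId() + "\t" + u.getInitiated());
      // ...and purge it.
      s3.abortMultipartUpload(
          new AbortMultipartUploadRequest(bucket, u.getKey(), u.getUploadId()));
    }
  }
}
{code}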



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329647#comment-16329647
 ] 

Arpit Agarwal commented on HADOOP-14969:


Steve, do you still feel strongly about making this an IOException? Keeping the 
RuntimeException here sounds like the right choice to me. Also, the existing 
code throws an RTE here.



> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch
>
>
> When the DN secure mode configuration is incorrect, it throws the following 
> exception from DataNode#checkSecureConfig:
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went 
> wrong.
> Also, when starting a secure DN with privileged resources, the startup scripts 
> should launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.
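A minimal sketch of what more specific diagnostics could look like; the helper and its inputs are hypothetical, not the actual patch:

{code}
public class SecureDnDiagnostics {
  static String explain(boolean privilegedResources, boolean saslConfigured,
      boolean httpsOnly) {
    StringBuilder sb = new StringBuilder("Cannot start secure DataNode: ");
    if (!privilegedResources && !saslConfigured) {
      sb.append("neither privileged resources (SecureDataNodeStarter/jsvc) nor "
          + "SASL data transfer protection is configured. ");
    }
    if (saslConfigured && !httpsOnly) {
      sb.append("SASL is configured but the HTTP policy is not HTTPS_ONLY. ");
    }
    if (privilegedResources && saslConfigured) {
      sb.append("privileged resources must not be combined with SASL data "
          + "transfer protection. ");
    }
    return sb.toString();
  }
}
{code}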



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15150) in FsShell, UGI params should be overidden through env vars(-D arg)

2018-01-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329535#comment-16329535
 ] 

Bharat Viswanadham commented on HADOOP-15150:
-

+1

Thanks for the patch [~brahmareddy]

> in FsShell, UGI params should be overidden through env vars(-D arg)
> ---
>
> Key: HADOOP-15150
> URL: https://issues.apache.org/jira/browse/HADOOP-15150
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15150-002.patch, HADOOP-15150.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, *so -D args will not 
> take effect*.
> {code}
>   private static void ensureInitialized() {
> if (conf == null) {
>   synchronized(UserGroupInformation.class) {
> if (conf == null) { // someone might have beat us
>   initialize(new Configuration(), false);
> }
>   }
> }
>   }
> {code}
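For illustration, a minimal sketch (not the actual patch) of the behaviour being asked for: make sure the Configuration that already carries the -D overrides is the one UGI is initialized from, rather than a fresh new Configuration():

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.GenericOptionsParser;

public class UgiFromCliDemo {
  public static void main(String[] args) throws Exception {
    // GenericOptionsParser applies -D key=value arguments onto this Configuration.
    Configuration conf = new Configuration();
    new GenericOptionsParser(conf, args);
    // Hand that Configuration to UGI before anything triggers lazy initialization.
    UserGroupInformation.setConfiguration(conf);
    System.out.println(UserGroupInformation.getCurrentUser());
  }
}
{code}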



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329495#comment-16329495
 ] 

genericqa commented on HADOOP-15176:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 17s{color} 
| {color:red} root generated 2 new + 1239 unchanged - 2 fixed = 1241 total (was 
1241) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 16 new + 15 unchanged 
- 0 fixed = 31 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 37s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.s3a.TestS3AAWSCredentialsProvider |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15176 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906454/HADOOP-15176-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6cfa560cdd23 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions

2018-01-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329452#comment-16329452
 ] 

Ajay Kumar commented on HADOOP-14959:
-

[~bharatviswa], thanks for the review. Added [HADOOP-15178] to make changes in 
NetUtils#wrapException.

> DelegationTokenAuthenticator.authenticate() to wrap network exceptions
> --
>
> Key: HADOOP-14959
> URL: https://issues.apache.org/jira/browse/HADOOP-14959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net, security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14959.001.patch, HADOOP-14959.002.patch
>
>
> Network errors raised in {{DelegationTokenAuthenticator.authenticate()}} 
> aren't being wrapped, so they only return the usual limited-value java.net 
> error text. Using {{NetUtils.wrapException()}} can address that.
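A small usage sketch of the wrapping being proposed; the URL handling is illustrative, not the DelegationTokenAuthenticator code:

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.net.NetUtils;

public class WrapNetworkErrorDemo {
  static void open(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      conn.connect();
    } catch (IOException e) {
      // Re-throw with the destination host:port folded into the message.
      throw NetUtils.wrapException(url.getHost(), url.getPort(),
          NetUtils.getHostname(), 0, e);
    }
  }
}
{code}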



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Description: 
NetUtils#wrapException returns a plain IOException if the exception passed to it 
is not of type SocketException, EOFException, NoRouteToHostException, 
SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
By default, it should always return an instance of the same type (a subclass of 
IOException) unless a String constructor is unavailable.

  was:
NetUtils#wrapException returns an IOException if exception passed to it is not 
of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
# By default, it  should always return instance (subclass of IOException) of 
same type unless a String constructor is not available.


> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
> By default, it should always return an instance of the same type (a subclass 
> of IOException) unless a String constructor is unavailable.
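A minimal sketch of the kind of generalization being proposed, assuming reflection on a (String) constructor; names are illustrative, not the final patch:

{code}
import java.io.IOException;
import java.lang.reflect.Constructor;

public class TypePreservingWrapDemo {
  static IOException wrap(String destination, IOException e) {
    try {
      // Look for a (String) constructor on the same exception class.
      Constructor<? extends IOException> ctor =
          e.getClass().getConstructor(String.class);
      IOException wrapped = ctor.newInstance(destination + ": " + e.getMessage());
      wrapped.initCause(e);
      return wrapped;       // same subclass, augmented message
    } catch (ReflectiveOperationException fallback) {
      // No usable String constructor: fall back to a plain IOException.
      return new IOException(destination + ": " + e.getMessage(), e);
    }
  }
}
{code}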



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Environment: (was: NetUtils#wrapException returns an IOException if 
exception passed to it is not of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
By default, it  should always return instance (subclass of IOException) of same 
type unless a String constructor is not available.)

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Description: 
NetUtils#wrapException returns an IOException if exception passed to it is not 
of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
# By default, it  should always return instance (subclass of IOException) of 
same type unless a String constructor is not available.

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> NetUtils#wrapException returns an IOException if exception passed to it is 
> not of type 
> SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
> # By default, it  should always return instance (subclass of IOException) of 
> same type unless a String constructor is not available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15178:
---

 Summary: Generalize NetUtils#wrapException to handle other 
subclasses with String Constructor.
 Key: HADOOP-15178
 URL: https://issues.apache.org/jira/browse/HADOOP-15178
 Project: Hadoop Common
  Issue Type: Bug
 Environment: NetUtils#wrapException returns an IOException if 
exception passed to it is not of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
By default, it  should always return instance (subclass of IOException) of same 
type unless a String constructor is not available.
Reporter: Ajay Kumar
Assignee: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions

2018-01-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329426#comment-16329426
 ] 

Bharat Viswanadham commented on HADOOP-14959:
-

Hi [~ajayydv]

Thanks for your patch.

The UGI class throws KerberosAuthException, which does not match any of the if 
conditions in NetUtils.wrapException, so it falls through to the else branch and 
we throw an IOException wrapping the original exception. I think in this case we 
should throw the original exception, which is KerberosAuthException.

You can modify NetUtils.wrapException to be generic so that it handles such scenarios.

> DelegationTokenAuthenticator.authenticate() to wrap network exceptions
> --
>
> Key: HADOOP-14959
> URL: https://issues.apache.org/jira/browse/HADOOP-14959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net, security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14959.001.patch, HADOOP-14959.002.patch
>
>
> network errors raised in {{DelegationTokenAuthenticator.authenticate()}} 
> aren't being wrapped, so only return the usual limited-value java.net error 
> text. using {{NetUtils.wrapException()}} can address that



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-01-17 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-13761:
-

Assignee: Aaron Fabbri  (was: Steve Loughran)

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
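As a shape for that, a generic sketch of a duration-bounded retry loop; the interface and timings are illustrative, not S3Guard's actual retry policy:

{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class BoundedRetryDemo {
  interface Op<T> { T run() throws IOException; }

  static <T> T retry(Op<T> op, long maxMillis, long sleepMillis)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + maxMillis;
    while (true) {
      try {
        return op.run();
      } catch (IOException e) {
        if (System.currentTimeMillis() >= deadline) {
          throw e;   // surface an error eventually rather than waiting forever
        }
        TimeUnit.MILLISECONDS.sleep(sleepMillis);
      }
    }
  }
}
{code}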



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329310#comment-16329310
 ] 

Aaron Fabbri commented on HADOOP-15176:
---

{quote}

Patch 001, still WiP

{quote}

Ok, I won't commit it yet. ;)  Shout when you feel like it is ready and I'll 
give it a good review.

> Enhance IAM assumed role support in S3A client
> --
>
> Key: HADOOP-15176
> URL: https://issues.apache.org/jira/browse/HADOOP-15176
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
> Environment: 
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15176-001.patch
>
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS json policies somewhat declaratively (no hand 
> coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to S3A client to handle user without full write up the 
> FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15176:

Status: Patch Available  (was: Open)

> Enhance IAM assumed role support in S3A client
> --
>
> Key: HADOOP-15176
> URL: https://issues.apache.org/jira/browse/HADOOP-15176
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
> Environment: 
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15176-001.patch
>
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS json policies somewhat declaratively (no hand 
> coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to S3A client to handle user without full write up the 
> FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15167) [viewfs] ls will fail when user doesn't exist

2018-01-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329274#comment-16329274
 ] 

Brahma Reddy Battula commented on HADOOP-15167:
---

[~vinayrpet] thanks for the detailed explanation. Found some more places to be 
updated.

Uploaded a patch.

> [viewfs] ls will fail when user doesn't exist
> -
>
> Key: HADOOP-15167
> URL: https://issues.apache.org/jira/browse/HADOOP-15167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15167-002.patch, HADOOP-15167.patch
>
>
> Have a secure federated cluster with at least two nameservices.
> Configure the viewfs-related configs.
> When we run the {{ls}} cmd in the HDFS client, we will call the method 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.InternalDirOfViewFs#getFileStatus.
> It will try to get the group of the Kerberos user. If the node does not have 
> this user, it fails.
> UserGroupInformation#getPrimaryGroupName throws the following and exits:
> {code}
> if (groups.isEmpty()) {
>   throw new IOException("There is no primary group for UGI " + this);
> }
> {code}
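One possible shape of a fix, sketched under the assumption that falling back to the user name is acceptable when the OS has no groups for the Kerberos user; illustrative only, not necessarily the attached patch:

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class PrimaryGroupFallbackDemo {
  static String primaryGroupOrUser(UserGroupInformation ugi) {
    try {
      return ugi.getPrimaryGroupName();
    } catch (IOException e) {
      // User unknown to the OS (no groups): degrade gracefully instead of
      // failing the whole listing.
      return ugi.getShortUserName();
    }
  }
}
{code}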



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329275#comment-16329275
 ] 

Steve Loughran commented on HADOOP-15176:
-

Patch 001, still WiP

Changes as noted in the description, with tests which give the assumed role 
restricted access to the bucket (just some subdirs) and then attempt 
mkdir/delete/touch/rename operations underneath to see what happens. Patches the 
S3A FS to not overreact if you can't create an empty directory marker because 
you don't have the permission to, even though the outcome isn't quite what a 
normal FS would deliver.

The "role model" class is in the hadoop-aws JAR tagged as unstable and for 
testing only...useful to be able to dynamically create a policy without making 
a mess of the JSON. I want it in the production JAR so that I can use it in 
some downstream testing too, seeing what other apps fail when you restrict 
access. Not because I expect people to use assumed roles in production much, 
but because the fact that you can do it in tests makes it straightforward to 
simulate restricted permissions.

Fun little thing I still have to try: what if you don't have S3Guard write 
access & try using it in auth mode? I expect it gets inconsistent fast.

> Enhance IAM assumed role support in S3A client
> --
>
> Key: HADOOP-15176
> URL: https://issues.apache.org/jira/browse/HADOOP-15176
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
> Environment: 
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15176-001.patch
>
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS json policies somewhat declaratively (no hand 
> coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to S3A client to handle user without full write up the 
> FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15150) in FsShell, UGI params should be overidden through env vars(-D arg)

2018-01-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329270#comment-16329270
 ] 

Brahma Reddy Battula commented on HADOOP-15150:
---

[~vinayrpet] thanks for the review. Uploaded a patch that uses 
CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION to address 
[~shahrs87]'s comment. 

> in FsShell, UGI params should be overidden through env vars(-D arg)
> ---
>
> Key: HADOOP-15150
> URL: https://issues.apache.org/jira/browse/HADOOP-15150
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15150-002.patch, HADOOP-15150.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, *so -D args will not 
> take effect*.
> {code}
>   private static void ensureInitialized() {
> if (conf == null) {
>   synchronized(UserGroupInformation.class) {
> if (conf == null) { // someone might have beat us
>   initialize(new Configuration(), false);
> }
>   }
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15150) in FsShell, UGI params should be overidden through env vars(-D arg)

2018-01-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-15150:
--
Attachment: HADOOP-15150-002.patch

> in FsShell, UGI params should be overidden through env vars(-D arg)
> ---
>
> Key: HADOOP-15150
> URL: https://issues.apache.org/jira/browse/HADOOP-15150
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15150-002.patch, HADOOP-15150.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, *so -D args will not 
> take effect*.
> {code}
>   private static void ensureInitialized() {
> if (conf == null) {
>   synchronized(UserGroupInformation.class) {
> if (conf == null) { // someone might have beat us
>   initialize(new Configuration(), false);
> }
>   }
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15176:

Attachment: HADOOP-15176-001.patch

> Enhance IAM assumed role support in S3A client
> --
>
> Key: HADOOP-15176
> URL: https://issues.apache.org/jira/browse/HADOOP-15176
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
> Environment: 
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15176-001.patch
>
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS json policies somewhat declaratively (no hand 
> coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to S3A client to handle user without full write up the 
> FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329240#comment-16329240
 ] 

genericqa commented on HADOOP-15177:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
25s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | HADOOP-15177 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906447/HADOOP-15177-branch-2.8.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 4542f086528d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 34f08f7 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13983/testReport/ |
| Max. process+thread count | 76 (vs. ulimit of 5000) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13983/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update the release year to 2018
> ---
>
> Key: HADOOP-15177
> URL: https://issues.apache.org/jira/browse/HADOOP-15177
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-15177-branch-2.7.00.patch, 
> HADOOP-15177-branch-2.8.00.patch, HADOOP-15177.00.patch
>
>
> The release year needs to be updated.
> {noformat}
> $ find . -name "pom.xml" | xargs grep -n 2017
> ./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 

[jira] [Updated] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15177:

Attachment: HADOOP-15177-branch-2.8.00.patch
HADOOP-15177-branch-2.7.00.patch

> Update the release year to 2018
> ---
>
> Key: HADOOP-15177
> URL: https://issues.apache.org/jira/browse/HADOOP-15177
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-15177-branch-2.7.00.patch, 
> HADOOP-15177-branch-2.8.00.patch, HADOOP-15177.00.patch
>
>
> The release year needs to be updated.
> {noformat}
> $ find . -name "pom.xml" | xargs grep -n 2017
> ./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15177:

Status: Patch Available  (was: Open)

> Update the release year to 2018
> ---
>
> Key: HADOOP-15177
> URL: https://issues.apache.org/jira/browse/HADOOP-15177
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-15177.00.patch
>
>
> The release year needs to be updated.
> {noformat}
> $ find . -name "pom.xml" | xargs grep -n 2017
> ./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15177:

Attachment: HADOOP-15177.00.patch

> Update the release year to 2018
> ---
>
> Key: HADOOP-15177
> URL: https://issues.apache.org/jira/browse/HADOOP-15177
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-15177.00.patch
>
>
> The release year needs to be updated.
> {noformat}
> $ find . -name "pom.xml" | xargs grep -n 2017
> ./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HADOOP-15177:
---

Assignee: Bharat Viswanadham

> Update the release year to 2018
> ---
>
> Key: HADOOP-15177
> URL: https://issues.apache.org/jira/browse/HADOOP-15177
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
>
> The release year needs to be updated.
> {noformat}
> $ find . -name "pom.xml" | xargs grep -n 2017
> ./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15142) Register FTP and SFTP as FS services

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329000#comment-16329000
 ] 

genericqa commented on HADOOP-15142:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 11 unchanged - 0 fixed = 16 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15142 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906419/HADOOP-15142.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 0304cebe5b9f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13982/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 

[jira] [Created] (HADOOP-15177) Update the release year to 2018

2018-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15177:
--

 Summary: Update the release year to 2018
 Key: HADOOP-15177
 URL: https://issues.apache.org/jira/browse/HADOOP-15177
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka


The release year needs to be updated.
{noformat}
$ find . -name "pom.xml" | xargs grep -n 2017
./hadoop-project/pom.xml:34:    2017{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-17 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328964#comment-16328964
 ] 

Wei-Chiu Chuang commented on HADOOP-15143:
--

FYI we should cherry pick the commit into branch-3.0, branch-2.8, branch-2.7 
and branch-2.6.

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance

2018-01-17 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15027:

Fix Version/s: (was: 3.0.1)
   (was: 2.9.1)
   (was: 2.10.0)

I reverted this from branch-3.0, branch-2 and branch-2.9 since it broke the 
builds there:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-aliyun: Compilation failure: Compilation failure:
[ERROR] 
/home/jlowe/hadoop/apache/hadoop/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java:[46,30]
 cannot find symbol
[ERROR] symbol:   class BlockingThreadPoolExecutorService
[ERROR] location: package org.apache.hadoop.util
[ERROR] 
/home/jlowe/hadoop/apache/hadoop/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java:[53,30]
 cannot find symbol
[ERROR] symbol:   class SemaphoredDelegatingExecutor
[ERROR] location: package org.apache.hadoop.util
[ERROR] 
/home/jlowe/hadoop/apache/hadoop/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java:[332,30]
 cannot find symbol
[ERROR] symbol:   variable BlockingThreadPoolExecutorService
[ERROR] location: class org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
[ERROR] 
/home/jlowe/hadoop/apache/hadoop/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java:[549,13]
 cannot find symbol
[ERROR] symbol:   class SemaphoredDelegatingExecutor
[ERROR] location: class org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
{noformat}

If this needs to go into those releases, please revisit the patches for those 
branches.  Looks like this patch depends upon HADOOP-15039 which only went into 
3.1.0.

> AliyunOSS: Support multi-thread pre-read to improve sequential read from 
> Hadoop to Aliyun OSS performance
> -
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.
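
As an illustration of the pre-read idea (not the actual AliyunOSSFileReaderTask 
added by the patch), several ranges of the object can be fetched on a thread 
pool and then consumed in order:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PreReadSketch {
  /** Stand-in for a ranged read against OSS. */
  interface RangeReader {
    byte[] read(long start, long len) throws Exception;
  }

  static List<Future<byte[]>> preRead(ExecutorService pool, RangeReader reader,
      long offset, long blockSize, int blocks) {
    List<Future<byte[]>> futures = new ArrayList<>();
    for (int i = 0; i < blocks; i++) {
      final long start = offset + i * blockSize;
      // Each block is fetched concurrently...
      futures.add(pool.submit((Callable<byte[]>) () ->
          reader.read(start, blockSize)));
    }
    // ...and the caller drains the futures in order, so the stream still
    // hands back bytes sequentially.
    return futures;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    RangeReader reader = (start, len) -> new byte[(int) len]; // dummy reader
    for (Future<byte[]> f : preRead(pool, reader, 0, 1 << 20, 4)) {
      System.out.println("got " + f.get().length + " bytes");
    }
    pool.shutdown();
  }
}
{code}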



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328913#comment-16328913
 ] 

genericqa commented on HADOOP-15158:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906405/HADOOP-15158.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5039b75ecfd2 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13981/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13981/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: 

[jira] [Updated] (HADOOP-15142) Register FTP and SFTP as FS services

2018-01-17 Thread Mario Molina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Molina updated HADOOP-15142:
--
Attachment: HADOOP-15142.003.patch

> Register FTP and SFTP as FS services
> 
>
> Key: HADOOP-15142
> URL: https://issues.apache.org/jira/browse/HADOOP-15142
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.0.0
>Reporter: Mario Molina
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HADOOP-15142.001.patch, HADOOP-15142.002.patch, 
> HADOOP-15142.003.patch
>
>
> SFTPFileSystem and FTPFileSystem are not registered as FS services.
> When calling the 'get' or 'newInstance' methods of the FileSystem class, the 
> FS instance cannot be created because the scheme is not registered as a 
> service FS.
> Also, the SFTPFileSystem class doesn't have the getScheme method implemented.
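
For reference, this is the lookup the registration enables (a sketch: host and 
user below are placeholders, and it assumes the service file 
META-INF/services/org.apache.hadoop.fs.FileSystem lists the two classes and 
that getScheme() is implemented, so no explicit fs.sftp.impl setting is 
needed):
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SftpSchemeLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Without the service registration this fails with
    // "No FileSystem for scheme: sftp" unless fs.sftp.impl is configured.
    FileSystem fs = FileSystem.get(URI.create("sftp://user@host/"), conf);
    System.out.println(fs.getClass().getName());
  }
}
{code}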



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-17 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15158:
-
Attachment: HADOOP-15158.004.patch

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.9.1, 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch, HADOOP-15158.004.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials supplied via 
> configuration (core-site.xml). Sometimes an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get the user info (role) from the URI.
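
A hypothetical sketch of the provider shape this implies (the class name is 
made up; the point is the (URI, Configuration) constructor through which the 
role can be read from the URI's user-info part):
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;

public class RoleAwareCredentialsProvider {
  private final String role;

  // Instantiated reflectively with the filesystem URI and Configuration,
  // e.g. for oss://role@bucket/ the user-info part identifies the role.
  public RoleAwareCredentialsProvider(URI uri, Configuration conf) {
    this.role = uri == null ? null : uri.getUserInfo();
  }

  public String getRole() {
    return role;
  }
}
{code}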



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15176:

Environment: 




  was:
Followup HADOOP-15141 with

* Code to generate basic AWS json policies somewhat declaratively (no hand 
coded strings)
* Tests to simulate users with different permissions down the path of a single 
bucket
* test-driven changes to S3A client to handle user without full write up the FS 
tree
* move the new authenticator into the s3a sub-package "auth", where we can put 
more auth stuff (that base s3a package is getting way too big)



Description: 
Followup HADOOP-15141 with

* Code to generate basic AWS json policies somewhat declaratively (no hand 
coded strings)
* Tests to simulate users with different permissions down the path of a single 
bucket
* test-driven changes to S3A client to handle user without full write up the FS 
tree
* move the new authenticator into the s3a sub-package "auth", where we can put 
more auth stuff (that base s3a package is getting way too big)

> Enhance IAM assumed role support in S3A client
> --
>
> Key: HADOOP-15176
> URL: https://issues.apache.org/jira/browse/HADOOP-15176
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
> Environment: 
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS json policies somewhat declaratively (no hand 
> coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to S3A client to handle user without full write up the 
> FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15176) Enhance IAM assumed role support in S3A client

2018-01-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15176:
---

 Summary: Enhance IAM assumed role support in S3A client
 Key: HADOOP-15176
 URL: https://issues.apache.org/jira/browse/HADOOP-15176
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.1.0
 Environment: Followup HADOOP-15141 with

* Code to generate basic AWS json policies somewhat declaratively (no hand 
coded strings)
* Tests to simulate users with different permissions down the path of a single 
bucket
* test-driven changes to S3A client to handle user without full write up the FS 
tree
* move the new authenticator into the s3a sub-package "auth", where we can put 
more auth stuff (that base s3a package is getting way too big)


Reporter: Steve Loughran
Assignee: Steve Loughran






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15171) Hadoop native ZLIB decompressor produces 0 bytes for some input

2018-01-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328710#comment-16328710
 ] 

Steve Loughran commented on HADOOP-15171:
-

Irrespective of the fix, sounds like the hadoop decompressor code should do a 
followup check that the #of bytes returned is non-zero
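
A minimal sketch of that follow-up check, assuming the caller drives an 
org.apache.hadoop.io.compress.Decompressor directly (illustrative, not a 
patch):
{code}
import java.io.IOException;

import org.apache.hadoop.io.compress.Decompressor;

public final class DecompressCheck {
  // Fail fast if the native decompressor returns 0 bytes even though it has
  // neither finished nor asked for more input.
  static int decompressChecked(Decompressor d, byte[] dest, int off, int len)
      throws IOException {
    int n = d.decompress(dest, off, len);
    if (n == 0 && !d.finished() && !d.needsInput()) {
      throw new IOException("Decompressor produced 0 bytes without finishing"
          + " or requesting more input");
    }
    return n;
  }
}
{code}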

> Hadoop native ZLIB decompressor produces 0 bytes for some input
> ---
>
> Key: HADOOP-15171
> URL: https://issues.apache.org/jira/browse/HADOOP-15171
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sergey Shelukhin
>Priority: Critical
>
> While reading some ORC file via direct buffers, Hive gets a 0-sized buffer 
> for a particular compressed segment of the file. We narrowed it down to 
> Hadoop native ZLIB codec; when the data is copied to heap-based buffer and 
> the JDK Inflater is used, it produces correct output. Input is only 127 bytes 
> so I can paste it here.
> All the other (many) blocks of the file are decompressed without problems by 
> the same code.
> {noformat}
> 2018-01-13T02:47:40,815 TRACE [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Decompressing 
> 127 bytes to dest buffer pos 524288, limit 786432
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: The codec has 
> produced 0 bytes for 127 bytes at pos 0, data hash 1719565039: [e3 92 e1 62 
> 66 60 60 10 12 e5 98 e0 27 c4 c7 f1 e8 12 8f 40 c3 7b 5e 89 09 7f 6e 74 73 04 
> 30 70 c9 72 b1 30 14 4d 60 82 49 37 bd e7 15 58 d0 cd 2f 31 a1 a1 e3 35 4c fa 
> 15 a3 02 4c 7a 51 37 bf c0 81 e5 02 12 13 5a b6 9f e2 04 ea 96 e3 62 65 b8 c3 
> b4 01 ae fd d0 72 01 81 07 87 05 25 26 74 3c 5b c9 05 35 fd 0a b3 03 50 7b 83 
> 11 c8 f2 c3 82 02 0f 96 0b 49 34 7c fa ff 9f 2d 80 01 00
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Fell back to 
> JDK decompressor with memcopy; got 155 bytes
> {noformat}
> Hadoop version is based on 3.1 snapshot.
> The size of libhadoop.so is 824403 bytes, and libgplcompression is 78273 
> FWIW. Not sure how to extract versions from those. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328656#comment-16328656
 ] 

Steve Loughran commented on HADOOP-15141:
-

Oh, OK. I had just been working on some changes :)

* move the new authenticator to a new package, s3a.auth
* add a class alongside, "RoleModel", to actually build up the JSON to pump out 
as valid AWS role policy
* tests to understand what permissions are needed
* fix to innerDelete() so that if you can't create a mock parent dir marker on 
a directory delete, it doesn't trigger a failure

The latter means that if I only have write access to /user/stevel and I delete 
/user/stevel, if the attempt to create a /user/ marker fails, the delete still 
succeeds.

Essentially, I'm adding support into S3A to handle the situation "user doesn't 
have write access to everywhere" via test-and-see, using this for the tests, 
with RoleModel there to set up the statements & policies properly. I'll no 
doubt need to play with: rename, MPU (large files, commit, commit -> abort), 
s3guard.

Apart from the move to the new package, no other changes to the authenticator 
itself; that's just adding the ability to do the user lockdown.

Let me merge mine back in and do a followup

 

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch, 
> HADOOP-15141-006.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard
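
A hedged example of how a client might wire this up once the patch is in 
(property names other than fs.s3a.assumed.role.arn, the provider class name, 
and the ARN itself are assumptions/placeholders here):
{code}
import org.apache.hadoop.conf.Configuration;

public class AssumedRoleConfig {
  public static Configuration withAssumedRole(Configuration conf) {
    // ARN of the role to assume (placeholder value).
    conf.set("fs.s3a.assumed.role.arn",
        "arn:aws:iam::123456789012:role/example-role");
    // Select the new provider via the usual credential provider list.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider");
    return conf;
  }
}
{code}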



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15157) Zookeeper authentication related properties to support CredentialProviders

2018-01-17 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328567#comment-16328567
 ] 

Gergo Repas commented on HADOOP-15157:
--

Thanks [~yufeigu] for the commit! No, I don't need it on other branches.
Thanks [~lmccay] for the review!

> Zookeeper authentication related properties to support CredentialProviders
> --
>
> Key: HADOOP-15157
> URL: https://issues.apache.org/jira/browse/HADOOP-15157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15157.000.patch, HADOOP-15157.001.patch, 
> HADOOP-15157.002.patch
>
>
> The hadoop.zk.auth and ha.zookeeper.auth properties currently support either 
> plain-text authentication info (in scheme:value format), or a 
> @/path/to/file notation which points to a plain-text file.
> This ticket proposes that the hadoop.zk.auth and ha.zookeeper.auth properties 
> can be retrieved via the CredentialProvider API that's been configured using 
> credential.provider.path, with a fallback to the clear-text value 
> or @/path/to/file notation.
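
For example (a sketch, assuming a JCEKS keystore has been provisioned with an 
entry named "ha.zookeeper.auth", e.g. via "hadoop credential create"; the 
keystore path is a placeholder), the value would then be resolved through 
Configuration.getPassword():
{code}
import org.apache.hadoop.conf.Configuration;

public class ZkAuthLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point the CredentialProvider API at the keystore holding the secret.
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/tmp/zk.jceks");
    // Falls back to the clear-text property value if no provider has it.
    char[] auth = conf.getPassword("ha.zookeeper.auth");
    System.out.println(auth == null ? "not found" : new String(auth));
  }
}
{code}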



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance

2018-01-17 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15027:
---
Target Version/s: 3.1.0, 2.10.0, 2.9.1, 3.0.1  (was: 3.1.0, 2.9.1, 3.0.1)

> AliyunOSS: Support multi-thread pre-read to improve sequential read from 
> Hadoop to Aliyun OSS performance
> -
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance

2018-01-17 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15027:
---
  Resolution: Fixed
Release Note: Support multi-thread pre-read in AliyunOSSInputStream to 
improve the sequential read performance from Hadoop to Aliyun OSS. 
  Status: Resolved  (was: Patch Available)

> AliyunOSS: Support multi-thread pre-read to improve sequential read from 
> Hadoop to Aliyun OSS performance
> -
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance

2018-01-17 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15027:
---
Fix Version/s: 3.0.1
   2.9.1
   2.10.0
   3.1.0

> AliyunOSS: Support multi-thread pre-read to improve sequential read from 
> Hadoop to Aliyun OSS performance
> -
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328474#comment-16328474
 ] 

Hudson commented on HADOOP-15141:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13508 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13508/])
HADOOP-15141 Support IAM Assumed roles in S3A. Contributed by Steve (fabbri: 
rev 268ab4e0279b3e40f4a627d3dfe91e2a3523a8cc)
* (add) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/S3xLoginHelper.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACredentialsInURL.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBLocalClientFactory.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCpAssumedRole.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMiscOperations.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestAssumeRole.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AWSCredentialProviderList.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AssumedRoleCredentialProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md


> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch, 
> HADOOP-15141-006.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance

2018-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328448#comment-16328448
 ] 

Hudson commented on HADOOP-15027:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13507 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13507/])
HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve (sammi.chen: 
rev 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/ReadBuffer.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java


> AliyunOSS: Support multi-thread pre-read to improve sequential read from 
> Hadoop to Aliyun OSS performance
> -
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-17 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-15141:
--
   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution [~ste...@apache.org].

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch, 
> HADOOP-15141-006.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-17 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328447#comment-16328447
 ] 

Aaron Fabbri commented on HADOOP-15141:
---

Test failures look unrelated (TestCopyFromLocal works for me and was not 
changed here).

All my testing in US West 2 looks good (including dynamo and some negative 
tests with restricted roles).

 

 

 

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch, 
> HADOOP-15141-006.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org