[jira] [Comment Edited] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148486#comment-16148486 ]

Sivaguru Sankaridurg edited comment on HADOOP-14820 at 8/31/17 5:53 AM:


*NativeAzureFileSystem.java*
*L1714* - Thanks for pointing this out. The fix to {{getAncestor}} should take 
care of this, and is the correct fix. The call to {{performAuthCheck}} on 
L1714 is required.

*L2426* - This has been fixed. Please see changes to {{getAncestor}} in 
003.patch. The naming is consistent with the definition mentioned at 
[https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Permission_Checks].
bq. Ancestor: The last existing component of the requested path. For example, 
for the path /foo/bar/baz, the ancestor path is /foo/bar if /foo/bar exists. 
The ancestor path is /foo if /foo exists but /foo/bar does not exist.

*L2459* - Same as above.
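As a hedged illustration of the quoted definition (hypothetical helper names, not the actual {{NativeAzureFileSystem}} code), the ancestor can be computed by walking up from the requested path's parent until an existing component is found:

```java
import java.util.function.Predicate;

// Sketch only: computes the "ancestor" per the HDFS Permissions Guide
// definition quoted above. The exists() predicate stands in for a real
// filesystem lookup.
public class AncestorSketch {
    static String parent(String path) {
        int idx = path.lastIndexOf('/');
        return idx <= 0 ? "/" : path.substring(0, idx);
    }

    // Returns the last existing component of the requested path.
    static String getAncestor(String path, Predicate<String> exists) {
        String current = parent(path);
        while (!current.equals("/") && !exists.test(current)) {
            current = parent(current);
        }
        return current;
    }

    public static void main(String[] args) {
        // Only /foo exists, so the ancestor of /foo/bar/baz is /foo.
        Predicate<String> onlyFoo = p -> p.equals("/foo");
        System.out.println(getAncestor("/foo/bar/baz", onlyFoo)); // /foo
    }
}
```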






>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14820.001.patch, HADOOP-14820.002.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 root root 4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}
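The desired semantics can be sketched as follows. This is a hypothetical simulation, not the actual patch: when the full hierarchy already exists, mkdirs succeeds without any authorization check; only when something must be created is the (existing) ancestor checked for write access.

```java
import java.util.HashSet;
import java.util.Set;

public class MkdirsSketch {
    // Simulated namespace; a real filesystem would back this.
    static Set<String> dirs = new HashSet<>(
        Set.of("/", "/home", "/home/hdiuser", "/home/hdiuser/prefix"));
    // Simulated permission model: directories the caller may write into
    // (prefix is mode 555, so it is absent here).
    static Set<String> writable = new HashSet<>(
        Set.of("/", "/home", "/home/hdiuser"));

    static boolean mkdirs(String path) {
        if (dirs.contains(path)) {
            return true; // hierarchy already exists: no authorization check
        }
        // Something must be created: find the last existing component
        // (the ancestor) and authorize against it.
        String ancestor = path;
        while (!dirs.contains(ancestor)) {
            int idx = ancestor.lastIndexOf('/');
            ancestor = idx <= 0 ? "/" : ancestor.substring(0, idx);
        }
        if (!writable.contains(ancestor)) {
            return false; // mirrors "mkdir: cannot create directory: Permission denied"
        }
        dirs.add(path); // sketch: creates only the leaf, enough to illustrate auth
        return true;
    }

    public static void main(String[] args) {
        System.out.println(mkdirs("/home/hdiuser/prefix"));   // exists: succeeds, no auth
        System.out.println(mkdirs("/home/hdiuser/prefix/1")); // ancestor not writable: fails
    }
}
```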



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148479#comment-16148479
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

I think this fixes the immediate problem, but I'm wondering if we shouldn't 
purge the usage of hadoop_add_entry and hadoop_verify_entry while we're here... 
especially since it is only used by the HOT code anyway.

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
> Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch, 
> HADOOP-14498.003.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that Hadoop tool modules have a single "-" in their names, so that 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].
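One way two such names can collide is a membership test based on substring matching. The following is a hypothetical illustration only (the real logic lives in Hadoop's shell functions such as {{hadoop_add_entry}}, which this does not reproduce): a substring check conflates _hadoop-azure_ with _hadoop-azure-datalake_, so the shorter name is wrongly treated as already present and never added.

```java
public class OptionalToolsSketch {
    // Hypothetical "already added?" check: substring matching cannot
    // distinguish hadoop-azure from hadoop-azure-datalake.
    static boolean alreadyPresent(String addedList, String candidate) {
        return addedList.contains(candidate);
    }

    public static void main(String[] args) {
        String added = " hadoop-azure-datalake ";
        // hadoop-azure is NOT in the list, but the substring test says it
        // is, so it would be skipped and never reach the classpath.
        System.out.println(alreadyPresent(added, "hadoop-azure"));
    }
}
```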






[jira] [Commented] (HADOOP-14781) Clarify that HADOOP_CONF_DIR shouldn't actually be set in hadoop-env.sh

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148459#comment-16148459
 ] 

Hadoop QA commented on HADOOP-14781:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884295/HADOOP-14781.00.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 87ab8c726738 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71bbb86 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13142/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13142/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clarify that HADOOP_CONF_DIR shouldn't actually be set in hadoop-env.sh
> ---
>
> Key: HADOOP-14781
> URL: https://issues.apache.org/jira/browse/HADOOP-14781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14781.00.patch
>
>
> We should be more explicit in the documentation in hadoop-env.sh that 
> HADOOP_CONF_DIR:
> * shouldn't actually be set in this file
> * is really intended for something "outside" of this file to set
> * will break --config if the pointed to configs don't also set 
> HADOOP_CONF_DIR appropriately
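
As a rough illustration of the clarification being proposed (the exact wording lives in HADOOP-14781.00.patch, not here), the comment block in hadoop-env.sh might read something like:

```shell
# Illustrative sketch only -- hypothetical wording, not the committed text.
#
# HADOOP_CONF_DIR is intended to be set *outside* this file, either in the
# caller's environment or via the --config option.  Do not assign it here:
# if --config points at a config directory whose hadoop-env.sh re-sets
# HADOOP_CONF_DIR to somewhere else, the two settings conflict and
# --config effectively breaks.
#
# export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}   # reference only
```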



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148452#comment-16148452
 ] 

Hudson commented on HADOOP-14670:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12281 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12281/])
HADOOP-14670. Increase minimum cmake version for all platforms (aw: rev 
71bbb86d69ac474596f5619d22718e9f7ff5f9dc)
* (edit) dev-support/docker/Dockerfile
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/CMakeLists.txt
* (edit) hadoop-common-project/hadoop-common/HadoopCommon.cmake
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
* (edit) start-build-env.sh
* (edit) BUILDING.txt
* (edit) hadoop-tools/hadoop-pipes/src/CMakeLists.txt
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/bzip2/org_apache_hadoop_io_compress_bzip2.h
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
* (edit) hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
* (edit) hadoop-common-project/hadoop-common/src/CMakeLists.txt
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/CMakeLists.txt


> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to improved support for newer compilers across all 
> platforms.
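
As a rough sketch of the proposed floor (the actual edits are in the attached patches, which touch many CMakeLists.txt files), each affected build file would declare the minimum up front, after which pre-3.1 special cases can simply be deleted:

```cmake
# Hypothetical sketch, not the committed change.
cmake_minimum_required(VERSION 3.1 FATAL_ERROR)

# With 3.1 as the floor, per-version guards of this (illustrative) shape
# become dead code and can be removed:
# if(CMAKE_VERSION VERSION_LESS "3.1")
#   ...fallback behavior for older CMake policies...
# endif()
```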






[jira] [Updated] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14670:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
 Release Note: CMake v3.1.0 is now the minimum version required to build 
Apache Hadoop's native components.
   Status: Resolved  (was: Patch Available)

Thanks! 

Committing to trunk.

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to  compiler support for newer compilers across all 
> platforms.






[jira] [Commented] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A

2017-08-30 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148373#comment-16148373
 ] 

Aaron Fabbri commented on HADOOP-13421:
---

Got this working, with a v1-compatibility config knob (off by default).  Now I'm 
working on {{InconsistentAmazonS3Client}}; I need to instrument the v2 APIs with 
the failure-injection logic.

> Switch to v2 of the S3 List Objects API in S3A
> --
>
> Key: HADOOP-13421
> URL: https://issues.apache.org/jira/browse/HADOOP-13421
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Aaron Fabbri
>Priority: Minor
>
> Unlike [version 
> 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the 
> S3 List Objects API, [version 
> 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by 
> default does not fetch object owner information, which S3A doesn't need 
> anyway. By switching to v2, there will be less data to transfer/process. 
> Also, it should be more robust when listing a versioned bucket with "a large 
> number of delete markers" ([according to 
> AWS|https://aws.amazon.com/releasenotes/Java/0735652458007581]).
> Methods in S3AFileSystem that use this API include:
> * getFileStatus(Path)
> * innerDelete(Path, boolean)
> * innerListStatus(Path)
> * innerRename(Path, Path)
> Requires AWS SDK 1.10.75 or later.
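
Since the description covers the v1-to-v2 switch only in prose, here is a minimal, self-contained sketch of the pagination difference: v2 resumes from an opaque continuation token, while v1 pages by the last returned key (the marker). {{FakeS3}}, {{Page}}, and the integer-string token below are illustrative stand-ins, not the AWS SDK — the real client pages via the SDK's ListObjectsV2 continuation token.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListV2Sketch {
    // One page of results; nextToken is null when the listing is complete.
    static final class Page {
        final List<String> keys;
        final String nextToken;
        Page(List<String> keys, String nextToken) {
            this.keys = keys;
            this.nextToken = nextToken;
        }
    }

    // Stand-in for the S3 client; mirrors ListObjectsV2's shape: the caller
    // hands back the token unchanged instead of tracking a key marker.
    static final class FakeS3 {
        private final List<String> all;
        private final int pageSize;
        FakeS3(List<String> all, int pageSize) {
            this.all = all;
            this.pageSize = pageSize;
        }
        Page listObjectsV2(String continuationToken) {
            int start = (continuationToken == null) ? 0 : Integer.parseInt(continuationToken);
            int end = Math.min(start + pageSize, all.size());
            String next = (end < all.size()) ? Integer.toString(end) : null;
            return new Page(all.subList(start, end), next);
        }
    }

    // The loop shape a caller such as S3A would use for v2 paging.
    static List<String> listAll(FakeS3 s3) {
        List<String> out = new ArrayList<>();
        String token = null;
        do {
            Page page = s3.listObjectsV2(token);
            out.addAll(page.keys);
            token = page.nextToken;
        } while (token != null);
        return out;
    }

    public static void main(String[] args) {
        FakeS3 s3 = new FakeS3(Arrays.asList("a/1", "a/2", "b/1", "b/2", "c/1"), 2);
        System.out.println(listAll(s3).size());  // all 5 keys, fetched in 3 pages
    }
}
```

Because v2 never echoes object-owner metadata unless asked, each page also carries less data than its v1 equivalent, which is the transfer saving the description refers to.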






[jira] [Commented] (HADOOP-14781) Clarify that HADOOP_CONF_DIR shouldn't actually be set in hadoop-env.sh

2017-08-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148371#comment-16148371
 ] 

Andrew Wang commented on HADOOP-14781:
--

+1. Could you retrigger the build? The precommit failures look unrelated.

> Clarify that HADOOP_CONF_DIR shouldn't actually be set in hadoop-env.sh
> ---
>
> Key: HADOOP-14781
> URL: https://issues.apache.org/jira/browse/HADOOP-14781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14781.00.patch
>
>
> We should be more explicit in the documentation in hadoop-env.sh that 
> HADOOP_CONF_DIR:
> * shouldn't actually be set in this file
> * is really intended for something "outside" of this file to set
> * will break --config if the pointed to configs don't also set 
> HADOOP_CONF_DIR appropriately






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148259#comment-16148259
 ] 

Chris Douglas commented on HADOOP-14670:


Checked that this patch removes the special handling from YARN-5719, which it makes redundant.

+1 skimmed the patch and lgtm.

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to improved support for newer compilers across all 
> platforms.






[jira] [Commented] (HADOOP-14816) Update Dockerfile to use Xenial

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148177#comment-16148177
 ] 

Hadoop QA commented on HADOOP-14816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 12s{color} | {color:orange} The patch generated 2 new + 104 unchanged - 0 
fixed = 106 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884557/HADOOP-14816.00.patch 
|
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 57003e236c0f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3e0e203 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13141/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| shelldocs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13141/artifact/patchprocess/diff-patch-shelldocs.txt
 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13141/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |





> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148159#comment-16148159
 ] 

Allen Wittenauer commented on HADOOP-14670:
---

HADOOP-14816 has nearly the same Dockerfile changes.

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to improved support for newer compilers across all 
> platforms.






[jira] [Commented] (HADOOP-14816) Update Dockerfile to use Xenial

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148157#comment-16148157
 ] 

Hadoop QA commented on HADOOP-14816:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13141/console in case of 
problems.


> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.






[jira] [Updated] (HADOOP-14816) Update Dockerfile to use Xenial

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14816:
--
Hadoop Flags: Incompatible change
  Status: Patch Available  (was: Open)

> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.






[jira] [Updated] (HADOOP-14816) Update Dockerfile to use Xenial

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14816:
--
Attachment: HADOOP-14816.00.patch

> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.






[jira] [Commented] (HADOOP-14809) hadoop-aws shell profile not being built

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148142#comment-16148142
 ] 

Hadoop QA commented on HADOOP-14809:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HADOOP-13345 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
0s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} HADOOP-13345 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14809 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884544/HADOOP-14809-HADOOP-13345-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux fd65d48aa325 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 6b18a5d |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13139/artifact/patchprocess/patch-mvninstall-hadoop-dist.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13139/testReport/ |
| modules | C: hadoop-tools/hadoop-aws hadoop-dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13139/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |





> hadoop-aws shell profile not being built
> 
>
> Key: HADOOP-14809
> URL: https://issues.apache.org/jira/browse/HADOOP-14809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>   

[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148113#comment-16148113
 ] 

Hadoop QA commented on HADOOP-14520:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 84 unchanged - 2 fixed = 85 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884545/HADOOP_14520_09.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1001f0af9345 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4148023 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13140/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13140/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13140/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |





> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: 

[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148106#comment-16148106
 ] 

Hadoop QA commented on HADOOP-14670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-tools/hadoop-pipes . 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-tools/hadoop-pipes . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.net.TestDNS |
|   | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14670 |
| JIRA 

[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148104#comment-16148104
 ] 

Hadoop QA commented on HADOOP-14670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-tools/hadoop-pipes . 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-tools/hadoop-pipes . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 50s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14670 |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-14821) Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if permission is denied to some files

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14821:

Summary: Executing the command 'hdfs 
-Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if 
permission is denied to some files  (was: Executing the command 'hdfs 
-Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;)

> Executing the command 'hdfs 
> -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if 
> permission is denied to some files
> ---
>
> Key: HADOOP-14821
> URL: https://issues.apache.org/jira/browse/HADOOP-14821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hdfs-client, security
>Affects Versions: 2.7.3
> Environment: hadoop-common-2.7.3.2.6.0.11-1
>Reporter: Ernani Pereira de Mattos Junior
>Priority: Critical
>  Labels: features
>
> === 
> Request Use Case: 
> UC1: 
> The customer has the path to a directory and its subdirectories, which are full 
> of keys. The customer knows that they do not have access to all of the keys 
> but, ignoring this, they build a list of the keys. 
> UC1.2: 
> The customer tries each key from the list in FIFO order. If access is granted 
> locally, they attempt the s3a login with that key. 
> UC1.3: 
> The customer tries each key from the list in FIFO order. If access is not 
> granted locally, they skip the s3a login and try the next key on the list. 
> ===
> For now, UC1.3 fails with the exception below and does not try the next key:
> {code}
> $ hdfs  --loglevel DEBUG dfs 
> -Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
>  -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/
> Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=502549376, access=READ, 
> inode="/tmp/aws.jceks":admin:hdfs:-rwx--
> {code}
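The skip-and-continue behavior requested above can be sketched language-neutrally: walk the configured credential provider paths in FIFO order, and treat a local permission failure as "skip this provider" rather than as a fatal error. This is an illustrative Python sketch only, not the actual hadoop-common code; the names `first_usable_provider`, `can_read`, and `make_checker` are hypothetical stand-ins for the HDFS permission check.

```python
def first_usable_provider(provider_paths, can_read):
    """Walk credential provider paths in FIFO order and return the first
    one that is locally readable; providers we are denied access to are
    skipped instead of aborting the whole lookup."""
    for path in provider_paths:
        try:
            if can_read(path):
                return path
        except PermissionError:
            # Local permission denied: skip this provider, try the next.
            continue
    return None


def make_checker(denied):
    """Hypothetical local access check standing in for the HDFS test."""
    def can_read(path):
        if path in denied:
            raise PermissionError(path)
        return True
    return can_read
```

With the two providers from the command above and read access denied on the first, the sketch would fall through to the second path instead of failing the whole command.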



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14821) Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14821:

Description: 
=== 
Request Use Case: 
UC1: 
The customer has the path to a directory and its subdirectories, which are full 
of keys. The customer knows that they do not have access to all of the keys 
but, ignoring this, they build a list of the keys. 

UC1.2: 
The customer tries each key from the list in FIFO order. If access is granted 
locally, they attempt the s3a login with that key. 

UC1.3: 
The customer tries each key from the list in FIFO order. If access is not 
granted locally, they skip the s3a login and try the next key on the list. 
===

For now, UC1.3 fails with the exception below and does not try the next key:
{code}
$ hdfs  --loglevel DEBUG dfs 
-Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
 -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/

Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=502549376, access=READ, 
inode="/tmp/aws.jceks":admin:hdfs:-rwx--
{code}

  was:
=== 
Request Use Case: 
UC1: 
The customer has the path to a directory and subdirectories full of keys. The 
customer knows that he does not have the access to all the keys, but ignoring 
this problem, the customer makes a list of the keys. 

UC1.2: 
The customer in a FIFO manner, try his access to the key provided on the list. 
If the access is granted locally then he can try the login on the s3a. 

UC1.2: 
The customer in a FIFO manner, try his access to the key provided on the list. 
If the access is not granted locally then he will skip the login on the s3a and 
try the next key on the list. 
===

For now, the UC1.2 fails with below exception and does not try the next key:

$ hdfs  --loglevel DEBUG dfs 
-Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
 -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/

Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=502549376, access=READ, 
inode="/tmp/aws.jceks":admin:hdfs:-rwx--


> Executing the command 'hdfs 
> -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;
> --
>
> Key: HADOOP-14821
> URL: https://issues.apache.org/jira/browse/HADOOP-14821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hdfs-client, security
>Affects Versions: 2.7.3
> Environment: hadoop-common-2.7.3.2.6.0.11-1
>Reporter: Ernani Pereira de Mattos Junior
>Priority: Critical
>  Labels: features
>
> === 
> Request Use Case: 
> UC1: 
> The customer has the path to a directory and its subdirectories, which are full 
> of keys. The customer knows that they do not have access to all of the keys 
> but, ignoring this, they build a list of the keys. 
> UC1.2: 
> The customer tries each key from the list in FIFO order. If access is granted 
> locally, they attempt the s3a login with that key. 
> UC1.3: 
> The customer tries each key from the list in FIFO order. If access is not 
> granted locally, they skip the s3a login and try the next key on the list. 
> ===
> For now, UC1.3 fails with the exception below and does not try the next key:
> {code}
> $ hdfs  --loglevel DEBUG dfs 
> -Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
>  -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/
> Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=502549376, access=READ, 
> inode="/tmp/aws.jceks":admin:hdfs:-rwx--
> {code}






[jira] [Updated] (HADOOP-14821) Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14821:

Component/s: security
 fs/s3

> Executing the command 'hdfs 
> -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;
> --
>
> Key: HADOOP-14821
> URL: https://issues.apache.org/jira/browse/HADOOP-14821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hdfs-client, security
>Affects Versions: 2.7.3
> Environment: hadoop-common-2.7.3.2.6.0.11-1
>Reporter: Ernani Pereira de Mattos Junior
>Priority: Critical
>  Labels: features
>
> === 
> Request Use Case: 
> UC1: 
> The customer has the path to a directory and its subdirectories, which are full 
> of keys. The customer knows that they do not have access to all of the keys 
> but, ignoring this, they build a list of the keys. 
> UC1.2: 
> The customer tries each key from the list in FIFO order. If access is granted 
> locally, they attempt the s3a login with that key. 
> UC1.3: 
> The customer tries each key from the list in FIFO order. If access is not 
> granted locally, they skip the s3a login and try the next key on the list. 
> ===
> For now, UC1.3 fails with the exception below and does not try the next key:
> $ hdfs  --loglevel DEBUG dfs 
> -Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
>  -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/
> Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=502549376, access=READ, 
> inode="/tmp/aws.jceks":admin:hdfs:-rwx--






[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14802:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

+1 for branch-2 patch; applied



> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch, 
> HADOOP-14802.003.patch, HADOOP-14802-branch-2-001.patch.txt
>
>
> This JIRA tracks adding support for using the container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys.






[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-08-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148096#comment-16148096
 ] 

Chen Liang commented on HADOOP-14804:
-

Thanks [~Hongfei Chen] for the catch!

It seems there are several more places that have the same issue, such as (but 
not limited to)
{code}
hadoop.http.staticuser.user
hadoop.registry.rm.enabled
hadoop.registry.zk.root
{code}

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch
>
>
> The descriptions of the "HTTP CORS" parameters come before the names:
> {code}
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> {code}
> ..
> but the description should follow the value, as in the other properties.
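For comparison, the convention the comment refers to puts the description after the value; a corrected version of the property quoted above would look like the following. This is an illustrative sketch reconstructed from the name and value given in the comment, not a verbatim excerpt of the patched core-default.xml.

```xml
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
</property>
```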






[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Patch Available  (was: Open)

fixes javadoc issues.

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch, HADOOP_14520_09.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks exceeds 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. As with the configuration for page 
> blobs, the client needs to specify the HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155
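The greedy selection step described above — among contiguous block sequences whose total length stays under 4M, pick the longest — can be sketched with a sliding window over the block lengths. This is a language-neutral illustration of the selection step only, not the hadoop-azure implementation; the function name and the reading of "longest" as "largest total byte length" are assumptions.

```python
def choose_compaction_range(block_lengths, max_bytes=4 * 1024 * 1024):
    """Return (start, end) indices (inclusive) of the contiguous run of
    blocks with the largest total length below max_bytes, or None if no
    run of at least two blocks qualifies (compacting one block gains
    nothing)."""
    best, best_total = None, 0
    start, total = 0, 0
    for end, length in enumerate(block_lengths):
        total += length
        # Shrink the window from the left until it fits under the cap.
        while total >= max_bytes and start <= end:
            total -= block_lengths[start]
            start += 1
        if end > start and total > best_total:
            best, best_total = (start, end), total
    return best
```

Because all block lengths are positive, the window never needs to re-expand from the left, so one pass over the block list suffices.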






[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Open  (was: Patch Available)

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch, HADOOP_14520_09.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks exceeds 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. As with the configuration for page 
> blobs, the client needs to specify the HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Attachment: HADOOP_14520_09.patch

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch, HADOOP_14520_09.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks exceeds 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. As with the configuration for page 
> blobs, the client needs to specify the HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Updated] (HADOOP-14809) hadoop-aws shell profile not being built

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14809:

Attachment: HADOOP-14809-HADOOP-13345-003.patch

Patch 003: Allen's patch to hadoop-aws/pom.xml and my checks to 
hadoop-dist/pom.xml

I can confirm Allen's patch restores the hadoop-aws.sh file, so I am happy to +1 
that patch. However, I'd like to see if we can get a test into the build so we 
can verify that the problem hasn't returned, which is what my part of patch 003 
tries to do. It does this in the "mvn verify" phase. Yetus didn't like it last 
time, though...

> hadoop-aws shell profile not being built
> 
>
> Key: HADOOP-14809
> URL: https://issues.apache.org/jira/browse/HADOOP-14809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14809.00.patch, 
> HADOOP-14809-HADOOP-13345-002.patch, HADOOP-14809-HADOOP-13345-003.patch, 
> HADOOP-14809.HADOOP-13345.00.patch
>
>
> As discussed on the hadoop-common list, the creation of the s3guard shell 
> profile is stopping the hadoop-aws profile from being created, so you can't set 
> up the classpath properly there






[jira] [Updated] (HADOOP-14809) hadoop-aws shell profile not being built

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14809:

Status: Patch Available  (was: Open)

> hadoop-aws shell profile not being built
> 
>
> Key: HADOOP-14809
> URL: https://issues.apache.org/jira/browse/HADOOP-14809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14809.00.patch, 
> HADOOP-14809-HADOOP-13345-002.patch, HADOOP-14809-HADOOP-13345-003.patch, 
> HADOOP-14809.HADOOP-13345.00.patch
>
>
> As discussed on the hadoop-common list, the creation of the s3guard shell 
> profile is stopping the hadoop-aws profile from being created, so you can't set 
> up the classpath properly there






[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148032#comment-16148032
 ] 

Hudson commented on HADOOP-14814:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12277 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12277/])
HADOOP-14814. Fix incompatible API change on FsServerDefaults to (junping_du: 
rev 41480233a9cfb0bcfb69cc0f1594120e7656f031)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsServerDefaults.java


> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed the 
> constructor, replacing it with one that takes more parameters. This is an API 
> incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Updated] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-14814:

   Resolution: Fixed
Fix Version/s: 2.8.2
   3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

I have committed the patch to trunk, branch-2, branch-2.8 and branch-2.8.2. 
Thanks [~andrew.wang] and [~shahrs87] for the review and comments!

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed the 
> constructor, replacing it with one that takes more parameters. This is an API 
> incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147880#comment-16147880
 ] 

Hadoop QA commented on HADOOP-14520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 84 unchanged - 2 fixed = 85 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-azure in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884518/HADOOP_14520_08.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c84b180781f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fd66a24 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13138/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13138/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13138/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13138/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>

[jira] [Commented] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147860#comment-16147860
 ] 

Thomas Marquardt commented on HADOOP-14820:
---

Looks good.  I have the following feedback:

*NativeAzureFileSystem.java*
 *L1714* - When the file already exists and overwrite is true, both 
{{performAuthCheck}} calls (L1700 and L1714) are checking permissions on the 
file.  The {{performAuthCheck}} on L1714 can be removed.

 *L2426* - The method {{getAncestor}} no longer returns the ancestor or parent 
path; instead, it returns the first path segment that exists.  I recommend 
renaming it 
{{getFirstPathSegmentThatExists}}.

 *L2459* - The {{ancestor}} field is actually the first path segment that 
exists, which may be the file itself.  I recommend renaming it 
{{firstExistingPathSegment}}.
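To illustrate the HDFS definition of "ancestor" being discussed (the last existing component of the requested path), here is a minimal, hypothetical sketch of such a helper. The name and the in-memory {{Set}} backing store are illustrative only; the real WASB method consults blob-store metadata, not a set.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class AncestorDemo {
    // Hypothetical sketch: walk up from the requested path and return the
    // first component that actually exists. Mirrors the HDFS "ancestor"
    // definition; exists() is faked with an in-memory set for illustration.
    static String firstExistingPathSegment(String path, Set<String> existing) {
        String current = path;
        while (!current.equals("/") && !existing.contains(current)) {
            int slash = current.lastIndexOf('/');
            current = (slash <= 0) ? "/" : current.substring(0, slash);
        }
        return current;
    }

    public static void main(String[] args) {
        Set<String> existing = new HashSet<>(Arrays.asList("/foo", "/foo/bar"));
        // /foo/bar exists, so it is the ancestor of /foo/bar/baz.
        System.out.println(firstExistingPathSegment("/foo/bar/baz", existing)); // /foo/bar
        // If /foo/bar does not exist, the answer falls back to /foo.
        existing.remove("/foo/bar");
        System.out.println(firstExistingPathSegment("/foo/bar/baz", existing)); // /foo
    }
}
```

For /foo/bar/baz the helper returns /foo/bar when it exists, and /foo otherwise, matching the quoted definition.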



>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14820.001.patch, HADOOP-14820.002.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 root root  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory ‘/home/hdiuser/prefix/1’: Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}
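The intended semantics described above can be sketched as follows. This is a hypothetical illustration, not the WASB patch itself: the filesystem is faked with a map from path to a "writable" flag, the root is assumed present, and intermediate parent creation is elided.

```java
import java.util.HashMap;
import java.util.Map;

public class MkdirsDemo {
    // Hedged sketch of mkdirs -p semantics: permission is checked only on the
    // last existing component ("ancestor"), and only when directories actually
    // have to be created. If the whole hierarchy already exists, no
    // authorization check is performed at all. The real code consults WASB
    // authorization; here a Boolean "writable" flag stands in for it.
    static boolean mkdirs(String path, Map<String, Boolean> fs) {
        if (fs.containsKey(path)) {
            return true; // hierarchy already exists: no-op, no auth check
        }
        String ancestor = path;
        while (!fs.containsKey(ancestor)) { // "/" is assumed to exist
            int slash = ancestor.lastIndexOf('/');
            ancestor = (slash <= 0) ? "/" : ancestor.substring(0, slash);
        }
        if (!fs.getOrDefault(ancestor, false)) {
            return false; // creating under a non-writable ancestor: denied
        }
        fs.put(path, true); // create (intermediate parents elided for brevity)
        return true;
    }

    public static void main(String[] args) {
        Map<String, Boolean> fs = new HashMap<>();
        fs.put("/", true);
        fs.put("/home", true);
        fs.put("/home/hdiuser", true);
        fs.put("/home/hdiuser/prefix", false); // dr-xr-xr-x root:root
        // Existing path succeeds without any permission check:
        System.out.println(mkdirs("/home/hdiuser/prefix", fs)); // true
        // Creating under the read-only ancestor is denied:
        System.out.println(mkdirs("/home/hdiuser/prefix/1", fs)); // false
    }
}
```

This reproduces the shell session above: the first three mkdirs calls succeed because every component already exists, while creating /home/hdiuser/prefix/1 fails against the read-only ancestor.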



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Open  (was: Patch Available)

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155
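The selection step of the compaction described above can be sketched as follows. This is an illustrative reading of the description, not the patch code: "longest" is taken here to mean the contiguous run containing the most blocks whose total size stays under 4M, and the 32000-block trigger and Azure block-list plumbing are omitted.

```java
import java.util.Arrays;

public class CompactionDemo {
    static final long MAX_SEQUENCE_BYTES = 4L * 1024 * 1024; // 4M cap from the description

    // Hedged sketch of the greedy selection only: among all contiguous runs of
    // blocks whose total size stays under 4M, pick the run with the most
    // blocks (a sliding window works because block sizes are positive).
    // Returns {start, end} inclusive indices, or null if no run of 2+ blocks
    // fits, i.e. nothing is worth compacting.
    static int[] longestCompactableRun(long[] blockSizes) {
        int bestStart = -1, bestLen = 0;
        int start = 0;
        long sum = 0;
        for (int end = 0; end < blockSizes.length; end++) {
            sum += blockSizes[end];
            while (sum >= MAX_SEQUENCE_BYTES && start <= end) {
                sum -= blockSizes[start++]; // shrink window until under the cap
            }
            int len = end - start + 1;
            if (len > bestLen) {
                bestLen = len;
                bestStart = start;
            }
        }
        return bestLen >= 2 ? new int[] {bestStart, bestStart + bestLen - 1} : null;
    }

    public static void main(String[] args) {
        long[] sizes = {3_000_000, 500_000, 500_000, 500_000, 3_500_000};
        // Blocks 0..2 total 4,000,000 bytes, under the 4M (4,194,304-byte) cap.
        System.out.println(Arrays.toString(longestCompactableRun(sizes))); // [0, 2]
    }
}
```

Because shorter candidate runs are simply left in place, the blocks not chosen in one round remain available as candidates for the next hflush/hsync, matching the "preserves all potential candidates" behavior described above.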






[jira] [Work started] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14520 started by Georgi Chalakov.

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Patch Available  (was: In Progress)

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147827#comment-16147827
 ] 

Georgi Chalakov commented on HADOOP-14520:
--

HADOOP_14520_08.patch 
whitespace fixes; javadoc fixes.

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-30 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Attachment: HADOOP_14520_08.patch

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch, 
> HADOOP_14520_07.patch, HADOOP_14520_08.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP_14520_07.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 777, Failures: 0, Errors: 0, Skipped: 155






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147803#comment-16147803
 ] 

Hadoop QA commented on HADOOP-14670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 22s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14670 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884507/HADOOP-14670.01.patch 
|
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  cc  mvnsite  
javac  unit  |
| uname | Linux bc58d494d31c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a20e710 |
| Default Java | 1.8.0_144 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13135/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13135/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask
 hadoop-tools/hadoop-pipes . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13135/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop 

[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147769#comment-16147769
 ] 

Junping Du commented on HADOOP-14814:
-

Anyway, thanks for the reminder, Rushabh.

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed a 
> constructor, replacing it with one that takes more parameters. This causes an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147768#comment-16147768
 ] 

Junping Du commented on HADOOP-14814:
-

Well, the target version here does not need to be very precise, as it is only 
used to remind the RM of a specific branch that some blocker/critical issues 
are still open. Given that we will commit the patch soon and the 2.9 release is 
still in feature planning, it should be fine.

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed a 
> constructor, replacing it with one that takes more parameters. This causes an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147758#comment-16147758
 ] 

Rushabh S Shah commented on HADOOP-14814:
-

We should also include 2.9.0 in the target versions, since HADOOP-14104 was 
committed in 2.9.0.

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed a 
> constructor, replacing it with one that takes more parameters. This causes an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147744#comment-16147744
 ] 

Hadoop QA commented on HADOOP-14670:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13137/console in case of 
problems.


> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to compiler support for newer compilers across all 
> platforms.






[jira] [Updated] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-14814:

Hadoop Flags: Reviewed

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed a 
> constructor, replacing it with one that takes more parameters. This causes an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147740#comment-16147740
 ] 

Junping Du commented on HADOOP-14814:
-

Thanks [~andrew.wang] and [~shahrs87] for the review and comments. I will 
commit it shortly if there are no objections.

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From the recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed a 
> constructor, replacing it with one that takes more parameters. This causes an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Updated] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14670:
--
Attachment: HADOOP-14670.03.patch

-03:
* correct patch file 

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to compiler support for newer compilers across all 
> platforms.






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147716#comment-16147716
 ] 

Hadoop QA commented on HADOOP-14670:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13136/console in case of 
problems.


> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to compiler support for newer compilers across all 
> platforms.






[jira] [Updated] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14670:
--
Attachment: HADOOP-14670.02.patch

-02:
* we can remove the double make in the maven plugin now

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to  compiler support for newer compilers across all 
> platforms.






[jira] [Assigned] (HADOOP-14818) Can not show help message of namenode/datanode/nodemanager when process started.

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14818:
---

Assignee: Ajay Kumar

> Can not show help message of namenode/datanode/nodemanager when process 
> started.
> 
>
> Key: HADOOP-14818
> URL: https://issues.apache.org/jira/browse/HADOOP-14818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 3.0.0-beta1
>Reporter: Wenxin He
>Assignee: Ajay Kumar
>Priority: Minor
>
> We should always get the help message whenever the process is started or not.
> But now,
> when datanode starts, we get an error message:
> {noformat}
> hadoop# bin/hdfs datanode -h
> datanode is running as process 1701.  Stop it first.
> {noformat}
> when datanode stops, we get what we want:
> {noformat}
> hadoop# bin/hdfs --daemon stop datanode
> hadoop# bin/hdfs datanode -h
> Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback ]
> -regular : Normal DataNode startup (default).
> ...
> {noformat}
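The fix amounts to handling help flags before the running-process check. A minimal bash sketch of that ordering (the function names and pid-file path here are hypothetical stand-ins, not the real hdfs launcher code):

```shell
#!/usr/bin/env bash
# Sketch only: 'print_usage', 'start_datanode' and the pid-file path are
# hypothetical stand-ins for the real hdfs script logic.
print_usage() {
  echo "Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback ]"
}

start_datanode() {
  pidfile="${TMPDIR:-/tmp}/demo-datanode.pid"
  # Handle help flags FIRST, so '-h' works whether or not the daemon is running.
  case "$1" in
    -h|-help|--help) print_usage; return 0 ;;
  esac
  # Only after the help check do we refuse to start over a live process.
  if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "datanode is running as process $(cat "$pidfile").  Stop it first."
    return 1
  fi
  echo "starting datanode with args: $*"
}

start_datanode -h
```

With this ordering, `-h` prints the usage text even when a pid file points at a live process.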






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147599#comment-16147599
 ] 

Hadoop QA commented on HADOOP-14670:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13135/console in case of 
problems.


> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to  compiler support for newer compilers across all 
> platforms.






[jira] [Updated] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14670:
--
Attachment: HADOOP-14670.01.patch

-01:
* update cmake in the Dockerfile directly from cmake.org rather than waiting for a 
Xenial Dockerfile
* fix up some clang issues, because clang is not gcc
* add the ability to drop to docker's root from within the start-build-env.sh 
environment, to ease debugging packaging issues

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to  compiler support for newer compilers across all 
> platforms.






[jira] [Updated] (HADOOP-14809) hadoop-aws shell profile not being built

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14809:

Status: Open  (was: Patch Available)

> hadoop-aws shell profile not being built
> 
>
> Key: HADOOP-14809
> URL: https://issues.apache.org/jira/browse/HADOOP-14809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14809.00.patch, 
> HADOOP-14809-HADOOP-13345-002.patch, HADOOP-14809.HADOOP-13345.00.patch
>
>
> As discussed on hadoop common list; the creation of the s3guard shell profile 
> is stopping the hadoop-aws profile being created, so you can't set up the CP 
> properly there






[jira] [Commented] (HADOOP-14809) hadoop-aws shell profile not being built

2017-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147580#comment-16147580
 ] 

Steve Loughran commented on HADOOP-14809:
-

The mvn failure was the verification script failing in the "verify" stage, as 
there was no shellprofile dir:
{code}
 [echo] Looking in 
/testptch/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/libexec/shellprofile.d
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.672 s
[INFO] Finished at: 2017-08-29T22:13:57+00:00
[INFO] Final Memory: 26M/323M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (verify shellprofiles) on 
project hadoop-dist: An Ant BuildException has occured: Not Shellprofile 
directory 
/testptch/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/libexec/shellprofile.d
[ERROR] around Ant part ..
 @ 5:133 in /testptch/hadoop/hadoop-dist/target/antrun/build-main.xml
{code}

> hadoop-aws shell profile not being built
> 
>
> Key: HADOOP-14809
> URL: https://issues.apache.org/jira/browse/HADOOP-14809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14809.00.patch, 
> HADOOP-14809-HADOOP-13345-002.patch, HADOOP-14809.HADOOP-13345.00.patch
>
>
> As discussed on hadoop common list; the creation of the s3guard shell profile 
> is stopping the hadoop-aws profile being created, so you can't set up the CP 
> properly there






[jira] [Commented] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147537#comment-16147537
 ] 

Hadoop QA commented on HADOOP-14820:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 80 unchanged - 1 fixed = 82 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884498/HADOOP-14820.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 47dc7afe7f30 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9992675 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13134/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13134/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13134/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: 

[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147522#comment-16147522
 ] 

Hadoop QA commented on HADOOP-14220:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-13345 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 14 
new + 21 unchanged - 0 fixed = 35 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 9 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 43 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-tools_hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.getLongParam(Map, 
String, long)  At 
DynamoDBMetadataStore.java:org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.getLongParam(Map,
 String, long)  At DynamoDBMetadataStore.java:[line 1099] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884497/HADOOP-14220-HADOOP-13345-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b79e53cd08e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 6b18a5d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13133/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 

[jira] [Updated] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14820:
--
Attachment: HADOOP-14820.002.patch

>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14820.001.patch, HADOOP-14820.002.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 rootroot  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}
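The POSIX behaviour being matched can be checked directly: re-running mkdir -p over an existing hierarchy needs no write access anywhere, while creating a new entry under a read-only ancestor is denied. A small self-contained repro of the semantics above (using a temp dir instead of /home/hdiuser):

```shell
d=$(mktemp -d)
mkdir -p "$d/prefix/sub"      # pre-create the hierarchy
chmod 555 "$d/prefix"         # dr-xr-xr-x: no write permission on the ancestor
# Re-creating an EXISTING hierarchy must not require write access anywhere:
mkdir -p "$d/prefix/sub" && existing=ok
# Creating a NEW entry under the read-only ancestor is denied
# (unless run as root, which bypasses permission checks):
mkdir -p "$d/prefix/new" 2>/dev/null || newchild=denied
echo "existing=$existing newchild=${newchild:-created}"
chmod 755 "$d/prefix" && rm -rf "$d"
```

This mirrors the expectation in the description: the permission check should only apply from the ancestor (the last existing path component) downward, so mkdirs -p over an already-existing path makes no authorization checks.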






[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Attachment: HADOOP-14220-HADOOP-13345-005.patch

Patch 005: set-capacity is documented with examples; bucket-info uses "-guarded" 
in docs & src; more examples.

Testing: yes, though note the root dir test failure listed

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?






[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Status: Patch Available  (was: Open)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?






[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Status: Open  (was: Patch Available)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?






[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147471#comment-16147471
 ] 

Steve Loughran commented on HADOOP-14220:
-

FYI, I managed to get some assertion failures which look like the fault injection 
kicking in on listings, on a test run with -Ds3guard -Ddynamodb
{code}
testRecursiveRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 9.933 sec  <<< FAILURE!
java.lang.AssertionError: files mismatch: between 
  "s3a://hwdev-steve-ireland-new/file.txt"
] and 
  "s3a://hwdev-steve-ireland-new/file.txt"
  "s3a://hwdev-steve-ireland-new/fork-8/test/ancestor/file-DELAY_LISTING_ME"
]
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.assertFieldsEquivalent(ContractTestUtils.java:1484)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 13.728 sec  <<< FAILURE!
java.lang.AssertionError: listFiles(/, true).hasNext
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testListEmptyRootDirectory(AbstractContractRootDirectoryTest.java:192)
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:63)

testSimpleRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 0.682 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testSimpleRootListing(AbstractContractRootDirectoryTest.java:207)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

Presumably listing quirks... but it doesn't go away when I turn s3guard off, or on 
later runs. Something isn't cleaning up.

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve 

[jira] [Commented] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147430#comment-16147430
 ] 

Hudson commented on HADOOP-14671:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12273 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12273/])
Revert "HADOOP-14671. Upgrade to Apache Yetus 0.5.0." (aajisaka: rev 
99926756fc036d6949c2602356ca0732b88e2653)
* (edit) dev-support/bin/yetus-wrapper


> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Commented] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-30 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147371#comment-16147371
 ] 

Akira Ajisaka commented on HADOOP-14671:


Reverted this because it broke HADOOP-14817.

> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Commented] (HADOOP-14817) shelldocs fails mvn site

2017-08-30 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147372#comment-16147372
 ] 

Akira Ajisaka commented on HADOOP-14817:


Reverted.

> shelldocs fails mvn site
> 
>
> Key: HADOOP-14817
> URL: https://issues.apache.org/jira/browse/HADOOP-14817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>
> When exec-maven-plugin calls Apache Yetus 0.5.0 shelldocs, it fails:
> {code}
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> /usr/bin/env: python -B: No such file or directory
> [INFO] 
> 
> [INFO] BUILD FAILURE
> {code}






[jira] [Reopened] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-30 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-14671:


> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Created] (HADOOP-14821) Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;

2017-08-30 Thread Ernani Pereira de Mattos Junior (JIRA)
Ernani Pereira de Mattos Junior created HADOOP-14821:


 Summary: Executing the command 'hdfs 
-Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails;
 Key: HADOOP-14821
 URL: https://issues.apache.org/jira/browse/HADOOP-14821
 Project: Hadoop Common
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.3
 Environment: hadoop-common-2.7.3.2.6.0.11-1
Reporter: Ernani Pereira de Mattos Junior
Priority: Critical


=== 
Request Use Case: 
UC1: 
The customer has the path to a directory and subdirectories full of keys. The 
customer knows that he does not have access to all the keys but, ignoring 
this problem, makes a list of the keys. 

UC1.2: 
The customer, in a FIFO manner, tries his access to each key provided on the 
list. If access is granted locally, then he can try the login on s3a. 

UC1.2: 
The customer, in a FIFO manner, tries his access to each key provided on the 
list. If access is not granted locally, then he skips the login on s3a and 
tries the next key on the list. 
===

For now, UC1.2 fails with the exception below and does not try the next key:

$ hdfs  --loglevel DEBUG dfs 
-Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
 -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/

Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=502549376, access=READ, 
inode="/tmp/aws.jceks":admin:hdfs:-rwx--
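
The requested fallback behavior can be sketched as follows. This is a
hypothetical illustration, not the actual Hadoop credential-provider code:
`first_accessible_provider` and `fake_open` are made-up names, and Python's
`PermissionError` stands in for Hadoop's `AccessControlException`.

```python
# Sketch: walk the configured credential providers in FIFO order and skip
# any provider the user cannot read, instead of failing on the first
# access-denied error.

def first_accessible_provider(providers, open_provider):
    """Return credentials from the first provider that can be opened."""
    denied = []
    for path in providers:
        try:
            return open_provider(path)
        except PermissionError:
            denied.append(path)  # access denied locally: try the next key
    raise PermissionError("no accessible provider among %s" % denied)

def fake_open(path):
    # Simulates the repro: the first jceks file is unreadable by the user.
    if path == "jceks://hdfs/tmp/aws.jceks":
        raise PermissionError("Permission denied: access=READ")
    return {"source": path}

creds = first_accessible_provider(
    ["jceks://hdfs/tmp/aws.jceks", "jceks://hdfs/tmp/awst.jceks"], fake_open)
print(creds["source"])  # jceks://hdfs/tmp/awst.jceks
```

With this behavior the access-denied provider is skipped and the second key
is used, instead of aborting the whole `hdfs dfs -ls` command.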






[jira] [Resolved] (HADOOP-14221) Add s3guardtool dump command

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14221.
-
Resolution: Works for Me

> Add s3guardtool dump command
> 
>
> Key: HADOOP-14221
> URL: https://issues.apache.org/jira/browse/HADOOP-14221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> Add a command to dump the database, perhaps matching a pattern, for 
> diagnostics.
> e.g.
> {code}
> s3guard dump s3a://steve4/datasets/queries
> {code}
> issue: large files, throttling, etc. etc.






[jira] [Commented] (HADOOP-14221) Add s3guardtool dump command

2017-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147311#comment-16147311
 ] 

Steve Loughran commented on HADOOP-14221:
-

We sort of get this with the diff command anyway; hard to think of anything 
else obvious to add right now. Closing as WORKSFORME.

> Add s3guardtool dump command
> 
>
> Key: HADOOP-14221
> URL: https://issues.apache.org/jira/browse/HADOOP-14221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> Add a command to dump the database, perhaps matching a pattern, for 
> diagnostics.
> e.g.
> {code}
> s3guard dump s3a://steve4/datasets/queries
> {code}
> issue: large files, throttling, etc. etc.






[jira] [Updated] (HADOOP-14815) s3guard usage calls function incorrectly

2017-08-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14815:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1

doesn't cause a regression in my CLI test, and I assume you know what you are 
doing w.r.t the shell scripts. At least you'd better...

> s3guard usage calls function incorrectly
> 
>
> Key: HADOOP-14815
> URL: https://issues.apache.org/jira/browse/HADOOP-14815
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14815.HADOOP-13345.00.patch
>
>
> The format of the hadoop_add_subcommand  function has changed incompatibly in 
> trunk, resulting in the s3guard usage being a bit wacky.






[jira] [Commented] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104

2017-08-30 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147247#comment-16147247
 ] 

Rushabh S Shah commented on HADOOP-14814:
-

Thanks [~djp] for finding the bug.
Apologies for introducing the bug.
+1 non-binding. The patch lgtm.
Regarding the test failures:
1. {{org.apache.hadoop.fs.sftp.TestSFTPFileSystem#testGetAccessTime}}: This 
failure is being tracked by {{HADOOP-14206}}. Even though a different test is 
failing in HADOOP-14206, the stack trace is the same.

2. 
{{org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem#testTrashRoot}}:
This test passes locally on my box. I think it's a case of tests not cleaning 
up after themselves.

> Fix incompatible API change on FsServerDefaults to HADOOP-14104
> ---
>
> Key: HADOOP-14814
> URL: https://issues.apache.org/jira/browse/HADOOP-14814
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-14814.patch
>
>
> From a recent jdiff report: 
> https://builds.apache.org/job/Hadoop-2.8-JACC/376/artifact/target/compat-check/report.html.
>  We found an incompatible API change: in HADOOP-14104, we removed the 
> constructor, replacing it with one that takes more parameters. This is an 
> API incompatibility, given that FsServerDefaults is marked as public.
> We should fix it before 2.8.2 and 3.0-beta are released.






[jira] [Commented] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147233#comment-16147233
 ] 

Hadoop QA commented on HADOOP-14820:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 80 unchanged - 1 fixed = 82 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884469/HADOOP-14820.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3eb9cac08840 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 200b113 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13132/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13132/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13132/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: 

[jira] [Updated] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14820:
--
Attachment: HADOOP-14820.001.patch

>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14820.001.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 root root 4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}






[jira] [Updated] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14820:
--
Status: Patch Available  (was: Open)

>  Fix for HDFS semantics parity for mkdirs -p
> 
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14820.001.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 root root 4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}






[jira] [Created] (HADOOP-14820) Fix for HDFS semantics parity for mkdirs -p

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)
Sivaguru Sankaridurg created HADOOP-14820:
-

 Summary:  Fix for HDFS semantics parity for mkdirs -p
 Key: HADOOP-14820
 URL: https://issues.apache.org/jira/browse/HADOOP-14820
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Sivaguru Sankaridurg
Assignee: Sivaguru Sankaridurg


No authorization checks should be made when a user tries to create (mkdirs -p) 
an existing folder hierarchy.

For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
following operations, the results should be as shown below.

{noformat}
hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix

hdiuser@hn0-0d2f67:~$ ls -l
dr-xr-xr-x 3 root root 4096 Aug 29 08:25 prefix

hdiuser@hn0-0d2f67:~$ mkdir -p /home
hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
The first three mkdirs succeed, because the ancestor is already present. The 
fourth one fails because of a permission check against the (shorter) ancestor 
(as compared to the path being created).
{noformat}
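
The "ancestor" rule the fix relies on (the last *existing* component of the
requested path, per the HDFS permissions guide) can be illustrated with a
small sketch. This is not the NativeAzureFileSystem implementation; the
function name and the in-memory `existing` set are illustrative only:

```python
import posixpath

def get_ancestor(path, exists):
    """Return the last existing component of the requested path."""
    p = path
    while p != "/" and not exists(p):
        p = posixpath.dirname(p)
    return p

# The pre-created hierarchy from the repro above.
existing = {"/", "/home", "/home/hdiuser", "/home/hdiuser/prefix"}

# mkdir -p of an already-existing directory finds the full path as its own
# ancestor (so no new directories are created and no check is needed), while
# /home/hdiuser/prefix/1 must be authorized against /home/hdiuser/prefix,
# not against some shorter prefix of the path.
print(get_ancestor("/home/hdiuser/prefix", existing.__contains__))
print(get_ancestor("/home/hdiuser/prefix/1", existing.__contains__))
```

Both calls print `/home/hdiuser/prefix`, matching the guide's example where
the ancestor of `/foo/bar/baz` is `/foo/bar` when `/foo/bar` exists.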







[jira] [Commented] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147100#comment-16147100
 ] 

Hadoop QA commented on HADOOP-14802:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-azure in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 19s{color} | 
{color:black} 

[jira] [Created] (HADOOP-14819) Update commons-net to 3.6

2017-08-30 Thread Lukas Waldmann (JIRA)
Lukas Waldmann created HADOOP-14819:
---

 Summary: Update commons-net to 3.6
 Key: HADOOP-14819
 URL: https://issues.apache.org/jira/browse/HADOOP-14819
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Lukas Waldmann


Please update commons-net to 3.6, as the currently used 3.1 is six years old 
and has several issues with SSL connections.






[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-30 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147080#comment-16147080
 ] 

Lukas Waldmann commented on HADOOP-1:
-

super, thanks

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.2.patch, 
> HADOOP-1.3.patch, HADOOP-1.4.patch, HADOOP-1.5.patch, 
> HADOOP-1.6.patch, HADOOP-1.7.patch, HADOOP-1.8.patch, 
> HADOOP-1.9.patch, HADOOP-1.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is shared, simplifying maintenance.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for 
> every single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over non-cached listings.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across the whole directory tree
> * Support for reestablishing broken FTP data transfers - this can happen 
> surprisingly often
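
The connection-pooling feature above can be sketched minimally as follows.
The class and the `connect` factory are assumed names for illustration, not
the patch's actual classes: the point is simply that a connection is borrowed
from a free list and returned for reuse instead of being opened per command.

```python
class ConnectionPool:
    """Minimal sketch of per-command connection reuse."""

    def __init__(self, connect):
        self._connect = connect  # factory for a new FTP/SFTP session
        self._free = []          # idle connections available for reuse

    def acquire(self):
        # Reuse an idle connection when available, open one otherwise.
        return self._free.pop() if self._free else self._connect()

    def release(self, conn):
        # Return the connection to the pool instead of closing it.
        self._free.append(conn)

opened = []
def connect():
    opened.append(object())  # stands in for an expensive FTP login
    return opened[-1]

pool = ConnectionPool(connect)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()  # reuses c1: no second connection is opened
print(c1 is c2, len(opened))  # True 1
```

With one login amortized over many commands, this is where the reported
order-of-magnitude improvement for large file counts comes from.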






[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147064#comment-16147064
 ] 

Steve Loughran commented on HADOOP-14439:
-

Good q. I think there's no technical issue, it's just the security one: do we 
enable this? Or do we make it something you need to ask for?

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark lists the contents of a path with getFileStatus(path), then uses the 
> returned path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.
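
The suspected mismatch can be demonstrated with a small sketch. This is pure
illustration (not the s3a code): `strip_secrets` is a made-up helper that
mimics removing the `key:secret` userinfo from the URI, showing why an
exact-string lookup keyed by the original URI then fails.

```python
from urllib.parse import urlsplit, urlunsplit

def strip_secrets(uri):
    # Rebuild the URI keeping only the host, dropping any user:secret part.
    parts = urlsplit(uri)
    host = parts.hostname or ""
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

original = "s3a://key:secret@bucket/path"
listed = strip_secrets(original)   # "s3a://bucket/path", as returned by listing
index = {listed: "FileStatus"}

print(original in index)               # False: secret-bearing key never matches
print(strip_secrets(original) in index)  # True once both sides are normalized
```

The fix direction implied by the discussion is to normalize both sides before
comparing, rather than mixing stripped and unstripped path strings.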






[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14802:
--
Status: Patch Available  (was: Open)

Attached a branch-2 patch. Submitting it.

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch, 
> HADOOP-14802.003.patch, HADOOP-14802-branch-2-001.patch.txt
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys






[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14802:
--
Attachment: HADOOP-14802-branch-2-001.patch.txt

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch, 
> HADOOP-14802.003.patch, HADOOP-14802-branch-2-001.patch.txt
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys






[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-30 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14802:
--
Status: Open  (was: Patch Available)

Cancelling current patch in order to submit a branch-2 patch

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch, 
> HADOOP-14802.003.patch, HADOOP-14802-branch-2-001.patch.txt
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys






[jira] [Commented] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146916#comment-16146916
 ] 

Steve Loughran commented on HADOOP-14670:
-

The update seems like a reasonable idea, but it looks like Yetus is still on the 
older version:

{code}
[WARNING] CMake Error at CMakeLists.txt:23 (cmake_minimum_required):
[WARNING]   CMake 3.1 or higher is required.  You are running version 2.8.12.2
{code}
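The gate that produces this error is just a numeric comparison of dotted version strings against the declared floor. A minimal sketch of that comparison (the function name is illustrative, not from any patch here):

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, e.g. '2.8.12.2' vs '3.1'."""
    to_parts = lambda v: [int(p) for p in v.split(".")]
    # Python compares lists element-wise, which matches version ordering
    # for purely numeric components.
    return to_parts(installed) >= to_parts(required)

# The version Yetus reports, against the proposed 3.1 floor:
print(meets_minimum("2.8.12.2", "3.1"))  # False -> build fails
print(meets_minimum("3.1.0", "3.1"))     # True
```

So until the Yetus build images move past 2.8.12.2, any `cmake_minimum_required(VERSION 3.1)` will fail there exactly as shown above.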

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14670.00.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher to pick up support for newer compilers across all 
> platforms.






[jira] [Resolved] (HADOOP-14177) hadoop-build fails to handle non-english UTF-8 characters

2017-08-30 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He resolved HADOOP-14177.

Resolution: Workaround

> hadoop-build fails to handle non-english UTF-8 characters 
> --
>
> Key: HADOOP-14177
> URL: https://issues.apache.org/jira/browse/HADOOP-14177
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wenxin He
>Priority: Minor
> Attachments: HADOOP-14177.001.patch
>
>
> As this [link | 
> http://askubuntu.com/questions/581458/how-to-configure-locales-to-unicode-in-a-docker-ubuntu-14-04-container]
>  shows, ubuntu:trusty, which the hadoop-build image is built from, does not 
> specify any encoding (e.g. UTF-8), so non-English characters display 
> incorrectly and may fail the build.
> Current (garbled) listing:
> hewenxin@24a74cc3053b:~/hadoop$ ll zdh-hdfs-autotests/cases/
> total 284
> drwxr-xr-x 2 hewenxin users 4096 Mar 13 05:51 ./
> drwxr-xr-x 7 hewenxin users 4096 Mar 11 08:28 ../
> -rw-r--r-- 1 hewenxin users 3749 Mar  3 09:50 Y01.???.robot
> -rw-r--r-- 1 hewenxin users 2060 Feb 27 05:20 Z00..robot
> -rw-r--r-- 1 hewenxin users 1102 Feb 27 05:20 Z00..robot.define
> -rw-r--r-- 1 hewenxin users 1308 Feb 27 08:46 Z01.??.robot
> -rw-r--r-- 1 hewenxin users 1365 Feb 27 05:20 Z02.HttpFS.robot
> -rw-r--r-- 1 hewenxin users 1353 Feb 27 05:20 
> Z03..robot
> -rw-r--r-- 1 hewenxin users 2928 Feb 27 08:46 Z04.???.robot
> -rw-r--r-- 1 hewenxin users 1534 Mar  1 05:48 Z05..robot
> -rw-r--r-- 1 hewenxin users 1260 Feb 27 05:20 Z06.??.robot
> -rw-r--r-- 1 hewenxin users 1322 Feb 27 05:20 Z06.??.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 
> Z07.balance?.robot
> -rw-r--r-- 1 hewenxin users  915 Feb 27 05:20 
> Z08.hdfs???.robot
> -rw-r--r-- 1 hewenxin users  920 Mar  1 05:48 
> Z09.?JMX.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z10.hdfs??.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z11.hdfs???.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z12.???.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z13.datanode.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z14.journal.robot
> -rw-r--r-- 1 hewenxin users 1246 Mar  1 05:48 
> Z15.hdfs???.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 
> Z16.?.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 
> Z17.JMX.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 
> Z18.??krb.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z99.hdfs???.robot
> After the fix:
> hewenxin@836ade384c62:~/hadoop$ ll zdh-hdfs-autotests/cases/
> total 284
> drwxr-xr-x 2 hewenxin users 4096 Mar 13 05:51 ./
> drwxr-xr-x 7 hewenxin users 4096 Mar 11 08:28 ../
> -rw-r--r-- 1 hewenxin users 3749 Mar  3 09:50 Y01.磁盘热插拔.robot
> -rw-r--r-- 1 hewenxin users 2060 Feb 27 05:20 Z00.心跳上报.robot
> -rw-r--r-- 1 hewenxin users 1102 Feb 27 05:20 Z00.心跳上报.robot.define
> -rw-r--r-- 1 hewenxin users 1308 Feb 27 08:46 Z01.基本功能冒烟.robot
> -rw-r--r-- 1 hewenxin users 1365 Feb 27 05:20 Z02.HttpFS功能测试.robot
> -rw-r--r-- 1 hewenxin users 1353 Feb 27 05:20 Z03.日志级别动态调整.robot
> -rw-r--r-- 1 hewenxin users 2928 Feb 27 08:46 Z04.多线程拷贝.robot
> -rw-r--r-- 1 hewenxin users 1534 Mar  1 05:48 Z05.指标上报.robot
> -rw-r--r-- 1 hewenxin users 1260 Feb 27 05:20 Z06.容灾功能测试.robot
> -rw-r--r-- 1 hewenxin users 1322 Feb 27 05:20 Z06.快照功能测试.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z07.balance速率可调节优化.robot
> -rw-r--r-- 1 hewenxin users  915 Feb 27 05:20 Z08.hdfs智能配置最大坏盘数.robot
> -rw-r--r-- 1 hewenxin users  920 Mar  1 05:48 Z09.元数据JMX上报测试.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z10.hdfs的白名单功能.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z11.hdfs故障域功能.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z12.磁盘热插拔.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z13.datanode的缩扩容.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z14.journal的缩扩容.robot
> -rw-r--r-- 1 hewenxin users 1246 Mar  1 05:48 Z15.hdfs进程内存使用百分比.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 Z16.审计日志支持自定义列表.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 Z17.JMX上报版本构建时间.robot
> -rw-r--r-- 1 hewenxin users  169 Feb 27 05:20 Z18.支持krb下控制台浏览文件.robot
> -rw-r--r-- 1 hewenxin users  198 Feb 27 05:20 Z99.hdfs客户端部署.robot
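The garbling in the first listing is exactly what an ASCII-only locale does to multi-byte UTF-8 file names: every byte outside ASCII is undecodable and gets rendered as a placeholder. A small sketch of the effect, using one of the Robot Framework case-file names from the fixed listing above:

```python
# Under a C/POSIX (ASCII) locale, each byte of a multi-byte UTF-8 file name
# is undecodable, so listing tools render it as a '?' placeholder.
name = "Z01.基本功能冒烟.robot"            # from the fixed listing above
raw = name.encode("utf-8")                 # the bytes as they sit on disk
mangled = raw.decode("ascii", errors="replace").replace("\ufffd", "?")
print(mangled)  # one '?' per UTF-8 byte, matching the ???-style names above
```

Generating and exporting a UTF-8 locale in the image (as the linked answer describes) makes the decode step round-trip cleanly instead.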




[jira] [Created] (HADOOP-14818) Can not show help message of namenode/datanode/nodemanager when process started.

2017-08-30 Thread Wenxin He (JIRA)
Wenxin He created HADOOP-14818:
--

 Summary: Can not show help message of 
namenode/datanode/nodemanager when process started.
 Key: HADOOP-14818
 URL: https://issues.apache.org/jira/browse/HADOOP-14818
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 3.0.0-beta1
Reporter: Wenxin He
Priority: Minor


We should always be able to get the help message, whether the process is running or not.

But now,
when the datanode is running, we get an error message:
{noformat}
hadoop# bin/hdfs datanode -h
datanode is running as process 1701.  Stop it first.
{noformat}

when the datanode is stopped, we get what we want:
{noformat}
hadoop# bin/hdfs --daemon stop datanode
hadoop# bin/hdfs datanode -h
Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback ]
-regular : Normal DataNode startup (default).
...
{noformat}
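The fix amounts to an ordering change in the launcher: handle help flags before the already-running (pidfile) check. A hedged sketch of that ordering (illustrative only, not the actual hadoop shell scripts):

```python
def launch(argv):
    """Sketch of launcher ordering: help flags win over the pidfile check."""
    if "-h" in argv or "--help" in argv:
        # Handled FIRST, so 'hdfs datanode -h' works while the daemon is up.
        print("Usage: hdfs datanode [-regular | -rollback | "
              "-rollingupgrade rollback]")
        return "help"
    # Only now would the 'running as process N' pidfile check and the
    # normal startup path run.
    return "start"

launch(["-h"])
```

With the current scripts the order is reversed, which is why `-h` hits the "Stop it first" error.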






[jira] [Commented] (HADOOP-14817) shelldocs fails mvn site

2017-08-30 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146768#comment-16146768
 ] 

Akira Ajisaka commented on HADOOP-14817:


Thanks Allen for the report. Now I'm +1 for reverting HADOOP-14671.

> shelldocs fails mvn site
> 
>
> Key: HADOOP-14817
> URL: https://issues.apache.org/jira/browse/HADOOP-14817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>
> When exec-maven-plugin calls Apache Yetus 0.5.0 shelldocs, it fails:
> {code}
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> /usr/bin/env: python -B: No such file or directory
> [INFO] 
> 
> [INFO] BUILD FAILURE
> {code}
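The `/usr/bin/env: python -B` error is the classic Linux shebang limitation: everything after the interpreter path in `#!/usr/bin/env python -B` is passed to env as a single argument, so env does a PATH lookup for a program literally named `python -B`. A quick demonstration of that lookup failing:

```python
import shutil

# Linux hands 'python -B' to env as ONE argument, so env searches PATH for
# an executable whose name literally contains a space -- which never exists.
print(shutil.which("python -B"))   # None -> env's "No such file or directory"
print(shutil.which("python") is not None or
      shutil.which("python3") is not None)  # the plain names can resolve
```

That is why the shelldocs shebang fails under exec-maven-plugin even on hosts where python itself is installed.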


