[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860879#comment-15860879
 ] 

Genmao Yu commented on HADOOP-14069:


IMHO, it is better to check the real modified time rather than just {{>0}}, but 
LGTM overall. 
cc [~drankye]
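
For illustration, a check along these lines (a minimal sketch; the helper name 
and the slack value are hypothetical, not from the patch) would pin the 
timestamp to the creation time instead of merely asserting it is positive:

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import static org.junit.Assert.assertTrue;

// Hypothetical check; `fs` would be an AliyunOSSFileSystem from test setup.
void assertRecentModTime(FileSystem fs, Path dir) throws java.io.IOException {
  long before = System.currentTimeMillis();
  fs.mkdirs(dir);
  FileStatus status = fs.getFileStatus(dir);
  // Stronger than just `> 0`: the reported time should be near the creation
  // time, with some slack for clock skew between client and OSS.
  assertTrue("modification time should be recent, got "
      + status.getModificationTime(),
      status.getModificationTime() >= before - 60_000L);
}
{code}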

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860876#comment-15860876
 ] 

Fei Hui commented on HADOOP-14069:
--

Here is the result after applying the patch. 
{quote}
$bin/hadoop fs -ls oss://oss-for-hadoop-sh/
Found 1 items
drwxrwxrwx   -  0 2017-02-10 10:47 oss://oss-for-hadoop-sh/test00
{quote}

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14069:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13377

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Commented] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860873#comment-15860873
 ] 

Hudson commented on HADOOP-13768:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11230 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11230/])
HADOOP-13768. AliyunOSS: handle the failure in the batch delete (kai.zheng: rev 
5b151290ae2916dc04d6a4338085fcefafa21982)
* (edit) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java


> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch, 
> HADOOP-13768.003.patch, HADOOP-13768.004.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so that it succeeds 
> when there are more objects to delete than the limit.
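
A minimal sketch of the batching this implies, assuming an Aliyun {{OSSClient}} 
(the request type and its 1000-key cap are from the OSS SDK; the chunking 
helper itself is illustrative, not the committed patch):

{code}
import com.aliyun.oss.OSSClient;
import com.aliyun.oss.model.DeleteObjectsRequest;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: delete keys in chunks so that no single request
// exceeds the SDK's 1000-object limit on DeleteObjectsRequest.
static final int MAX_KEYS_PER_DELETE = 1000;

static void deleteKeysInBatches(OSSClient client, String bucket,
    List<String> keys) {
  for (int start = 0; start < keys.size(); start += MAX_KEYS_PER_DELETE) {
    int end = Math.min(start + MAX_KEYS_PER_DELETE, keys.size());
    DeleteObjectsRequest request = new DeleteObjectsRequest(bucket);
    request.setKeys(new ArrayList<>(keys.subList(start, end)));
    client.deleteObjects(request);
  }
}
{code}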






[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860871#comment-15860871
 ] 

Genmao Yu commented on HADOOP-14069:


LGTM, and could you please paste the command output after this patch?

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14069:
---
Description: 
When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
the listing info is wrong

{quote}
$bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
Found 1 items
drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
{quote}

The modified time is wrong; it should not be 1970-01-01 08:00

  was:
When i use command 'hadoop fs -lsoss://oss-for-hadoop-sh/', i find that list 
info is wrong

{quote}
$bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
Found 1 items
drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
{quote}

the modifiedtime is wrong, it should not be 1970-01-01 08:00


> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Updated] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-02-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13768:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~uncleGen] for the contribution and 
[~ste...@apache.org] for the review.

> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch, 
> HADOOP-13768.003.patch, HADOOP-13768.004.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so that it succeeds 
> when there are more objects to delete than the limit.






[jira] [Comment Edited] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-09 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860844#comment-15860844
 ] 

Yuanbo Liu edited comment on HADOOP-13119 at 2/10/17 6:54 AM:
--

Reopening it. For some links (such as "/jmx" and "/stack"), blocking access in 
the filter chain because of an impersonation failure is not friendly to users. 
For example, suppose user "sam" is not allowed to be impersonated by user 
"knox": the link "/jmx" does not require any authorization by default and only 
needs user "knox" to authenticate, so in this case it is not right to block the 
access in the SPNEGO filter. We intend to verify impersonation only when the 
request's "getRemoteUser" method is used, so that such links are not blocked by 
mistake. I will attach a new patch ASAP.


was (Author: yuanbo):
Reopen it. Because because for some links(such as "/jmx, /stack"), blocking the 
links in filter chain because of impersonation issue is not friendly for users. 
For example, user "sam" is not allowed to be impersonated by user "knox", the 
link "/jmx" doesn't need any user to do authorization by default, and it only 
needs user "knox" to do authentication, in this case, it's not right to  block 
the access in SPNEGO filter. We intend to verify the impersonation when the 
method "getRemoteUser" of request is used, so that such kind of links would not 
be blocked by mistake. I will attach a new patch ASAP.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Use Hadoop in secure mode.
> Log in as a KDC user and kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web 
> interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by 
> default users don't have access to secure paths.






[jira] [Reopened] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reopened HADOOP-13119:
-

Reopening it, because for some links (such as "/jmx" and "/stack"), blocking 
the links in the filter chain because of an impersonation failure is not 
friendly to users. For example, suppose user "sam" is not allowed to be 
impersonated by user "knox": the link "/jmx" does not require any authorization 
by default and only needs user "knox" to authenticate, so it is not right to 
block the access in the SPNEGO filter. We intend to verify impersonation when 
the request's "getRemoteUser" method is used, so that such links are not 
blocked by mistake. I will attach a new patch ASAP.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Use Hadoop in secure mode.
> Log in as a KDC user and kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to reach the logs link from the web 
> interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by 
> default users don't have access to secure paths.






[jira] [Commented] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-02-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860841#comment-15860841
 ] 

Kai Zheng commented on HADOOP-13768:


The latest patch LGTM and +1. 

> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch, 
> HADOOP-13768.003.patch, HADOOP-13768.004.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so that it succeeds 
> when there are more objects to delete than the limit.






[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860769#comment-15860769
 ] 

Fei Hui commented on HADOOP-14069:
--

OSS Tests
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.956 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.326 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.733 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 10, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 5.59 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.544 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.322 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.95 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.574 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.574 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.982 sec - 
in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.219 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.725 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.548 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.677 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials

Results :

Tests run: 140, Failures: 0, Errors: 0, Skipped: 2

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 03:04 min
[INFO] Finished at: 2017-02-10T12:43:57+08:00
[INFO] Final Memory: 27M/66M
[INFO] 


> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860745#comment-15860745
 ] 

Hadoop QA commented on HADOOP-14069:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851988/HADOOP-14069.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2112615ab9a3 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08f9397 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11605/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11605/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -

[jira] [Commented] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-09 Thread Seth Fitzsimmons (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860725#comment-15860725
 ] 

Seth Fitzsimmons commented on HADOOP-14028:
---

I'm running into HADOOP-14071 when using {{HADOOP-14028-branch-2.8-003.patch}}.

It manages to complete intermittently and usually takes a few hours to fail.

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-branch-2-001.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.
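
To make the lifecycle question concrete, the delete-on-close pattern Steve 
describes would look roughly like this (hypothetical names; the real logic 
lives in the S3A block output stream code):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical sketch: an upload stream whose close() removes its backing
// temp file, so a block's disk space is reclaimed as soon as the HTTP
// client finishes reading that part and closes the stream. If close() only
// happens after the whole multipart upload completes, blocks pile up in
// /tmp exactly as reported above.
class SelfDeletingFileInputStream extends FileInputStream {
  private final File backingFile;

  SelfDeletingFileInputStream(File file) throws IOException {
    super(file);
    this.backingFile = file;
  }

  @Override
  public void close() throws IOException {
    try {
      super.close();
    } finally {
      backingFile.delete();
    }
  }
}
{code}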






[jira] [Created] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-09 Thread Seth Fitzsimmons (JIRA)
Seth Fitzsimmons created HADOOP-14071:
-

 Summary: S3a: Failed to reset the request input stream
 Key: HADOOP-14071
 URL: https://issues.apache.org/jira/browse/HADOOP-14071
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Seth Fitzsimmons


When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
reset the request input stream}} exceptions. They're more likely to occur the 
larger the file that's being written (70GB in the extreme case, but it needs to 
be one file).
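
For reference, the knob the exception message points at is set per request; a 
minimal sketch, assuming the 100 MB part size visible in the trace below 
(whether and where S3A should set it is exactly what is in question here):

{code}
import com.amazonaws.services.s3.model.UploadPartRequest;

// Sketch only: the AWS SDK buffers up to readLimit bytes so that a retried
// part upload can reset (re-read) its input stream. The SDK documentation
// suggests a limit one byte larger than the stream being sent.
UploadPartRequest part = new UploadPartRequest();
int partSize = 104857600; // the 100 MB block limit shown in the log below
part.getRequestClientOptions().setReadLimit(partSize + 1);
{code}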

{code}
2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
FileBlock{index=416, 
destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
state=Upload, dataSize=11591473, limit=104857600}
2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
blockUploadsCompleted=416, blockUploadsFailed=3, bytesPendingUpload=209747761, 
bytesUploaded=43317747712, blocksAllocated=416, blocksReleased=416, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, 
transferDuration=1389936 ms, queueDuration=519 ms, averageQueueTime=1 ms, 
totalUploadDuration=1390455 ms, effectiveBandwidth=3.1153649497466657E7 bytes/s}
at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
Multi-part upload with id 
'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
 to 2017/planet-170206.orc on 2017/planet-170206.orc: 
com.amazonaws.ResetException: Failed to reset the request input stream; If the 
request involves an input stream, the maximum stream buffer size can be 
configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
reset the request input stream; If the request involves an input stream, the 
maximum stream buffer size can be configured via 
request.getRequestClientOptions().setReadLimit(int)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
Caused by: com.amazonaws.ResetException: Failed to reset the request input 
stream; If the request involves an input stream, the maximum stream buffer size 
can be configured via request.getRequestClientOptions().setReadLimit(int)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
at 
com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1114)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:501)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:492)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
at 

[jira] [Created] (HADOOP-14070) S3a: Failed to reset the request input stream

2017-02-09 Thread Seth Fitzsimmons (JIRA)
Seth Fitzsimmons created HADOOP-14070:
-

 Summary: S3a: Failed to reset the request input stream
 Key: HADOOP-14070
 URL: https://issues.apache.org/jira/browse/HADOOP-14070
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Seth Fitzsimmons


{code}
Feb 07, 2017 8:05:46 AM 
com.google.common.util.concurrent.Futures$CombinedFuture setExceptionAndMaybeLog
SEVERE: input future failed.
com.amazonaws.ResetException: Failed to reset the request input stream; If the 
request involves an input stream, the maximum stream buffer size can be 
configured via request.getRequestClientOptions().setReadLimit(int)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
at 
com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1114)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:501)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:492)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1219)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Resetting to invalid mark
at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
at 
com.amazonaws.internal.SdkBufferedInputStream.reset(SdkBufferedInputStream.java:106)
at 
com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
at com.amazonaws.event.ProgressInputStream.reset(ProgressInputStream.java:169)
at 
com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
... 20 more
2017-02-07 08:05:46 WARN S3AInstrumentation:777 - Closing output stream 
statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=519, blocksInQueue=0, blocksActive=1, 
blockUploadsCompleted=518, blockUploadsFailed=2, bytesPendingUpload=82528300, 
bytesUploaded=54316236800, blocksAllocated=519, blocksReleased=519, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, 
transferDuration=2637812 ms, queueDuration=839 ms, averageQueueTime=1 ms, 
totalUploadDuration=2638651 ms, effectiveBandwidth=2.05848506680118E7 bytes/s}
Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
Multi-part upload with id 
'uDonLgtsyeToSmhyZuNb7YrubCDiyXCCQy4mdVc5ZmYWPPHyZ3H3ZlFZzKktaPUiYb7uT4.oM.lcyoazHF7W8pK4xWmXV4RWmIYGYYhN6m25nWRrBEE9DcJHcgIhFD8xd7EKIjijEd1k4S5JY1HQvA--'
 to 2017/history-170130.orc on 2017/history-170130.orc: 
com.amazonaws.ResetException: Failed to reset the request input stream; If the 
request involves an input stream, the maximum stream buffer size can be 
configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
reset the request input stream; If the request involves an input stream, the 
maximum stream buffer size can be configured via 
request.getRequestClientOptions().setReadLimit(int)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
at 

[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860717#comment-15860717
 ] 

Fei Hui commented on HADOOP-14069:
--

CC [~uncleGen], could you please give any suggestions? Thanks.

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14069:
-
Status: Patch Available  (was: Open)

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14069:
-
Attachment: HADOOP-14069.001.patch

Patch uploaded.

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Assigned] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui reassigned HADOOP-14069:


Assignee: Fei Hui

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
>
> When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00






[jira] [Created] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14069:


 Summary: AliyunOSS: listStatus returns wrong file info
 Key: HADOOP-14069
 URL: https://issues.apache.org/jira/browse/HADOOP-14069
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: Fei Hui


When I use the command 'hadoop fs -lsoss://oss-for-hadoop-sh/', I find that 
the listing info is wrong

{quote}
$bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
Found 1 items
drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
{quote}

The modified time is wrong; it should not be 1970-01-01 08:00
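
A minimal sketch of the direction a fix could take, assuming the listing code 
keeps the OSS object summary at hand (the method below is hypothetical; the 
real change would be in the hadoop-aliyun listStatus path):

{code}
import com.aliyun.oss.model.OSSObjectSummary;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Illustrative only: propagate the object's real last-modified time instead
// of defaulting to 0, which renders as 1970-01-01 08:00 in UTC+8.
static FileStatus toFileStatus(OSSObjectSummary summary, Path qualifiedPath,
    long blockSize) {
  long modTime = summary.getLastModified() == null
      ? 0L : summary.getLastModified().getTime();
  return new FileStatus(summary.getSize(), /* isdir = */ false,
      /* replication = */ 1, blockSize, modTime, qualifiedPath);
}
{code}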






[jira] [Commented] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-02-09 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860613#comment-15860613
 ] 

Genmao Yu commented on HADOOP-13768:


cc [~drankye]

> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch, 
> HADOOP-13768.003.patch, HADOOP-13768.004.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so that it succeeds 
> when there are more objects to delete than the limit.






[jira] [Commented] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860576#comment-15860576
 ] 

Hadoop QA commented on HADOOP-14063:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14063 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851966/HADOOP-14063-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de75441e9920 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08f9397 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11604/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11604/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11604/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11604/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Comment Edited] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860491#comment-15860491
 ] 

ramtin edited comment on HADOOP-14063 at 2/10/17 1:42 AM:
--

[~lmccay] Thank you for your comment.
I agree with you. I provided a new patch that logs path and permission issues 
separately.


was (Author: ramtinb):
[~lmccay] Thank you for your comment.
I agree with you. Provided a new patch to separately log path and permission 
issue.

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch, HADOOP-14063-002.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.
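
A minimal sketch of the desired fallback behaviour, using the public alias API 
(error handling is simplified relative to whatever the patch actually does):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

// Sketch only: keep trying providers in the configured list instead of
// failing on the first keystore the caller cannot load or read.
static char[] lookupCredential(Configuration conf, String alias)
    throws IOException {
  List<CredentialProvider> providers =
      CredentialProviderFactory.getProviders(conf);
  IOException lastFailure = null;
  for (CredentialProvider provider : providers) {
    try {
      CredentialProvider.CredentialEntry entry =
          provider.getCredentialEntry(alias);
      if (entry != null) {
        return entry.getCredential();
      }
    } catch (IOException e) {
      lastFailure = e; // e.g. no permission on this keystore; try the next
    }
  }
  if (lastFailure != null) {
    throw lastFailure;
  }
  return null; // alias not found in any accessible provider
}
{code}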






[jira] [Updated] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-14063:

Attachment: HADOOP-14063-002.patch

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch, HADOOP-14063-002.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.






[jira] [Commented] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860491#comment-15860491
 ] 

ramtin commented on HADOOP-14063:
-

[~lmccay] Thank you for your comment.
I agree with you. I provided a new patch that logs path and permission issues 
separately.

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.






[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860444#comment-15860444
 ] 

Hudson commented on HADOOP-14033:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11229/])
HADOOP-14033. Reduce fair call queue lock contention. Contributed by (kihwal: 
rev 0c01cf57987bcc7a17154a3538960b67f625a9e5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java


> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress.   Often as the load gets 
> worse, the server looks better.
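
For readers unfamiliar with the fairness terminology, a two-line Java 
illustration (not the patch itself):

{code}
import java.util.concurrent.locks.ReentrantLock;

// With an unfair lock (the default), an arriving thread may "barge" ahead
// of threads already parked waiting; a fair lock hands the lock over in
// FIFO order instead. The description above is the unfair case: barging
// consumers keep winning the shared lock while the outnumbered producers
// stay parked.
ReentrantLock unfairLock = new ReentrantLock();   // same as new ReentrantLock(false)
ReentrantLock fairLock = new ReentrantLock(true); // FIFO handoff, more parking
{code}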






[jira] [Commented] (HADOOP-14048) The REDO operation of AtomicRename of folder doesn't create a placeholder blob for destination folder

2017-02-09 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860434#comment-15860434
 ] 

Gaurav Kanade commented on HADOOP-14048:


+1 for the patch

> The REDO operation of AtomicRename of folder doesn't create a placeholder 
> blob for destination folder
> -
>
> Key: HADOOP-14048
> URL: https://issues.apache.org/jira/browse/HADOOP-14048
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
> Attachments: HADOOP-14048.patch
>
>
> While doing manual testing, I realized that the crash recovery of 
> AtomicRename operation of a folder in AzureNativeFileSystem doesn't create a 
> placeholder property blob for the destination folder. Due to this bug, the 
> destination folder cannot be renamed again.
> Below is how I tested this:
> 1. Create a test directory as "/test/A"
> 2. Create 15 block blobs in "/test/A" folder.
> 3. Run "hadoop fs -mv /test/A /test/B" command and crash it as soon as 
> /test/A-RenamePending.json file is created.
> 4. Now run "hadoop fs -lsr /test" command, which should complete the pending 
> rename operation (redo) as a part of crash recovery. 
> 5. The REDO method copies the pending files from the source folder to the 
> destination folder (by consulting the A-RenamePending.json file), but it 
> doesn't create a 0-byte property blob for the /test/B folder, which is a bug, 
> as that folder will not be usable for many operations. 
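Schematically, the missing step in the redo path looks like this (all names 
below are hypothetical stand-ins, not the real NativeAzureFileSystem 
internals):

{code:java}
import java.io.IOException;
import java.util.List;

// Sketch of the redo path with the missing step added.
abstract class RenameRedoSketch {
  abstract List<String> pendingFiles();          // from the -RenamePending.json file
  abstract void copyBlob(String src, String dst) throws IOException;
  abstract void createFolderPlaceholderBlob(String key) throws IOException;

  void redo(String srcKey, String dstKey) throws IOException {
    for (String file : pendingFiles()) {
      copyBlob(srcKey + "/" + file, dstKey + "/" + file); // existing behavior
    }
    // The missing step: materialize the destination folder itself. Azure blob
    // storage has no real directories, so without a 0-byte placeholder blob
    // the renamed folder (/test/B) stays unusable for many operations.
    createFolderPlaceholderBlob(dstKey);
  }
}
{code}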



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14033:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.8. Thanks for the improvement, 
Daryn.

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860309#comment-15860309
 ] 

Kihwal Lee commented on HADOOP-14033:
-

The test failure is being addressed in HADOOP-14030.

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860304#comment-15860304
 ] 

Kihwal Lee commented on HADOOP-14033:
-

+1 for the current patch. Please file a new jira for adding metrics if you have 
ideas.

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860202#comment-15860202
 ] 

Daryn Sharp commented on HADOOP-14033:
--

Sounds reasonable, but would prefer a separate jira since metrics don't exist 
today.  It'll be important to ensure that metrics don't re-introduce similar 
synchronization overhead this patch intends to remove.

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860167#comment-15860167
 ] 

Hadoop QA commented on HADOOP-13075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 50s{color} | {color:orange} root: The patch generated 7 new + 5 unchanged - 
1 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13075 |
| GITHUB PR | https://github.com/apache/hadoop/pull/183 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux b0c493a00425 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5fb723b |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11603/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11603/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-09 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated HADOOP-14062:
-
Attachment: (was: YARN-6013-branch-2.8.0.003.patch)

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062-branch-2.8.0.004.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> 

[jira] [Updated] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-09 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated HADOOP-14062:
-
Attachment: (was: YARN-6013-branch-2.8.0.002.patch)

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062-branch-2.8.0.004.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> 

[jira] [Updated] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-09 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated HADOOP-14062:
-
Attachment: HADOOP-14062.002.patch
HADOOP-14062-branch-2.8.0.004.patch

Attaching updated patches for branch-2.8.0 and trunk.

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062-branch-2.8.0.004.patch, YARN-6013-branch-2.8.0.002.patch, 
> YARN-6013-branch-2.8.0.003.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)

[jira] [Updated] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-09 Thread Steve Moist (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Moist updated HADOOP-13075:
-
Attachment: HADOOP-13075-branch2.002.patch
HADOOP-13075-003.patch

Attaching branch-2 and trunk patches, updated to ignore the KMS test if no 
key is defined.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of these, the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in the aws-java-sdk already available, it should be 
> fairly straightforward [6],[7] to support the other two types of SSE with 
> some additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
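For reference, a minimal sketch of how the aws-java-sdk exposes the two 
missing modes on a put request (illustrative only, not the attached patches; 
the bucket, object key, KMS key id and SSE-C key below are placeholders):

{code:java}
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
import com.amazonaws.services.s3.model.SSECustomerKey;

public class SseSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    File data = new File("data.bin");

    // SSE-KMS: ask S3 to encrypt server-side with an AWS KMS-managed key.
    s3.putObject(new PutObjectRequest("my-bucket", "kms-object", data)
        .withSSEAwsKeyManagementParams(
            new SSEAwsKeyManagementParams("my-kms-key-id")));

    // SSE-C: supply a customer-provided AES-256 key (base64) on the request;
    // the same key must accompany every later GET of the object.
    s3.putObject(new PutObjectRequest("my-bucket", "ssec-object", data)
        .withSSECustomerKey(new SSECustomerKey("base64-encoded-aes-256-key")));
  }
}
{code}

The fs.s3a work is then largely plumbing such parameters through from 
configuration properties.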



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859967#comment-15859967
 ] 

Kihwal Lee edited comment on HADOOP-14033 at 2/9/17 6:39 PM:
-

Is it possible to add meaningful metrics to show how the reader threads 
(producers) are doing?


was (Author: kihwal):
Is it possible to add meaningful metrics to show how readers are doing?

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859967#comment-15859967
 ] 

Kihwal Lee commented on HADOOP-14033:
-

Is it possible to add meaningful metrics to show how readers are doing?

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859929#comment-15859929
 ] 

Hadoop QA commented on HADOOP-14033:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850016/HADOOP-14033.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c25d758d2f6b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bb99c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11602/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11602/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11602/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: 

[jira] [Commented] (HADOOP-14055) SwiftRestClient includes pass length in exception if auth fails

2017-02-09 Thread Marcell Hegedus (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859920#comment-15859920
 ] 

Marcell Hegedus commented on HADOOP-14055:
--

[~arpitagarwal], could you commit the patch, please?

> SwiftRestClient includes pass length in exception if auth fails 
> 
>
> Key: HADOOP-14055
> URL: https://issues.apache.org/jira/browse/HADOOP-14055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Marcell Hegedus
>Assignee: Marcell Hegedus
>Priority: Minor
> Attachments: HADOOP-14055.01.patch, HADOOP-14055.02.patch
>
>
> SwiftRestClient.exec(M method) throws SwiftAuthenticationFailedException if 
> auth fails, and its message will contain the password length, which may 
> leak into logs.
> Fix is trivial.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14034) Allow ipc layer exceptions to selectively close connections

2017-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859861#comment-15859861
 ] 

Hudson commented on HADOOP-14034:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11226 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11226/])
HADOOP-14034. Allow ipc layer exceptions to selectively close (kihwal: rev 
b6bb99c18a772d2179d5cc6757cddf141e8d39c0)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> Allow ipc layer exceptions to selectively close connections
> ---
>
> Key: HADOOP-14034
> URL: https://issues.apache.org/jira/browse/HADOOP-14034
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14034-branch-2.patch, HADOOP-14034-trunk.patch
>
>
> IPC layer exceptions generated in the readers are translated into fatal 
> errors - resulting in connection closure.  Ex. RetriableExceptions from call 
> queue pushback.
> Always closing the connection degrades performance for all clients since a 
> disconnected client will immediately reconnect on retry.  Readers become 
> overwhelmed servicing new connections and re-authentications from bad clients 
> instead of servicing calls from good clients.  The call queues run dry.
> Exceptions originating in the readers should be able to indicate if the 
> exception is an error or fatal so connections can remain open.
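Schematically, the proposal amounts to letting an IPC-layer exception carry 
whether it is fatal to the connection (hypothetical names below, not the 
attached patches):

{code:java}
import java.io.IOException;

// Sketch: an exception that tells the reader whether the connection must be
// torn down or can stay open so the client's retry reuses it.
public class SelectiveCloseException extends IOException {
  private final boolean fatal;

  public SelectiveCloseException(String message, boolean fatal) {
    super(message);
    this.fatal = fatal;
  }

  /** True if the connection should be closed; false keeps it open. */
  public boolean isFatal() {
    return fatal;
  }
}
{code}

The reader would then send the error response and disconnect only when 
{{isFatal()}} is true, so e.g. a RetriableException from call queue pushback 
leaves the connection open.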



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13975) Allow DistCp to use MultiThreadedMapper

2017-02-09 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859827#comment-15859827
 ] 

Erik Krogen commented on HADOOP-13975:
--

This seems very useful! Thanks for working on this.

Why is there both {{parseThreadsPerMap}} and {{parseNumThreadsPerMap}} in 
{{OptionsParser}}? It seems only one of them is used. Additionally the error 
message is incorrect in both of them, with one referring to {{MAX_MAPS}} and 
one referring to {{NUM_LISTSSTATUS_THREADS}}. 

> Allow DistCp to use MultiThreadedMapper
> ---
>
> Key: HADOOP-13975
> URL: https://issues.apache.org/jira/browse/HADOOP-13975
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zheng Shao
>Assignee: Zheng Shao
>Priority: Minor
> Attachments: HADOOP-distcp-multithreaded-mapper-branch26.1.patch, 
> HADOOP-distcp-multithreaded-mapper-branch26.2.patch, 
> HADOOP-distcp-multithreaded-mapper-branch26.3.patch, 
> HADOOP-distcp-multithreaded-mapper-branch26.4.patch, 
> HADOOP-distcp-multithreaded-mapper-trunk.1.patch, 
> HADOOP-distcp-multithreaded-mapper-trunk.2.patch, 
> HADOOP-distcp-multithreaded-mapper-trunk.3.patch, 
> HADOOP-distcp-multithreaded-mapper-trunk.4.patch
>
>
> Although distcp allows users to control the parallelism via the number of 
> mappers, sometimes it's desirable to run fewer mappers but more threads per 
> mapper.  
> Since distcp is network bound (either by throughput or more frequently by 
> latency of creating connections, opening files, reading/writing files, and 
> closing files), this can make each mapper much more efficient.
> In that way, a lot of resources can be shared, so we can save memory and 
> connections to the NameNode.
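For context, MapReduce already ships a stock wrapper for this pattern; a 
minimal sketch of wiring it up (schematic, not the attached patches; note the 
wrapped mapper must be thread-safe):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultiThreadedCopySketch {
  /** Placeholder copy mapper; must be safe to run from many threads at once. */
  public static class CopyLikeMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // copy one file per input record (elided)
    }
  }

  public static Job configure(Configuration conf) throws IOException {
    Job job = Job.getInstance(conf, "multithreaded copy");
    // Each map task runs a pool of threads that share one JVM, one task slot,
    // and one set of NameNode/DataNode connections.
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, CopyLikeMapper.class);
    MultithreadedMapper.setNumberOfThreads(job, 10);
    return job;
  }
}
{code}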



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14034) Allow ipc layer exceptions to selectively close connections

2017-02-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14034:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.8. Thanks for working on 
this, Daryn.

> Allow ipc layer exceptions to selectively close connections
> ---
>
> Key: HADOOP-14034
> URL: https://issues.apache.org/jira/browse/HADOOP-14034
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14034-branch-2.patch, HADOOP-14034-trunk.patch
>
>
> IPC layer exceptions generated in the readers are translated into fatal 
> errors - resulting in connection closure.  Ex. RetriableExceptions from call 
> queue pushback.
> Always closing the connection degrades performance for all clients since a 
> disconnected client will immediately reconnect on retry.  Readers become 
> overwhelmed servicing new connections and re-authentications from bad clients 
> instead of servicing calls from good clients.  The call queues run dry.
> Exceptions originating in the readers should be able to indicate if the 
> exception is an error or fatal so connections can remain open.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14034) Allow ipc layer exceptions to selectively close connections

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859809#comment-15859809
 ] 

Kihwal Lee commented on HADOOP-14034:
-

It is very nice of you to fix the prior unresolved merge conflict that got 
checked in.

{code:java}
 /**
-<<<<<<< HEAD
- * Process an RPC Request - handle connection setup and decoding of
- * request into a Call
-=======
  * Process one RPC Request from buffer read from socket stream 
  *  - decode rpc in a rpc-Call
  *  - handle out-of-band RPC requests such as the initial connectionContext
@@ -2264,17 +2255,16 @@ private void unwrapPacketAndProcessRpcs(byte[] inBuf)
  * if SASL then SASL has been established and the buf we are passed
  * has been unwrapped from SASL.
  * 
->>>>>>> 3d94da1... HADOOP-11552. Allow handoff on the server side for RPC 
requests. Contributed by Siddharth Seth
  * @param bb - contains the RPC request header and the rpc request
{code}

> Allow ipc layer exceptions to selectively close connections
> ---
>
> Key: HADOOP-14034
> URL: https://issues.apache.org/jira/browse/HADOOP-14034
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14034-branch-2.patch, HADOOP-14034-trunk.patch
>
>
> IPC layer exceptions generated in the readers are translated into fatal 
> errors - resulting in connection closure.  Ex. RetriableExceptions from call 
> queue pushback.
> Always closing the connection degrades performance for all clients since a 
> disconnected client will immediately reconnect on retry.  Readers become 
> overwhelmed servicing new connections and re-authentications from bad clients 
> instead of servicing calls from good clients.  The call queues run dry.
> Exceptions originating in the readers should be able to indicate if the 
> exception is an error or fatal so connections can remain open.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14033) Reduce fair call queue lock contention

2017-02-09 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14033:
-
Status: Patch Available  (was: Open)

> Reduce fair call queue lock contention
> --
>
> Key: HADOOP-14033
> URL: https://issues.apache.org/jira/browse/HADOOP-14033
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14033.patch
>
>
> Under heavy load the call queue may run dry yet clients experience high 
> latency.
> The fcq requires producers and consumers to sync via a shared lock.  Polling 
> consumers hold the lock while scanning all sub-queues.  Consumers are 
> serialized despite the sub-queues being thread-safe blocking queues.  The 
> effect is to cause other producers/consumers to frequently park.
> The lock is unfair, so producers/consumers attempt to barge in on the lock.  
> The outnumbered producers tend to remain blocked for an extended time.  As 
> load increases and the queues fill, the barging consumers drain the queues 
> faster than the producers can fill them.
> Server metrics provide an illusion of healthy throughput, response time, and 
> call queue length due to starvation on the ingress. Often, as the load gets 
> worse, the server looks better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-02-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859790#comment-15859790
 ] 

Daryn Sharp commented on HADOOP-13433:
--

Just saw this jira due to an internal conflict.  We discovered the exact same 
issue a few years ago.  This jira's patch works around the side-effect of the 
root cause: the entire relogin process is not atomic with regard to gssapi or 
spnego authentication.  The jdk issue is technically a bug, but it's only 
triggered by hadoop's unsafe subject manipulation.

I was already intending to release our internal fix within a week.  It should 
negate the need for this patch, but this can still be a safety net.
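For readers following along, a hypothetical illustration of the 
non-atomicity (not the actual UserGroupInformation code): relogin drops and 
re-adds the Subject's private credentials in separate steps, so a concurrent 
GSSAPI/SPNEGO handshake can observe the half-rebuilt state.

{code:java}
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class ReloginRaceSketch {
  // Stands in for the Subject's private credential set (TGT, service tickets).
  private final Set<Object> credentials = new CopyOnWriteArraySet<>();

  public void relogin(Object freshTgt) {
    credentials.clear();        // step 1: logout removes the old TGT
    // A handshake running right here sees no valid TGT, or a stale mix, and
    // can fail with errors like the "BAD TGS SERVER NAME" quoted below.
    credentials.add(freshTgt);  // step 2: login installs the new TGT
  }

  public boolean hasValidCredentials() {
    return !credentials.isEmpty(); // concurrent readers race with relogin()
  }
}
{code}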

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 2.7.4, 2.6.6, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.7.patch, 
> HADOOP-13433-branch-2.7-v1.patch, HADOOP-13433-branch-2.7-v2.patch, 
> HADOOP-13433-branch-2.8.patch, HADOOP-13433-branch-2.8.patch, 
> HADOOP-13433-branch-2.8-v1.patch, HADOOP-13433-branch-2.patch, 
> HADOOP-13433.patch, HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433-v4.patch, HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, 
> HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at 

[jira] [Commented] (HADOOP-14034) Allow ipc layer exceptions to selectively close connections

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859780#comment-15859780
 ] 

Kihwal Lee commented on HADOOP-14034:
-

+1. The patch looks good. We've been running with this change for several 
months. 

> Allow ipc layer exceptions to selectively close connections
> ---
>
> Key: HADOOP-14034
> URL: https://issues.apache.org/jira/browse/HADOOP-14034
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14034-branch-2.patch, HADOOP-14034-trunk.patch
>
>
> IPC layer exceptions generated in the readers are translated into fatal 
> errors - resulting in connection closure.  Ex. RetriableExceptions from call 
> queue pushback.
> Always closing the connection degrades performance for all clients since a 
> disconnected client will immediately reconnect on retry.  Readers become 
> overwhelmed servicing new connections and re-authentications from bad clients 
> instead of servicing calls from good clients.  The call queues run dry.
> Exceptions originating in the readers should be able to indicate if the 
> exception is an error or fatal so connections can remain open.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14032) Reduce fair call queue priority inversion

2017-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859751#comment-15859751
 ] 

Hudson commented on HADOOP-14032:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11225 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11225/])
HADOOP-14032. Reduce fair call queue priority inversion. Contributed by 
(kihwal: rev a0bfb4150464013a618f30c2e38d88fc6de11ad2)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java


> Reduce fair call queue priority inversion
> -
>
> Key: HADOOP-14032
> URL: https://issues.apache.org/jira/browse/HADOOP-14032
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14032.patch
>
>
> The fcq's round robin multiplexer actually rewards abusive users.  Queue 
> consumers scan for a call from the roving multiplexer index to the lowest 
> prio ring before wrapping around to the higher prio rings.
> Let's take an fcq with 4 priority rings.  Multiplexer shares per index are 8, 
> 4, 2, 1.  
> All well-behaved clients are operating in ring 0.  A bad client floods the 
> server and drops to the lowest prio.  Unfortunately the service order gives 8 
> shares to the good clients, followed by 4+2+1=7 shares to the bad client.
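The share arithmetic in the description can be checked with a tiny, 
self-contained example (illustrative only; this is not FairCallQueue's code):

{code}
public class FcqShareExample {
  public static void main(String[] args) {
    // Four priority rings; the multiplexer grants this many reads per ring
    // per cycle.
    int[] shares = {8, 4, 2, 1};
    // All good clients sit in ring 0; the abusive client was demoted to
    // ring 3. Because consumers scan from the current ring down to the
    // lowest ring before wrapping, the shares of the empty rings 1 and 2
    // are also spent draining ring 3.
    int goodShares = shares[0];                          // 8
    int badShares  = shares[1] + shares[2] + shares[3];  // 4 + 2 + 1 = 7
    System.out.println("good=" + goodShares + ", bad=" + badShares);
  }
}
{code}

So the single bad client gets almost as much service as all good clients 
combined, which is the inversion the patch addresses.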



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14032) Reduce fair call queue priority inversion

2017-02-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14032:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   Status: Resolved  (was: Patch Available)

Committed from trunk to branch-2.8. Thanks for the patch, Daryn.

> Reduce fair call queue priority inversion
> -
>
> Key: HADOOP-14032
> URL: https://issues.apache.org/jira/browse/HADOOP-14032
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14032.patch
>
>
> The fcq's round robin multiplexer actually rewards abusive users.  Queue 
> consumers scan for a call from the roving multiplexer index to the lowest 
> prio ring before wrapping around to the higher prio rings.
> Let's take an fcq with 4 priority rings.  Multiplexer shares per index are 8, 
> 4, 2, 1.  
> All well-behaved clients are operating in ring 0.  A bad client floods the 
> server and drops to the lowest prio.  Unfortunately the service order gives 8 
> shares to the good clients, followed by 4+2+1=7 shares to the bad client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14032) Reduce fair call queue priority inversion

2017-02-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859690#comment-15859690
 ] 

Kihwal Lee commented on HADOOP-14032:
-

+1

> Reduce fair call queue priority inversion
> -
>
> Key: HADOOP-14032
> URL: https://issues.apache.org/jira/browse/HADOOP-14032
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14032.patch
>
>
> The fcq's round robin multiplexer actually rewards abusive users.  Queue 
> consumers scan for a call from the roving multiplexer index to the lowest 
> prio ring before wrapping around to the higher prio rings.
> Let's take an fcq with 4 priority rings.  Multiplexer shares per index are 8, 
> 4, 2, 1.  
> All well-behaved clients are operating in ring 0.  A bad client floods the 
> server and drops to the lowest prio.  Unfortunately the service order gives 8 
> shares to the good clients, followed by 4+2+1=7 shares to the bad client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859673#comment-15859673
 ] 

Larry McCay commented on HADOOP-14063:
--

Hi [~ramtinb] - This is a good improvement.
I was aware that this was possible but it seemed like the setting of the 
provider path was lining up properly in practice.

One concern that I have about this is that there is no distinction between 
permissions blocking the interrogation of a provider and the credential not 
existing. I think it would be better to at least log that a search for a 
credential was not possible due to file permissions on a given provider within 
the path. Otherwise, a user may be inclined to add the credential again.

Maybe the file.canRead() check would be a good place to do this?

What do you think?
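To make the suggestion concrete, a rough sketch of a lenient lookup (this is 
not the attached patch; the class below is hypothetical, though 
CredentialProviderFactory and CredentialProvider are the real APIs):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public class LenientCredentialLookup {
  public static char[] lookup(Configuration conf, String alias)
      throws IOException {
    List<CredentialProvider> providers =
        CredentialProviderFactory.getProviders(conf);
    for (CredentialProvider provider : providers) {
      try {
        CredentialProvider.CredentialEntry entry =
            provider.getCredentialEntry(alias);
        if (entry != null) {
          return entry.getCredential();
        }
      } catch (IOException e) {
        // Keystore could not be read (e.g. file permissions): log and try
        // the next provider in the path instead of failing the whole lookup.
        System.err.println("Skipping provider " + provider + ": " + e);
      }
    }
    return null; // not found in any readable provider
  }
}
{code}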

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-02-09 Thread Sameer Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859628#comment-15859628
 ] 

Sameer Choudhary commented on HADOOP-13345:
---

Makes sense. Thanks! For persistence, users would have to implement frequent, 
lossless snapshotting of their metadata store to S3. However, I agree that for 
most users a DynamoDB-based solution should be sufficient.

 

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-02-09 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-14063:

Component/s: security

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-02-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859586#comment-15859586
 ] 

Steve Loughran commented on HADOOP-13345:
-

I should add to Aaron's points:

# the in-memory one really, really is for testing only. 
# it won't be throttling per se; rather, when you get API calls rejected, the 
client will back off. See HADOOP-13904.

I like your thoughts about HBase; there's no obvious reason why this wouldn't 
work (though you need to persist it somehow). For now though, DynamoDB is what 
we target, so we can use something that AWS keeps running. It helps for dev & 
test as we don't need to bring up mini-HBase clusters. 
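For readers following along, the metadata store selection boils down to one 
configuration key (a sketch; the key and implementation class names are taken 
from the HADOOP-13345 feature branch and may change before merge):

{code}
import org.apache.hadoop.conf.Configuration;

public class S3GuardStoreConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // DynamoDB-backed store: the persistent option being targeted.
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
    // In-memory store: really, really for testing only.
    // conf.set("fs.s3a.metadatastore.impl",
    //     "org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore");
  }
}
{code}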

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14066) VersionInfo should be public api

2017-02-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859574#comment-15859574
 ] 

Steve Loughran commented on HADOOP-14066:
-

I'm happy with this, though I do know that Hive overreacts to version 3 
JARs; we've had to add the ability to lie about versions to quiet it down: 
{{-Ddeclared.hadoop.version=2.11}}.

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Priority: Critical
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, this is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.
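A typical client-side use, which is why callers want the class public and 
stable (the shim-selection logic here is a hypothetical sketch; only 
VersionInfo.getVersion() is the real API):

{code}
import org.apache.hadoop.util.VersionInfo;

public class ShimSelector {
  public static String pickShim() {
    // e.g. "2.7.3" or "3.0.0-alpha2"
    String version = VersionInfo.getVersion();
    int major = Integer.parseInt(version.split("\\.")[0]);
    // Dispatch to version-specific shim code, as Hive's shims layer does.
    return (major >= 3) ? "hadoop3-shims" : "hadoop2-shims";
  }
}
{code}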



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859569#comment-15859569
 ] 

Steve Loughran commented on HADOOP-14058:
-

No need to declare a branch; if it's against trunk you can skip the suffix, and 
if it's branch-2 then use -branch-2 as the suffix.

What is important is that you confirm you've tested against an object store: the 
way we do this is to require that the patch submitter say which S3 endpoint they 
ran their tests against.

> Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks
> ---
>
> Key: HADOOP-14058
> URL: https://issues.apache.org/jira/browse/HADOOP-14058
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: s3
> Attachments: HADOOP-14058.001.patch, 
> HADOOP-14058-HADOOP-13345.001.patch
>
>
> In NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks, 
> {code}
>   else if (i == 3) {
> // test both markers
> store.storeEmptyFile(base + "_$folder$");
> store.storeEmptyFile(base + "/dir_$folder$");
> store.storeEmptyFile(base + "/");
> store.storeEmptyFile(base + "/dir/");
>   }
> {code}
> the above test code is not executed. In the following code:
> {code}
> for (int i = 0; i < 3; i++) {
> {code}
> < should be <=.
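For clarity, the one-character fix to the loop bound:

{code}
// With the corrected bound, the i == 3 "both markers" case now executes.
for (int i = 0; i <= 3; i++) {
  ...
}
{code}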



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13475) Adding Append Blob support for WASB

2017-02-09 Thread Raul da Silva Martins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859396#comment-15859396
 ] 

Raul da Silva Martins commented on HADOOP-13475:


Hi [~ste...@apache.org],

What is the current state on this patch?

Thank you!

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Raul da Silva Martins
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch, 
> HADOOP-13475.001.patch, HADOOP-13475.002.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Azure Append Blobs underneath. As owners of a large-scale service who intend 
> to start writing to Append Blobs, we need this support in order to keep using 
> our HDI capabilities.
> This JIRA is filed to implement Azure Append Blob support in WASB.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859370#comment-15859370
 ] 

Fei Hui edited comment on HADOOP-14065 at 2/9/17 11:35 AM:
---

thanks [~drankye] and [~uncleGen]


was (Author: ferhui):
thanks [~drankye]

> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the right modification time 
> rather than 0 in the FileStatus.
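A sketch of the intended fix (assuming the OSS SDK's 
{{ObjectMetadata#getLastModified()}}, which returns a java.util.Date; the 
actual change is in the attached patches):

{code}
else if (objectRepresentsDirectory(key, meta.getContentLength())) {
  // Use the object's metadata time instead of a hard-coded 0.
  return new FileStatus(0, true, 1, 0,
      meta.getLastModified().getTime(), qualifiedPath);
}
{code}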



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14065:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the right modification time 
> rather than 0 in the FileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859370#comment-15859370
 ] 

Fei Hui commented on HADOOP-14065:
--

thanks [~drankye]

> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the right modification time 
> rather than 0 in the FileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859345#comment-15859345
 ] 

Hudson commented on HADOOP-14065:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11224 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11224/])
HADOOP-14065. AliyunOSS: oss directory filestatus should use meta time. 
(kai.zheng: rev a8a594b4c89319bef294534755f0e4ed6198ec88)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java


> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the right modification time 
> rather than 0 in the FileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859299#comment-15859299
 ] 

Kai Zheng commented on HADOOP-14065:


The latest patch LGTM and +1. 

> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the right modification time 
> rather than 0 in the FileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859185#comment-15859185
 ] 

Hadoop QA commented on HADOOP-14065:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14065 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851805/HADOOP-14065.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 67af00aa4fd9 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37b4acf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11601/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11601/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
>