[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2016-01-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081251#comment-15081251
 ] 

Kihwal Lee commented on HDFS-7163:
--

The build failure is in the libwebhdfs native build. That code lives in contrib 
and only gets built when {{-Drequire.libwebhdfs}} is specified. You might need 
to install some of the build dependencies, such as {{jansson-devel}}. This 
looks like HDFS-8346.
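
For example, on a yum-based build host the missing dependency and the contrib 
build flag would typically be handled along these lines (the package name and 
the {{-Pnative}} profile are common choices, not requirements; adjust for your 
platform):
{noformat}
# jansson headers are needed to compile libwebhdfs
sudo yum install jansson-devel

# libwebhdfs lives in contrib and is only built when explicitly requested
mvn clean package -Pnative -Drequire.libwebhdfs -DskipTests
{noformat}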

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0, 2.7.3
>
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163-branch-2.7.005.patch, 
> HDFS-7163.001.patch, HDFS-7163.002.patch, HDFS-7163.003.patch, 
> HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-29 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074044#comment-15074044
 ] 

Eric Payne commented on HDFS-7163:
--

{quote}
||Vote||Subsystem||Runtime||Comment||
|-1|compile|0m 43s|hadoop-hdfs in branch-2.7 failed with JDK v1.8.0_66.|
|-1|compile|0m 44s|hadoop-hdfs in branch-2.7 failed with JDK v1.7.0_91.|
{quote}
I'm not sure what's wrong with the HDFS pre-commit build. This patch builds for 
me locally.

Thanks again, [~kihwal].

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0, 2.7.3
>
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163-branch-2.7.005.patch, 
> HDFS-7163.001.patch, HDFS-7163.002.patch, HDFS-7163.003.patch, 
> HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-28 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072914#comment-15072914
 ] 

Kihwal Lee commented on HDFS-7163:
--

+1 The patch looks like a valid port. Committing to branch-2.7.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0
>
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163-branch-2.7.005.patch, 
> HDFS-7163.001.patch, HDFS-7163.002.patch, HDFS-7163.003.patch, 
> HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071936#comment-15071936
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
51s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 47s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 101, now 104). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s 
{color} | {color:red} The patch has 4418 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 59s 
{color} | {color:red} The patch has 173 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 42m 42s 
{color} | {color:red} Patch generated 71 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:date2015-12-26 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779554/HDFS-7163-branch-2.7.005.patch
 |
| JIRA Issue | HDFS-7163 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068235#comment-15068235
 ] 

Kihwal Lee commented on HDFS-7163:
--

Multiple precommit builds ran on the same build slave, causing them to run for 
more than 5 hours and fail a bunch of tests.
Hopefully it is less busy today.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read 
> Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068652#comment-15068652
 ] 

Kihwal Lee commented on HDFS-7163:
--

The latest patch looks good and seems to have addressed all the concerns from 
the reviews.
The failed tests pass when run locally.
+1

{noformat}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.389 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.679 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.477 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.388 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.773 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.842 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding

Results :

Tests run: 36, Failures: 0, Errors: 0, Skipped: 0
{noformat}

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read 
> Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068445#comment-15068445
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 7m 32s 
{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 26s 
{color} | {color:red} hadoop-hdfs-project in trunk failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs-project in trunk failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 13s {color} 
| {color:red} hadoop-hdfs-project-jdk1.8.0_66 with JDK v1.8.0_66 generated 4 
new issues (was 91, now 49). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 46s {color} 
| {color:red} hadoop-hdfs-project-jdk1.7.0_91 with JDK v1.7.0_91 generated 39 
new issues (was 14, now 51). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 61). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 41s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66 with JDK 
v1.8.0_66 generated 7 new issues (was 7, now 14). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 6m 11s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91 with JDK 
v1.7.0_91 generated 7 new issues (was 7, now 14). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068674#comment-15068674
 ] 

Hudson commented on HDFS-7163:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9015 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9015/])
HDFS-7163. WebHdfsFileSystem should retry reads according to the (kihwal: rev 
867048c3e4b20ece0039a876def129fa5eb9234f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java


> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read 
> Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068675#comment-15068675
 ] 

Kihwal Lee commented on HDFS-7163:
--

I've committed this to trunk, branch-2 and branch-2.8. [~eepayne], please post 
a 2.7 version.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read 
> Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064131#comment-15064131
 ] 

Kihwal Lee commented on HDFS-7163:
--

The precommit didn't compile the code correctly. It builds fine locally. I will 
kick off the build again.
{noformat}
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java:[903,10]
 error: cannot find symbol
[ERROR]   symbol:   class WebHdfsInputStream
  location: class TestWebHDFS
{noformat}

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, HDFS-7163.005.patch, WebHDFS Read 
> Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064835#comment-15064835
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 58s 
{color} | {color:red} hadoop-hdfs-project-jdk1.8.0_66 with JDK v1.8.0_66 
generated 3 new issues (was 48, now 49). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 33m 0s {color} 
| {color:red} hadoop-hdfs-project-jdk1.7.0_91 with JDK v1.7.0_91 generated 3 
new issues (was 50, now 51). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 61). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 25s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 229m 18s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 46s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 31s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063147#comment-15063147
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 45s 
{color} | {color:red} hadoop-hdfs-project-jdk1.8.0_66 with JDK v1.8.0_66 
generated 3 new issues (was 48, now 49). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 52s 
{color} | {color:red} hadoop-hdfs-project-jdk1.7.0_91 with JDK v1.7.0_91 
generated 3 new issues (was 50, now 51). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 61). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 2s {color} | 
{color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15053396#comment-15053396
 ] 

Daryn Sharp commented on HDFS-7163:
---

Regarding use of client-side block locations, I did that internally for 0.23 
and it wasn't as easy as it seemed.  The excluded nodes support is actually 
much cleaner for the client.  The NN also issues the redirect based on the 
block location of the offset.
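
(For reference, the NN-issued redirect mentioned above looks roughly like the 
exchange below; host names, ports, and the trailing query parameters are 
illustrative only.)
{noformat}
# client asks the NameNode to open the file at a given offset
GET http://nn.example.com:50070/webhdfs/v1/user/alice/data.bin?op=OPEN&offset=134217728

# the NameNode answers with a 307 that points at a DataNode holding the block
# containing that offset
HTTP/1.1 307 TEMPORARY_REDIRECT
Location: http://dn17.example.com:50075/webhdfs/v1/user/alice/data.bin?op=OPEN&offset=134217728&...
{noformat}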

Code:
# We may want to defer the open until a read occurs.  Otherwise, when 
immediately seeking (e.g. for splits), the file will unnecessarily start 
streaming and consuming bandwidth, the client will close the connection, and 
then reopen at the seek offset.
# {{runnerState}} is initialized as CLOSED (terminal state) although reassigned 
as DISCONNECTED in the ctor.  Feels like it should just be initialized as 
DISCONNECTED.
# If {{read(...)}} throws an IOE due to an explicitly closed stream, will 
retries occur?
# In {{connect(URL)}}, {{conn = connection}}; {{conn}} is re-assigned if null 
and then returned, but {{connection}} remains null.  This seems confusing and 
wrong at first glance.  Calling it {{cachedConnection}} would clarify its 
purpose.
# In {{getResponse}}:
##  Should {{initializeInputStream}} be unconditionally invoked inside the 
prior null check on connection?  Ie.  Is there ever a case when {{in}} 
shouldn't be initialized when a new connection is made?
## I think the logic should be {{if (conn != cachedConnection) { 
cachedConnection = conn; in = initializeInputStream(cachedConnection) } }}
## Should use {{URL#getAuthority}} instead of explicitly extracting and joining 
the host and port.
# {{ReadRunner#initializeInputStream}} has a misspelled "performznt".
# In {{closeInputStream}}, I'd use {{IOUtils.closeStream}} so that an exception 
from the close can't prevent the stream state from being updated (see the 
sketch after this list).
# In general the state management isn't clear.  DISCONNECTED vs SEEK appear to 
be the same, with the exception that SEEK allows the connection to be reopened. 
 When errors occur and the stream is DISCONNECTED, are you sure it will 
retry/recover in all cases?
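
A minimal sketch of what points 2, 4, 5 and 7 suggest; the class, field, and 
method names below are illustrative stand-ins, not the patch's actual 
{{ReadRunner}} code:
{code}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;

import org.apache.hadoop.io.IOUtils;

// Illustrative only -- not the real ReadRunner from the patch.
class ReadRunnerSketch {
  // Point 2: DISCONNECTED, SEEK and CLOSED are named in the review; OPEN is a
  // placeholder. Start in DISCONNECTED rather than the terminal CLOSED state.
  enum State { DISCONNECTED, OPEN, SEEK, CLOSED }

  private State runnerState = State.DISCONNECTED;
  private HttpURLConnection cachedConnection;  // point 4: clearer than "connection"
  private InputStream in;

  // Point 5: only re-initialize the input stream when a new connection was made.
  void getResponse(HttpURLConnection conn) throws IOException {
    if (conn != cachedConnection) {
      cachedConnection = conn;
      in = initializeInputStream(cachedConnection);
    }
    runnerState = State.OPEN;
  }

  // Point 7: IOUtils.closeStream swallows exceptions from close(), so the
  // state update below runs even if the underlying close fails.
  void closeInputStream(State nextState) {
    IOUtils.closeStream(in);
    in = null;
    runnerState = nextState;
  }

  private InputStream initializeInputStream(HttpURLConnection conn)
      throws IOException {
    return conn.getInputStream();
  }
}
{code}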

Tests:
# In {{testWebHdfsReadRetries}}
## A 5m timeout seems overly generous for something that will hopefully fail 
much faster if there's a problem.
## Why the 5s safemode extension?  Seems like it will unnecessarily slow down 
the test?
## The healthy check on dfs appears redundant since {{cluster#waitActive()}} 
already checked.
#  The client shouldn't just give up and do nothing on InvalidToken.  It's 
supposed to try to get another token and retry the read.  It's unclear whether 
that actually happens, or whether the position is retained correctly.
# May consider more mockito verifies.


> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-12-01 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034375#comment-15034375
 ] 

Eric Payne commented on HDFS-7163:
--

[~wheat9] and [~daryn],
Did my comments 
[above|https://issues.apache.org/jira/browse/HDFS-7163?focusedCommentId=15019039=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15019039]
 make sense?
{quote}
In this patch, if the DN that is being read from goes down, WebHDFS will put 
that DN into the client's URL exclude list before querying the NN again for 
another DN. The only time the same DN is reused is if a seek has occurred.
bq. An alternative approach is to have WebHDFS (1) expose a GET_BLOCK call 
where the DN returns the block directly, and (2) be a smarter client that 
retries based on block locations.
Although this may be a more elegant solution, I think that could be done as 
part of a separate JIRA, given that we can take advantage of the exclude list 
functionality as I mentioned above.
{quote}

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-24 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15025288#comment-15025288
 ] 

Eric Payne commented on HDFS-7163:
--

Although the following tests are listed in the above {{Failed unit tests}} 
section, they all passed for me in my local environment:
{code}
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
hadoop.hdfs.server.datanode.TestDataNodeMetrics
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
hadoop.hdfs.server.namenode.TestDecommissioningStatus
hadoop.hdfs.shortcircuit.TestShortCircuitCache
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
hadoop.hdfs.TestEncryptionZones
hadoop.hdfs.TestReadStripedFileWithDecoding
hadoop.hdfs.TestReplaceDatanodeOnFailure
hadoop.hdfs.web.TestWebHDFS
{code}

And this one failed both with and without my patch:
{code}
hadoop.security.TestPermission
{code}

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-24 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15025292#comment-15025292
 ] 

Eric Payne commented on HDFS-7163:
--

It looks like {{TestPermission#testBackwardCompatibility}} is broken by 
HADOOP-12294, as documented 
[here|https://issues.apache.org/jira/browse/HDFS-9451?focusedCommentId=15023926=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15023926]

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.004.patch, HDFS-7163-branch-2.7.003.patch, 
> HDFS-7163-branch-2.7.004.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, HDFS-7163.004.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15024999#comment-15024999
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 35s {color} 
| {color:red} hadoop-hdfs-project-jdk1.8.0_66 with JDK v1.8.0_66 generated 3 
new issues (was 49, now 50). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 18s 
{color} | {color:red} hadoop-hdfs-project-jdk1.7.0_85 with JDK v1.7.0_85 
generated 3 new issues (was 51, now 52). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 59). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 11s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_85. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-23 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15022684#comment-15022684
 ] 

Eric Payne commented on HDFS-7163:
--

{quote}
In this patch, if the DN that is being read from goes down, WebHDFS will put 
that DN into the client's URL exclude list before querying the NN again for 
another DN. The only time the same DN is reused is if a seek has occurred.
{quote}
[~wheat9], I was wrong about one thing. In the current patch, a failed read 
does not put the current node into the exclude list, so when the client queries 
the NN again, the NN could give it the same node back. I will put up a new 
patch soon addressing this issue.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15019039#comment-15019039
 ] 

Eric Payne commented on HDFS-7163:
--

[~wheat9], thank you for your review and comments on this feature.

bq. I think retrying only on the data node is problematic as the retry might 
have little value when the DN goes down.

In this patch, if the DN that is being read from goes down, WebHDFS will put 
that DN into the client's URL exclude list before querying the NN again for 
another DN. The only time the same DN is reused is if a seek has occurred.

bq. An alternative approach is to have WebHDFS (1) expose a GET_BLOCK call 
where the DN returns the block directly, and (2) be a smarter client that 
retries based on block locations.

Although this may be a more elegant solution, I think that could be done as 
part of a separate JIRA, given that we can take advantage of the exclude list 
functionality as I mentioned above.
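
For readers following along, here is a deliberately simplified, hypothetical 
sketch of that retry-with-exclude-list flow; every name in it is a stand-in, 
not the actual {{WebHdfsFileSystem}} patch code:
{code}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only -- not the HDFS-7163 patch.
class ExcludeListReadSketch {

  /** Stand-in for asking the NN for a datanode, honoring an exclude list. */
  interface DatanodeLocator {
    String locate(String path, Set<String> excludedDatanodes) throws IOException;
  }

  /** Stand-in for reading bytes from the chosen datanode over WebHDFS. */
  interface RangeReader {
    int read(String datanode, String path, long offset, byte[] buf) throws IOException;
  }

  static int readWithRetries(DatanodeLocator locator, RangeReader reader,
      String path, long offset, byte[] buf, int maxAttempts) throws IOException {
    Set<String> excluded = new HashSet<>();
    IOException lastFailure = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      // Ask the NN again each time, but never offer it an already-failed DN.
      String datanode = locator.locate(path, excluded);
      try {
        return reader.read(datanode, path, offset, buf);
      } catch (IOException e) {
        excluded.add(datanode);  // don't get redirected back to the dead DN
        lastFailure = e;
      }
    }
    throw lastFailure != null ? lastFailure
        : new IOException("no read attempted: maxAttempts=" + maxAttempts);
  }
}
{code}
The essential behavior is that, absent a seek, a datanode that has already 
failed a read is not handed back by the NN on the next attempt.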


> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014779#comment-15014779
 ] 

Haohui Mai commented on HDFS-7163:
--

I think retrying only on the data node is problematic as the retry might have 
little value when the DN goes down.

An alternative approach is to have WebHDFS (1) expose a {{GET_BLOCK}} call 
where the DN returns the block directly, and (2) be a smarter client that 
retries based on block locations.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.





[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014632#comment-15014632
 ] 

Eric Payne commented on HDFS-7163:
--

Hi [~wheat9] and [~daryn]. I wonder if either of you know when you might have a 
chance to review this. I would really appreciate your feedback. Thanks.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002873#comment-15002873
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
16s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 101, now 102). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s 
{color} | {color:red} The patch has 2114 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 56s 
{color} | {color:red} The patch has 95 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 26s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 47m 44s 
{color} | {color:red} Patch generated 72 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 205m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002422#comment-15002422
 ] 

Haohui Mai commented on HDFS-7163:
--

Sure. Will look into it next week.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but not reads. Therefore, if a 
> connection goes down while data is being read, the read will fail and the 
> read will have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14994725#comment-14994725
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 59). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 48m 41s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771091/HDFS-7163.003.patch |
| JIRA Issue | HDFS-7163 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992790#comment-14992790
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 7 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 65). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 14s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 1 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 4s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getResponse(HttpURLConnection)
 invokes inefficient new Integer(int) constructor; use Integer.valueOf(int) 
instead  At WebHdfsFileSystem.java:constructor; use Integer.valueOf(int) 
instead  At WebHdfsFileSystem.java:[line 1781] |
| JDK v1.8.0_66 Failed junit tests | 
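
As an aside on the FindBugs item above: the boxed-integer fix it asks for is 
the usual one. A tiny, standalone illustration (the surrounding 
{{ReadRunner}} code is not reproduced here):

{code:java}
public class BoxingExample {
  public static void main(String[] args) {
    int responseCode = 200;
    // Flagged by FindBugs: the Integer constructor always allocates.
    Integer viaConstructor = new Integer(responseCode);
    // Preferred: valueOf() can return a cached instance for small values.
    Integer viaValueOf = Integer.valueOf(responseCode);
    System.out.println(viaConstructor + " " + viaValueOf);
  }
}
{code}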

[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979365#comment-14979365
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 7 new checkstyle issues in 
hadoop-hdfs-project (total was 58, now 65). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 1 new 
FindBugs issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 45s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_60 with 
JDK v1.8.0_60 has problems. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 6m 15s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_79 with 
JDK v1.7.0_79 has problems. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 5s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason ||