[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480077#comment-16480077
 ] 

Hudson commented on HDFS-13573:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14234/])
HDFS-13573. Javadoc for BlockPlacementPolicyDefault is inaccurate. (yqlin: rev 
f749517cc78fc761cecff21e8b7f65fb719bfca2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -
>
> Key: HDFS-13573
> URL: https://issues.apache.org/jira/browse/HDFS-13573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch
>
>
> Current rule of default block placement policy:
> {quote}The replica placement strategy is that if the writer is on a datanode,
>  the 1st replica is placed on the local machine,
>  otherwise a random datanode. The 2nd replica is placed on a datanode
>  that is on a different rack. The 3rd replica is placed on a datanode
>  which is on a different node of the rack as the second replica.
> {quote}
> *if the writer is on a datanode, the 1st replica is placed on the local 
> machine*: in fact, this can be decided by the HDFS client. The client can 
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be placed 
> on the local datanode. Subsequent replicas will still follow the default 
> block placement policy.
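For readers following along, the placement order being documented can be sketched in plain Java. This is a hypothetical, simplified model for illustration only, not the actual {{BlockPlacementPolicyDefault}} (which selects targets randomly and accounts for load, storage, and network topology); the class and method names below are invented, and the "first eligible candidate" stands in for a random choice:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model of the default replica placement order discussed above.
 * NOT the real BlockPlacementPolicyDefault: the real policy picks nodes
 * randomly and weighs load, storage, and topology. Here the first
 * eligible candidate stands in for "a random datanode".
 */
public class PlacementSketch {

  /** Returns up to three replica targets, given a node-to-rack map. */
  static List<String> chooseReplicas(String writer,
                                     Map<String, String> nodeToRack,
                                     boolean noLocalWrite) {
    List<String> nodes = new ArrayList<>(nodeToRack.keySet());
    List<String> picked = new ArrayList<>();

    // 1st replica: the writer's own node, unless the writer is not a
    // datanode or the client passed NO_LOCAL_WRITE.
    if (nodeToRack.containsKey(writer) && !noLocalWrite) {
      picked.add(writer);
    } else {
      for (String n : nodes) {
        if (!n.equals(writer)) { picked.add(n); break; }
      }
    }

    // 2nd replica: a node on a different rack than the 1st.
    String rack1 = nodeToRack.get(picked.get(0));
    for (String n : nodes) {
      if (!picked.contains(n) && !nodeToRack.get(n).equals(rack1)) {
        picked.add(n);
        break;
      }
    }

    // 3rd replica: a different node on the same rack as the 2nd.
    String rack2 = nodeToRack.get(picked.get(1));
    for (String n : nodes) {
      if (!picked.contains(n) && nodeToRack.get(n).equals(rack2)) {
        picked.add(n);
        break;
      }
    }
    return picked;
  }

  public static void main(String[] args) {
    Map<String, String> racks = new LinkedHashMap<>();
    racks.put("dn1", "rackA");
    racks.put("dn2", "rackA");
    racks.put("dn3", "rackB");
    racks.put("dn4", "rackB");

    // Writer on dn1: local 1st replica, then a rackB node, then its peer.
    System.out.println(chooseReplicas("dn1", racks, false)); // [dn1, dn3, dn4]
    // Same writer with NO_LOCAL_WRITE: the local node is skipped.
    System.out.println(chooseReplicas("dn1", racks, true));  // [dn2, dn3, dn4]
  }
}
```

In a real client the flag is passed as part of the {{CreateFlag}} set given to {{FileSystem#create}}; the sketch only mirrors the decision order stated in the javadoc.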



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480060#comment-16480060
 ] 

Yiqun Lin commented on HDFS-13573:
--

Committed this to trunk. Thanks [~zvenczel] for the contribution.




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480046#comment-16480046
 ] 

Yiqun Lin commented on HDFS-13573:
--

LGTM, +1. Will commit this shortly.




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478996#comment-16478996
 ] 

genericqa commented on HDFS-13573:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13573 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923897/HDFS-13573.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 397988c2b80e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478806#comment-16478806
 ] 

Yiqun Lin commented on HDFS-13573:
--

{quote}
I would suggest not to leave out the scenario when the writer is not on a 
datanode and have the following:
...
{quote}
Change looks good to me. [~zvenczel], feel free to attach the updated patch.




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478801#comment-16478801
 ] 

Zsolt Venczel commented on HDFS-13573:
--

Hi [~linyiqun]!
 Thanks for the suggestions!

One question about the sentence you mentioned:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine otherwise a random datanode
* (By passing the {@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE flag
* the client can request not to put a block replica on the local datanode.
{code}
If I understand you correctly you're suggesting the above to be changed to the 
following:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine by default.
* (By passing the {@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE flag
* the client can request not to put a block replica on the local datanode.
{code}
I would suggest not leaving out the scenario where the writer is not on a 
datanode, and having the following:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine by default
* (By passing the {@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE} flag
* the client can request not to put a block replica on the local datanode.
* Subsequent replicas will still follow default block placement policy.).
* If the writer is not on a datanode, the 1st replica is placed on a random 
node.{code}
What do you think?

Best regards,
Zsolt




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478616#comment-16478616
 ] 

Yiqun Lin commented on HDFS-13573:
--

Thanks [~zvenczel] for working on this and providing the patch.
{quote}the 1st replica is placed on the local machine otherwise a random 
datanode
{quote}
Not being on the local node does not mean it must be on a random node, so this 
is not entirely accurate. I made a minor change based on yours; you can update 
it like:
{noformat}
 * the 1st replica is placed on the local machine by default.
 * (By passing the {@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE} flag
 * the client can request not to put a block replica on the local datanode.
 * Subsequent replicas will still follow default block placement policy.).
{noformat}
Also correct \{@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE to 
{@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE}.




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-16 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477671#comment-16477671
 ] 

Zsolt Venczel commented on HDFS-13573:
--

Unit test failures should be unrelated.

No tests added as no code change was introduced.




[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477662#comment-16477662
 ] 

genericqa commented on HDFS-13573:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13573 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923690/HDFS-13573.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ed2932e5918d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0a22860 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24230/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24230/testReport/ |
| Max. process+thread count | 2877 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console