[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580332#comment-16580332
 ] 

Hudson commented on HDFS-13788:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14769 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14769/])
HDFS-13788. Update EC documentation about rack fault tolerance. (xiao: rev 
cede33997f7ab09fc046017508b680e282289ce3)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md


> Update EC documentation about rack fault tolerance
> --
>
> Key: HDFS-13788
> URL: https://issues.apache.org/jira/browse/HDFS-13788
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13788.001.patch, HDFS-13788.002.patch
>
>
> From 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html:
> {quote}
> For rack fault-tolerance, it is also important to have at least as many racks 
> as the configured EC stripe width. For EC policy RS (6,3), this means 
> minimally 9 racks, and ideally 10 or 11 to handle planned and unplanned 
> outages. For clusters with fewer racks than the stripe width, HDFS cannot 
> maintain rack fault-tolerance, but will still attempt to spread a striped 
> file across multiple nodes to preserve node-level fault-tolerance.
> {quote}
> The theoretical minimum is 3 racks, and ideally 9 or more, so the document 
> should be updated.
> (I didn't check timestamps, but this is probably because 
> {{BlockPlacementPolicyRackFaultTolerant}} wasn't completely done when 
> HDFS-9088 introduced this doc. Later, examples were also added in 
> {{TestErasureCodingMultipleRacks}} to test this explicitly.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580276#comment-16580276
 ] 

Xiao Chen commented on HDFS-13788:
--

+1




[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579790#comment-16579790
 ] 

genericqa commented on HDFS-13788:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 40m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 41s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13788 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935531/HDFS-13788.002.patch |
| Optional Tests | asflicense mvnsite |
| uname | Linux 8cba93d3b207 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d1830d8 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24773/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-14 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579720#comment-16579720
 ] 

Kitti Nanasi commented on HDFS-13788:
-

Thanks for the comment, [~xiaochen]! I fixed the description in patch v002.




[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-13 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578667#comment-16578667
 ] 

Xiao Chen commented on HDFS-13788:
--

Thanks [~knanasi] for the patch and [~zvenczel] for the review.
bq. For rack fault-tolerance, it is also important to have at least as many 
racks as the configured number of EC parity cells.
This is technically not correct. In the case of RS(3,2), having 2 racks is not 
safe.
I suggest we word it something like: ... to have enough racks, so that on 
average, each rack holds no more blocks than the number of EC parity blocks. A 
formula to calculate this would be (data blocks + parity blocks) / parity 
blocks, rounded up.

Then in the 6,3 example, we add the example calculation: ... minimally 3 racks 
(calculated by (6 + 3) / 3 = 3) ...
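The suggested formula can be sketched as a short Python helper (illustrative only; the function name is hypothetical and not part of HDFS):

```python
import math

def min_racks(data_blocks: int, parity_blocks: int) -> int:
    """Minimum number of racks so that, on average, each rack holds
    no more blocks than the number of EC parity blocks:
    ceil((data + parity) / parity)."""
    return math.ceil((data_blocks + parity_blocks) / parity_blocks)

# RS(6,3): (6 + 3) / 3 = 3 racks minimum
print(min_racks(6, 3))   # 3
# RS(3,2): (3 + 2) / 2 = 2.5, rounded up to 3 racks
print(min_racks(3, 2))   # 3
# RS(10,4): (10 + 4) / 4 = 3.5, rounded up to 4 racks
print(min_racks(10, 4))  # 4
```

This also shows why 2 racks are unsafe for RS(3,2): with only 2 racks, one rack would hold at least 3 of the 5 blocks, exceeding the 2-block parity tolerance.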


It would be great if we could add a note at the end as well, after:
bq. ...will still attempt to spread a striped file across multiple nodes to 
preserve node-level fault-tolerance.
For this reason, it is recommended to set up racks with a similar number of 
DataNodes.




[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-13 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578168#comment-16578168
 ] 

Zsolt Venczel commented on HDFS-13788:
--

Thanks [~xiaochen] for reporting this issue and thanks [~knanasi] for working 
on the patch.

The updated documentation seems to be fine. +1 (non-binding) from me.




[jira] [Commented] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578085#comment-16578085
 ] 

genericqa commented on HDFS-13788:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 40m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 58s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13788 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935341/HDFS-13788.001.patch |
| Optional Tests | asflicense mvnsite |
| uname | Linux 467c3038cda3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 475bff6 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24759/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.


