[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757826#comment-16757826
 ] 

Hudson commented on HDFS-14187:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15863 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15863/])
HDFS-14187. Make warning message more clear when there are not enough (weichiu: 
rev 0ab7fc92009fec2f0ab341f3d878e1b8864b8ea9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> Make warning message more clear when there are not enough data nodes for EC 
> write
> ---------------------------------------------------------------------------
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that recommends running the 
> hdfs ec -verifyCluster command to verify the cluster setup against the EC 
> policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757827#comment-16757827
 ] 

Kitti Nanasi commented on HDFS-14187:
-

Thanks for reviewing and committing it [~jojochuang]!







[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757780#comment-16757780
 ] 

Wei-Chiu Chuang commented on HDFS-14187:


+1







[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751317#comment-16751317
 ] 

Hadoop QA commented on HDFS-14187:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956162/HDFS-14187.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bc9d7604da40 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3d8265 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26042/testReport/ |
| Max. process+thread count | 332 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26042/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.


