[jira] [Commented] (HDFS-12080) Ozone: Fix UT failure in TestOzoneConfigurationFields

2017-07-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075965#comment-16075965
 ] 

Anu Engineer commented on HDFS-12080:
-

I have asked for a force build for this patch.

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20170/pipeline


> Ozone: Fix UT failure in TestOzoneConfigurationFields
> -
>
> Key: HDFS-12080
> URL: https://issues.apache.org/jira/browse/HDFS-12080
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-12080-HDFS-7240.001.patch
>
>
> HDFS-12023 added a test case {{TestOzoneConfigurationFields}} to make sure 
> ozone configuration properties are fully documented in ozone-default.xml. This 
> is currently failing because:
> 1. ozone-default.xml has one property that is not used anywhere
> {code}
>   ozone.scm.internal.bind.host
> {code}
> 2. Some cblock properties are missing in ozone-default.xml
> {code}
>   dfs.cblock.scm.ipaddress
>   dfs.cblock.scm.port
>   dfs.cblock.jscsi-address
>   dfs.cblock.service.rpc-bind-host
>   dfs.cblock.jscsi.rpc-bind-host
> {code}
> This needs to be fixed.
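
For reference, a sketch of the kind of stanza ozone-default.xml needs for each missing key (the value and description below are assumptions for illustration, not taken from the patch):
{code}
<property>
  <name>dfs.cblock.scm.ipaddress</name>
  <value>127.0.0.1</value>
  <description>IP address of the SCM that cblock talks to.
    (Example default and wording assumed for illustration.)</description>
</property>
{code}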






[jira] [Updated] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12076:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~vagarychen] Thanks for the contribution. I have committed this to the feature 
branch.

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12076-HDFS-7240.001.patch
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. Where appropriate, we should return more specific error codes.
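
As a rough sketch of the kind of mapping being asked for (the enum values and exception types below are illustrative stand-ins, not the actual SCM/KSM code):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch only: translate known failure modes into specific result codes
// instead of a blanket FAILED_INTERNAL_ERROR.
class ErrorCodeSketch {
  enum Result { KEY_NOT_FOUND, IO_EXCEPTION, FAILED_INTERNAL_ERROR }

  static Result toResult(Exception e) {
    if (e instanceof FileNotFoundException) {
      return Result.KEY_NOT_FOUND;        // specific, actionable code
    } else if (e instanceof IOException) {
      return Result.IO_EXCEPTION;         // broad but still meaningful
    }
    return Result.FAILED_INTERNAL_ERROR;  // true last resort only
  }
}
{code}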






[jira] [Updated] (HDFS-9388) Refactor decommission related code to support maintenance state for datanodes

2017-07-05 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9388:
-
Attachment: HDFS-9388.02.patch

Thanks for the review [~mingma]. Attached v02 patch addressing the following 
comments. Can you please take a look at the latest patch?
bq. Configuration keys DFS_NAMENODE_DECOMMISSION_* only mentioned decommission 
in hdfs-default.xml. Better to use general term like admin, or include 
maintenance.
Updated hdfs-default.xml to mention maintenance for these config params. But I 
have not changed the config key strings yet; renaming them requires deprecating 
the current keys and introducing new ones. Would you be OK with taking this 
deprecation up in a separate jira?
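
For what it's worth, a minimal sketch of how such a rename could be wired through Hadoop's configuration deprecation mechanism (the new key name below is a made-up example):
{code}
import org.apache.hadoop.conf.Configuration;

public class DeprecationSketch {
  static {
    // Sketch only: "dfs.namenode.admin.monitor.interval" is hypothetical.
    Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
        new Configuration.DeprecationDelta(
            "dfs.namenode.decommission.interval",    // current key
            "dfs.namenode.admin.monitor.interval")   // assumed new key
    });
  }
}
{code}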

bq. Comments in functions like handleInsufficientlyStored and 
processBlocksInternal refer to decommission only; would be useful to update the 
comments.
Updated the comments to mention maintenance as well.

bq. The checkstyle and whitespace might not be related to the change. Still it 
will be nice to fix them if it isn't too much effort.
Fixed the checkstyle and a bunch of whitespace issues in the new file. Will 
watch for new ones in the next patch and fix them if needed.


> Refactor decommission related code to support maintenance state for datanodes
> -
>
> Key: HDFS-9388
> URL: https://issues.apache.org/jira/browse/HDFS-9388
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9388.01.patch, HDFS-9388.02.patch
>
>
> Lots of code can be shared between the existing decommission functionality 
> and to-be-added maintenance state support for datanodes. To make it easier to 
> add maintenance state support, let us first modify the existing code to make 
> it more general.






[jira] [Commented] (HDFS-11975) Provide a system-default EC policy

2017-07-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075948#comment-16075948
 ] 

Kai Zheng commented on HDFS-11975:
--

Great that you figured out the root cause. I'll review the updated patch 
later, and hopefully we can get it in soon.

> Provide a system-default EC policy
> --
>
> Key: HDFS-11975
> URL: https://issues.apache.org/jira/browse/HDFS-11975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: luhuichun
> Attachments: HDFS-11975-001.patch, HDFS-11975-002.patch, 
> HDFS-11975-003.patch, HDFS-11975-004.patch, HDFS-11975-005.patch, 
> HDFS-11975-006.patch
>
>
> From the usability point of view, it'd be nice to be able to specify a 
> system-wide EC policy, e.g., in {{hdfs-site.xml}}. Most users / admins / 
> downstream projects should not need to know the tradeoffs of an EC policy, 
> since that requires knowledge of EC, the actual physical topology of the 
> clusters, and many other factors (network, cluster size, etc.).
>  
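
A sketch of what such a setting could look like (the key name and default below are assumptions for illustration, not necessarily what the patch adds):
{code}
<property>
  <name>dfs.namenode.ec.system.default.policy</name>
  <value>RS-6-3-64k</value>
  <description>Cluster-wide default erasure coding policy, used when no
    policy is named explicitly. Key name assumed for illustration.</description>
</property>
{code}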






[jira] [Updated] (HDFS-11975) Provide a system-default EC policy

2017-07-05 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun updated HDFS-11975:
-
Attachment: HDFS-11975-006.patch

We found that we can make ecPolicyName an optional field in 
erasurecoding.proto and remove the null check in 
ClientNamenodeProtocolTranslatorPB and 
ClientNamenodeProtocolServerSideTranslatorPB. This avoids passing the constant 
string, which was not a good solution.
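
A minimal sketch of the protobuf change being described (the message shape and field numbers are illustrative, not copied from the patch):
{code}
// erasurecoding.proto (sketch): making ecPolicyName optional lets the
// client omit it, so the server can fall back to the system default
// instead of the client passing a constant policy-name string.
message SetErasureCodingPolicyRequestProto {
  required string src = 1;
  optional string ecPolicyName = 2;  // field number assumed
}
{code}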

> Provide a system-default EC policy
> --
>
> Key: HDFS-11975
> URL: https://issues.apache.org/jira/browse/HDFS-11975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: luhuichun
> Attachments: HDFS-11975-001.patch, HDFS-11975-002.patch, 
> HDFS-11975-003.patch, HDFS-11975-004.patch, HDFS-11975-005.patch, 
> HDFS-11975-006.patch
>
>
> From the usability point of view, it'd be nice to be able to specify a 
> system-wide EC policy, e.g., in {{hdfs-site.xml}}. Most users / admins / 
> downstream projects should not need to know the tradeoffs of an EC policy, 
> since that requires knowledge of EC, the actual physical topology of the 
> clusters, and many other factors (network, cluster size, etc.).
>  






[jira] [Comment Edited] (HDFS-12085) Reconfigure namenode interval fails if the interval was set with time unit

2017-07-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075894#comment-16075894
 ] 

Weiwei Yang edited comment on HDFS-12085 at 7/6/17 4:02 AM:


Hi [~xiaobingo], can you please take a look at this issue? It is related to 
HDFS-1477. Thanks a lot.


was (Author: cheersyang):
Hi [~xiaobingo], can you please take a look at this issue?

> Reconfigure namenode interval fails if the interval was set with time unit
> --
>
> Key: HDFS-12085
> URL: https://issues.apache.org/jira/browse/HDFS-12085
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12085.001.patch
>
>
> It fails when I set the duration with a time unit, e.g. 5s; the error is:
> {noformat}
> Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 
> 08:14:18 PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
> FAILED: Change property dfs.heartbeat.interval
>   From: "3s"
>   To: "5s"
>   Error: For input string: "5s".
> {noformat}
> Time unit support was added via HDFS-9847.
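
The error suggests the reconfiguration path parses the raw value as a plain long. A minimal sketch of unit-aware parsing with Hadoop's Configuration API (the surrounding reconfiguration plumbing is omitted):
{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class TimeDurationSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, "5s");
    // getTimeDuration() accepts suffixed values like "5s" or "3000ms",
    // unlike Long.parseLong("5s"), which throws NumberFormatException.
    long seconds = conf.getTimeDuration(
        DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
        DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT, TimeUnit.SECONDS);
    System.out.println("heartbeat interval = " + seconds + "s");
  }
}
{code}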






[jira] [Commented] (HDFS-12085) Reconfigure namenode interval fails if the interval was set with time unit

2017-07-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075894#comment-16075894
 ] 

Weiwei Yang commented on HDFS-12085:


Hi [~xiaobingo], can you please take a look at this issue?

> Reconfigure namenode interval fails if the interval was set with time unit
> --
>
> Key: HDFS-12085
> URL: https://issues.apache.org/jira/browse/HDFS-12085
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12085.001.patch
>
>
> It fails when I set the duration with a time unit, e.g. 5s; the error is:
> {noformat}
> Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 
> 08:14:18 PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
> FAILED: Change property dfs.heartbeat.interval
>   From: "3s"
>   To: "5s"
>   Error: For input string: "5s".
> {noformat}
> Time unit support was added via HDFS-9847.






[jira] [Comment Edited] (HDFS-12088) Remove a user defined EC Policy, the policy is not removed from the userPolicies map

2017-07-05 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075771#comment-16075771
 ] 

lufei edited comment on HDFS-12088 at 7/6/17 1:40 AM:
--

I'm sorry. Removing an EC policy from the ECPM on a "remove" operation is 
wrong; it would lead to the problems you mentioned above.
But the "remove" operation may still have some problems. My operations are as 
follows:

[root@master lufei]# *hdfs ec -addPolicies -policyFile 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template*
 
2017-07-06 09:10:44,007 INFO util.ECPolicyLoader: Loading EC policy file 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template
Add ErasureCodingPolicy XOR-2-1-128k succeed.
Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is Codec 
name RS-legacy is not supported
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k
[root@master lufei]# *hdfs ec -removePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis removed
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
XOR-2-1-64k
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k

As my operations above show, I am confused by the current implementation too. 
In the current implementation, a "remove" and a "disable" have the same effect. 
A removed EC policy should not be recoverable by an "enable" operation, but 
only by an "add" operation.



was (Author: figo):
I'm sorry. Removing an EC policy from the ECPM on a "remove" operation is 
wrong; it would lead to the problems you mentioned above.
But the "remove" operation may still have some problems. My operations are as 
follows:

+[root@master lufei]# *hdfs ec -addPolicies -policyFile 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template*
 
2017-07-06 09:10:44,007 INFO util.ECPolicyLoader: Loading EC policy file 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template
Add ErasureCodingPolicy XOR-2-1-128k succeed.
Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is Codec 
name RS-legacy is not supported
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k
[root@master lufei]# *hdfs ec -removePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis removed
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
XOR-2-1-64k
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k+

As my operations above show, I am confused by the current implementation too. 
In the current implementation, a "remove" and a "disable" have the same effect. 
A removed EC policy should not be recoverable by an "enable" operation, but 
only by an "add" operation.


> Remove a user defined EC Policy, the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can bring it back simply 
> by enabling the same EC policy again.
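
A minimal sketch of the behavior the description asks for (map names follow the description; the policy type and surrounding manager class are placeholders):
{code}
import java.util.HashMap;
import java.util.Map;

// Sketch only: "remove" must drop the policy from both maps; otherwise a
// later "enable" can resurrect it from userPolicies.
class PolicyMapsSketch {
  private final Map<String, Object> userPolicies = new HashMap<>();
  private final Map<String, Object> enabledPolicies = new HashMap<>();

  void removePolicy(String name) {
    enabledPolicies.remove(name);  // what effectively happens today
    userPolicies.remove(name);     // the missing step this issue requests
  }
}
{code}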






[jira] [Commented] (HDFS-12088) Remove a user defined EC Policy, the policy is not removed from the userPolicies map

2017-07-05 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075771#comment-16075771
 ] 

lufei commented on HDFS-12088:
--

I'm sorry. Removing an EC policy from the ECPM on a "remove" operation is 
wrong; it would lead to the problems you mentioned above.
But the "remove" operation may still have some problems. My operations are as 
follows:

+[root@master lufei]# *hdfs ec -addPolicies -policyFile 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template*
 
2017-07-06 09:10:44,007 INFO util.ECPolicyLoader: Loading EC policy file 
/home/lufei/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop/user_ec_policies.xml.template
Add ErasureCodingPolicy XOR-2-1-128k succeed.
Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is Codec 
name RS-legacy is not supported
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k
[root@master lufei]# *hdfs ec -removePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis removed
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
XOR-2-1-64k
[root@master lufei]# *hdfs ec -enablePolicy -policy XOR-2-1-128k*
Erasure coding policy XOR-2-1-128kis enabled
[root@master lufei]# *hdfs ec -listPolicies*
Erasure Coding Policies:
RS-LEGACY-6-3-64k
{color:#d04437}XOR-2-1-128k{color}
XOR-2-1-64k+

As my operations above show, I am confused by the current implementation too. 
In the current implementation, a "remove" and a "disable" have the same effect. 
A removed EC policy should not be recoverable by an "enable" operation, but 
only by an "add" operation.


> Remove a user defined EC Policy, the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can bring it back simply 
> by enabling the same EC policy again.






[jira] [Commented] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075670#comment-16075670
 ] 

Hadoop QA commented on HDFS-12076:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12076 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875799/HDFS-12076-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ee15a01b18a6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 62c0cc7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20165/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20165/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20165/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Review all cases where we are 

[jira] [Commented] (HDFS-12013) libhdfs++: read with offset at EOF should return 0 bytes instead of error

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075592#comment-16075592
 ] 

Hadoop QA commented on HDFS-12013:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
50s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 2s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
20s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875820/HDFS-12013.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 6d6eff0cee39 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20167/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20167/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: read with offset at EOF should return 0 bytes instead of error
> -
>
> Key: HDFS-12013
> URL: https://issues.apache.org/jira/browse/HDFS-12013
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>Priority: Critical
> Attachments: HDFS-12013.HDFS-8707.000.patch
>
>
> The current behavior is that when you read from offset == file_length, it 
> throws the error:
> "AsyncPreadSome: trying to begin a read past the EOF"
> But read with offset at 

[jira] [Updated] (HDFS-12086) Ozone: Add the unit test for KSMMetrics

2017-07-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12086:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~linyiqun] Thank you for the contribution. I have committed this to the 
feature branch.

> Ozone: Add the unit test for KSMMetrics
> ---
>
> Key: HDFS-12086
> URL: https://issues.apache.org/jira/browse/HDFS-12086
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12086-HDFS-7240.001.patch, 
> HDFS-12086-HDFS-7240.002.patch
>
>
> Currently the unit test for KSMMetrics is missing, and some metric names are 
> inconsistent with the documentation (see the sketch below):
> * numVolumeModifies should be numVolumeUpdates
> * numBucketModifies should be numBucketUpdates
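
A minimal sketch of such a test using Hadoop's metrics test helpers (the record name, counter name, and setup are assumptions for illustration):
{code}
import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
import org.junit.Test;

public class TestKSMMetricsSketch {
  @Test
  public void testVolumeUpdateCounter() throws Exception {
    // Hypothetical: drive one volume-update operation against a test
    // cluster here, then check the counter on the "KSMMetrics" record.
    assertCounter("NumVolumeUpdates", 1L, getMetrics("KSMMetrics"));
  }
}
{code}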






[jira] [Updated] (HDFS-12092) VolumeScanner exits when block metadata file is corrupted on datanode.

2017-07-05 Thread Ashwin Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Ramesh updated HDFS-12092:
-
Description: 
Restarted a datanode,  corrupted the metafile for blk_1073741825 with something 
like echo '' > blk_1073741825_1001.meta, and datanode logs reveal that 
the VolumeScanner exits due to an illegal argument exception. Here is the 
relevant trace: 
--
{code}
2017-07-05 22:03:41,878 [VolumeScannerThread()] DEBUG datanode.VolumeScanner: 
start scanning block BP-955735389-###-1494002319684:blk_1073741825_1001
2017-07-05 22:03:41,879 [VolumeScannerThread()] ERROR datanode.VolumeScanner: 
VolumeScanner() exiting because of exception 
java.lang.IllegalArgumentException: id=122 out of range [0, 5)
at 
org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:67)
at 
org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:123)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:178)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:142)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:156)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1022)
at 
org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.getLastChecksumAndDataLen(FinalizedReplica.java:104)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:259)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:484)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:614)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:699)
2017-07-05 22:03:41,879 [VolumeScannerThread()] INFO datanode.VolumeScanner: 
VolumeScanner() exiting.
{code}
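
For context, a sketch of why an overwritten meta file kills the scanner: the header's checksum-type byte (a text character such as 'z' == 122) is handed to DataChecksum.Type.valueOf, which throws IllegalArgumentException for ids outside [0, 5). The header layout below follows the stack trace, but the exact offsets are assumptions:
{code}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

class MetaHeaderSketch {
  static void readHeader(String metaFile) throws IOException {
    try (DataInputStream in =
             new DataInputStream(new FileInputStream(metaFile))) {
      short version = in.readShort(); // assumed: 2-byte header version
      int typeId = in.readByte();     // then a 1-byte checksum type id
      if (typeId < 0 || typeId >= 5) {
        // The scanner could treat this as a corrupt replica rather than
        // letting the IllegalArgumentException end the thread.
        throw new IOException("corrupt checksum type id=" + typeId
            + " in " + metaFile + " (header version " + version + ")");
      }
    }
  }
}
{code}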

  was:
Restarted a datanode,  corrupted the metafile for blk_1073741825 with something 
like echo '' > blk_1073741825_1001.meta, and datanode logs reveal that 
the VolumeScanner exits due to an illegal argument exception. Here is the 
relevant trace

2017-07-05 22:03:41,878 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] DEBUG 
datanode.VolumeScanner: start scanning block 
BP-955735389-10.215.76.172-1494002319684:blk_1073741825_1001
2017-07-05 22:03:41,879 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] ERROR 
datanode.VolumeScanner: VolumeScanner(/grid/0/tmp/hadoop-hdfsqa/dfs/data, 
DS-7817e9a3-c179-4901-8757-af965b27b689) exiting because of exception 
java.lang.IllegalArgumentException: id=122 out of range [0, 5)
at 
org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:67)
at 
org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:123)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:178)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:142)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:156)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1022)
at 
org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.getLastChecksumAndDataLen(FinalizedReplica.java:104)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:259)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:484)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:614)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:699)
2017-07-05 22:03:41,879 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] INFO 
datanode.VolumeScanner: VolumeScanner(/grid/0/tmp/hadoop-hdfsqa/dfs/data, 
DS-7817e9a3-c179-4901-8757-af965b27b689) exiting.


> VolumeScanner exits when block metadata file is corrupted on datanode.
> --
>
> Key: HDFS-12092
> URL: https://issues.apache.org/jira/browse/HDFS-12092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.8.0
>Reporter: Ashwin Ramesh
>
> Restarted a datanode,  corrupted the metafile for blk_1073741825 with 
> something like echo '' > blk_1073741825_1001.meta, and datanode logs 
> reveal that the VolumeScanner exits due to an illegal argument exception. 
> Here is the relevant trace: 
> 

[jira] [Created] (HDFS-12092) VolumeScanner exits when block metadata file is corrupted on datanode.

2017-07-05 Thread Ashwin Ramesh (JIRA)
Ashwin Ramesh created HDFS-12092:


 Summary: VolumeScanner exits when block metadata file is corrupted 
on datanode.
 Key: HDFS-12092
 URL: https://issues.apache.org/jira/browse/HDFS-12092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Affects Versions: 2.8.0
Reporter: Ashwin Ramesh


Restarted a datanode,  corrupted the metafile for blk_1073741825 with something 
like echo '' > blk_1073741825_1001.meta, and datanode logs reveal that 
the VolumeScanner exits due to an illegal argument exception. Here is the 
relevant trace

2017-07-05 22:03:41,878 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] DEBUG 
datanode.VolumeScanner: start scanning block 
BP-955735389-10.215.76.172-1494002319684:blk_1073741825_1001
2017-07-05 22:03:41,879 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] ERROR 
datanode.VolumeScanner: VolumeScanner(/grid/0/tmp/hadoop-hdfsqa/dfs/data, 
DS-7817e9a3-c179-4901-8757-af965b27b689) exiting because of exception 
java.lang.IllegalArgumentException: id=122 out of range [0, 5)
at 
org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:67)
at 
org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:123)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:178)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:142)
at 
org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:156)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1022)
at 
org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.getLastChecksumAndDataLen(FinalizedReplica.java:104)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:259)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:484)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:614)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:699)
2017-07-05 22:03:41,879 
[VolumeScannerThread(/grid/0/tmp/hadoop-hdfsqa/dfs/data)] INFO 
datanode.VolumeScanner: VolumeScanner(/grid/0/tmp/hadoop-hdfsqa/dfs/data, 
DS-7817e9a3-c179-4901-8757-af965b27b689) exiting.






[jira] [Commented] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075540#comment-16075540
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
40s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875818/HDFS-12026.HDFS-8707.002.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 72b7f2c0b6bd 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20166/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20166/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> 

[jira] [Updated] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface

2017-07-05 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11470:
--
Attachment: storage-container-manager-cli-v004.pdf

Updated the design doc to reflect the change from HDFS-12002.

> Ozone: SCM: CLI: Design SCM Command line interface
> --
>
> Key: HDFS-11470
> URL: https://issues.apache.org/jira/browse/HDFS-11470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
> Attachments: storage-container-manager-cli-003.pdf, 
> storage-container-manager-cli-v001.pdf, 
> storage-container-manager-cli-v002.pdf, storage-container-manager-cli-v004.pdf
>
>
> This jira describes the SCM CLI. Since the CLI will have lots of commands, 
> we will file other JIRAs for specific commands.






[jira] [Updated] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2017-07-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-9820:
---
  Labels: Incompatible  (was: )
Target Version/s: 3.0.0-alpha2, 2.9.0  (was: 2.9.0, 3.0.0-alpha2)

> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: Incompatible
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.002.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which creates some files, 
> # throw away the changed files and go back to sx.
> Another scenario (scenario 2) is, there is a production cluster and a backup 
> cluster, we periodically sync up the data from production cluster to the 
> backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexities and challenges. Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is use snapshot diff to find out all files changed, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it to the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.
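
A hypothetical invocation of the proposed switch (flag name from this description; argument order and exact semantics are assumptions, not the final patch):
{code}
# Sketch: roll the target back from its newer snapshot s2 to the earlier
# snapshot s1, copying modified files from the source cluster's s1.
hadoop distcp -update -rdiff s2 s1 hdfs://source/dir hdfs://target/dir
{code}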






[jira] [Commented] (HDFS-12013) libhdfs++: read with offset at EOF should return 0 bytes instead of error

2017-07-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075485#comment-16075485
 ] 

James Clampffer commented on HDFS-12013:


Nice catch, looks good to me. +1 pending a passing CI run

> libhdfs++: read with offset at EOF should return 0 bytes instead of error
> -
>
> Key: HDFS-12013
> URL: https://issues.apache.org/jira/browse/HDFS-12013
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>Priority: Critical
> Attachments: HDFS-12013.HDFS-8707.000.patch
>
>
> The current behavior is that when you read from offset == file_length, it 
> throws the error:
> "AsyncPreadSome: trying to begin a read past the EOF"
> But a read with offset at EOF should just return 0 bytes; the above error 
> should only be thrown when offset > file_length.
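
A minimal sketch of the boundary check being described (function shape and return convention assumed, not the actual libhdfs++ code):
{code}
#include <cstdint>

// Sketch only: negative = error, otherwise number of bytes read.
long CheckPreadBounds(uint64_t offset, uint64_t file_length) {
  if (offset > file_length) {
    return -1;  // genuinely past EOF: keep the existing error
  }
  if (offset == file_length) {
    return 0;   // exactly at EOF: report zero bytes, not an error
  }
  return 1;     // in range: proceed with the actual read (elided)
}
{code}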






[jira] [Updated] (HDFS-12013) libhdfs++: read with offset at EOF should return 0 bytes instead of error

2017-07-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-12013:
---
Attachment: HDFS-12013.HDFS-8707.000.patch

HDFS-12013.HDFS-8707.000.patch fixes the issue by not treating seek 
offset=file_length as an error.

> libhdfs++: read with offset at EOF should return 0 bytes instead of error
> -
>
> Key: HDFS-12013
> URL: https://issues.apache.org/jira/browse/HDFS-12013
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>Priority: Critical
> Attachments: HDFS-12013.HDFS-8707.000.patch
>
>
> The current behavior is that when you read from offset == file_length, it 
> throws the error:
> "AsyncPreadSome: trying to begin a read past the EOF"
> But a read with offset at EOF should just return 0 bytes; the above error 
> should only be thrown when offset > file_length.






[jira] [Updated] (HDFS-12013) libhdfs++: read with offset at EOF should return 0 bytes instead of error

2017-07-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-12013:
---
Status: Patch Available  (was: Open)

> libhdfs++: read with offset at EOF should return 0 bytes instead of error
> -
>
> Key: HDFS-12013
> URL: https://issues.apache.org/jira/browse/HDFS-12013
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>Priority: Critical
>
> The current behavior is that when you read from offset == file_length, it 
> throws the error:
> "AsyncPreadSome: trying to begin a read past the EOF"
> But a read with offset at EOF should just return 0 bytes; the above error 
> should only be thrown when offset > file_length.






[jira] [Commented] (HDFS-12029) Data node process crashes after kernel upgrade

2017-07-05 Thread Vipin Rathor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075448#comment-16075448
 ] 

Vipin Rathor commented on HDFS-12029:
-

As per this RedHat Errata ([https://access.redhat.com/errata/RHBA-2017:1674]), 
upgrading to the newer kernel 3.10.0-514.26.2.el7.x86_64 fixes the issue. FYI.

>  Data node process crashes after kernel upgrade
> ---
>
> Key: HDFS-12029
> URL: https://issues.apache.org/jira/browse/HDFS-12029
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Nandakumar
>Priority: Blocker
>
>  We have seen that when the Linux kernel is upgraded to address a specific 
> CVE ( https://access.redhat.com/security/vulnerabilities/stackguard ), it 
> might cause a datanode crash.
> We have observed this issue while upgrading from 3.10.0-514.6.2 to 
> 3.10.0-514.21.2 versions of the kernel.
> Original kernel fix is here -- 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1be7107fbe18eed3e319a6c3e83c78254b693acb
> Datanode fails with the following stack trace, 
> {noformat}
> # 
> # A fatal error has been detected by the Java Runtime Environment: 
> # 
> # SIGBUS (0x7) at pc=0x7f458d078b7c, pid=13214, tid=139936990349120 
> # 
> # JRE version: (8.0_40-b25) (build ) 
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode 
> linux-amd64 compressed oops) 
> # Problematic frame: 
> # j java.lang.Object.<init>()V+0 
> # 
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again 
> # 
> # An error report file with more information is saved as: 
> # /tmp/hs_err_pid13214.log 
> # 
> # If you would like to submit a bug report, please visit: 
> # http://bugreport.java.com/bugreport/crash.jsp 
> # 
> {noformat}
> The root cause is a failure in jsvc. If we pass a value greater than 1 MB as 
> the stack size argument, this can be mitigated. Something like:
> {code}
> exec "$JSVC" \
> -Xss2m \
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
> {code}
> This JIRA tracks potential fixes for this problem. We don't have data on how 
> this impacts other applications that run on the datanode, as this might 
> increase the datanode's memory usage.






[jira] [Updated] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-07-05 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.002.patch

Thanks for the review, [~James C].

* I was able to remove the ASIO dependency on the libraries libc++ and 
libc++abi, and now the build should work out-of-the-box as long as clang is 
installed and protoc was compiled with clang. This fix also allowed me to 
remove the ugly hacks in hdfs.cc and logging.cc.

* I moved the underscores for consistency as you mentioned above.

* For updating the dockerfile and the CI system we should probably create 
another JIRA. Also, I have not yet added a try_compile check for whether a 
build failure is caused by the protoc version. The CMake documentation for 
try_compile is limited; maybe it is easier to just add a message along the 
lines of "if you get protoc errors, please make sure protoc is compiled with 
clang".

* In optional_wrapper.h the pragmas for ignoring warnings are due to ones in 
the third-party header optional.hpp, which we probably should not change 
ourselves.

Please see the new patch.

> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror






[jira] [Commented] (HDFS-12086) Ozone: Add the unit test for KSMMetrics

2017-07-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075367#comment-16075367
 ] 

Anu Engineer commented on HDFS-12086:
-

+1, I will commit this shortly.


> Ozone: Add the unit test for KSMMetrics
> ---
>
> Key: HDFS-12086
> URL: https://issues.apache.org/jira/browse/HDFS-12086
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12086-HDFS-7240.001.patch, 
> HDFS-12086-HDFS-7240.002.patch
>
>
> Currently the unit test for KSMMetrics is missing, and some metric names are 
> inconsistent with the documentation:
> * numVolumeModifies should be numVolumeUpdates
> * numBucketModifies should be numBucketUpdates






[jira] [Commented] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-07-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075278#comment-16075278
 ] 

James Clampffer commented on HDFS-12026:


Thanks for looking into this [~anatoli.shein].

I got the error below when I first tried to compile.  Looks like the fix is to 
install the libstdc++ source (libstdc+\+-dev package on ubuntu).  Could you 
have cmake run a sanity test on a little file that includes <string> or any 
other C++ std header and warn if it can't build?  Looks like CMake's 
try_compile function is built for this sort of thing.  A more open question is 
whether this should also be added to the dockerfile.  Looks like it'd need 
clang as well to build in the docker environment.

{code}
 [exec] In file included from 
/apache_hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/protoc_gen_hrpc.cc:19:
 [exec] 
/apache_hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/protobuf/protobuf/cpp_helpers.h:38:10:
 fatal error: 'string' file not found
 [exec] #include <string>
 [exec]  ^
 [exec] 1 error generated.
{code}
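
For reference, a hedged sketch of the probe source such a try_compile() check 
could build; if this trivial translation unit does not compile with the chosen 
flags, the standard library headers are missing (the file name is an 
assumption):
{code}
// stdcxx_probe.cc (assumed name): builds only when the C++ standard
// library headers (e.g. from libstdc++-dev) are installed.
#include <string>

int main() {
  std::string s("probe");              // touches the header we care about
  return static_cast<int>(s.empty());  // 0 on success
}
{code}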

After fixing that I get link errors on protoc-gen-hrpc.  If the fix is to 
rebuild protoc and libprotobuf with clang, could you add another try_compile 
and, if it fails, explain what's going on?  If nothing else, getting clang and 
libstdc++ into the dockerfile for the review would help, so that we are both 
looking at the same environment.

In block_location.h, give the class members the trailing underscore rather 
than the arguments, so it's consistent with the rest of the library.
{code}
  void setHostname(const std::string & hostname_) {
this->hostname = hostname_;
}
{code}
turns to:
{code}
  void setHostname(const std::string & hostname) {
this->hostname_ = hostname;
}
{code}

The definition of a function with a __cxa prefix in hdfs.cc might have a 
better alternative (not sure what though).  __cxa deals with ABI-level 
details, so I think relying on a function at that level might not be stable 
across major versions of compilers.  If we have to go this route, can you put 
in a version check for glibc like the one mentioned on 
https://stackoverflow.com/questions/29322666/undefined-reference-to-cxa-thread-atexitcxxabi-when-compiling-with-libc?
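
For illustration only, a guarded shim along the lines of the one from that 
stackoverflow thread might look like the sketch below; this is an assumption 
about the workaround's shape, not the actual hdfs.cc code. glibc only ships 
__cxa_thread_atexit_impl from 2.18 on, so the forwarding is gated on that 
version:
{code}
// Sketch: forward __cxa_thread_atexit to glibc's implementation, guarded
// so an older glibc fails loudly at compile time instead of at link time.
#include <features.h>

#if defined(__GLIBC__) && (__GLIBC__ * 100 + __GLIBC_MINOR__) >= 218
extern "C" int __cxa_thread_atexit_impl(void (*func)(void *), void *obj,
                                        void *dso_symbol);
extern "C" int __cxa_thread_atexit(void (*func)(void *), void *obj,
                                   void *dso_symbol) {
  return __cxa_thread_atexit_impl(func, obj, dso_symbol);
}
#else
#error "This workaround needs glibc >= 2.18 (__cxa_thread_atexit_impl)"
#endif
{code}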

More minor stuff:
In optional_wrapper.h is the pragma for ignoring extra semicolons due to ones 
in our code or third-party headers?  If it's ours I think it'd be best to try 
and clean those up and get GCC to complain about them as well so things stay 
clean.  Likewise ignoring -Wweak-vtables can most likely be removed after 
making sure all virtual functions are defined and there isn't some include 
that's forcing different vtable instantiations for the same class in different 
translation units.  Once I can get this built I can try and look into it too.



> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075265#comment-16075265
 ] 

Anu Engineer commented on HDFS-12076:
-

+1, Thanks for taking care of this. Pending jenkins.

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12076-HDFS-7240.001.patch
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. If appropriate we should return better error codes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12081) Ozone: Add infoKey REST API document

2017-07-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12081:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~cheersyang] Thanks for the contribution. [~vagarychen] Thanks for the review. 
I have committed this to the feature branch.

> Ozone: Add infoKey REST API document
> 
>
> Key: HDFS-12081
> URL: https://issues.apache.org/jira/browse/HDFS-12081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: document
> Attachments: HDFS-12081-HDFS-7240.001.patch
>
>
> HDFS-12030 has implemented {{infoKey}}, need to add appropriate document to 
> {{OzoneRest.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12081) Ozone: Add infoKey REST API document

2017-07-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075213#comment-16075213
 ] 

Anu Engineer commented on HDFS-12081:
-

[~vagarychen] I will take care of this while committing. Thanks for your review.


> Ozone: Add infoKey REST API document
> 
>
> Key: HDFS-12081
> URL: https://issues.apache.org/jira/browse/HDFS-12081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: document
> Attachments: HDFS-12081-HDFS-7240.001.patch
>
>
> HDFS-12030 has implemented {{infoKey}}, need to add appropriate document to 
> {{OzoneRest.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075210#comment-16075210
 ] 

Hudson commented on HDFS-12089:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11969 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11969/])
HDFS-12089. Fix ambiguous NN retry log message in WebHDFS. Contributed 
(liuml07: rev 6436768baf1b2ac05f6786edcd76fd3a66c03eaa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12081) Ozone: Add infoKey REST API document

2017-07-05 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075207#comment-16075207
 ] 

Chen Liang commented on HDFS-12081:
---

Did not see Anu's comment when writing the previous comment. Maybe just change 
it when committing.

> Ozone: Add infoKey REST API document
> 
>
> Key: HDFS-12081
> URL: https://issues.apache.org/jira/browse/HDFS-12081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: document
> Attachments: HDFS-12081-HDFS-7240.001.patch
>
>
> HDFS-12030 has implemented {{infoKey}}, need to add appropriate document to 
> {{OzoneRest.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12081) Ozone: Add infoKey REST API document

2017-07-05 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075205#comment-16075205
 ] 

Chen Liang commented on HDFS-12081:
---

Thanks [~cheersyang] for the patch. Only one minor comment: in the existing 
command documentation, the Value field of the Query Parameter table describes 
the data type, e.g. string, int, etc. Could you please update infoKey's table 
to be consistent with this?

> Ozone: Add infoKey REST API document
> 
>
> Key: HDFS-12081
> URL: https://issues.apache.org/jira/browse/HDFS-12081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: document
> Attachments: HDFS-12081-HDFS-7240.001.patch
>
>
> HDFS-12030 has implemented {{infoKey}}, need to add appropriate document to 
> {{OzoneRest.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12081) Ozone: Add infoKey REST API document

2017-07-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075197#comment-16075197
 ] 

Anu Engineer commented on HDFS-12081:
-

+1, I will commit this shortly. Thanks for taking care of this.

> Ozone: Add infoKey REST API document
> 
>
> Key: HDFS-12081
> URL: https://issues.apache.org/jira/browse/HDFS-12081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: document
> Attachments: HDFS-12081-HDFS-7240.001.patch
>
>
> HDFS-12030 has implemented {{infoKey}}, need to add appropriate document to 
> {{OzoneRest.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12053) Ozone: ozone server should create missing metadata directory if it has permission to

2017-07-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12053:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~cheersyang] Thank you for the contribution. I have committed this to the 
feature branch.

> Ozone: ozone server should create missing metadata directory if it has 
> permission to
> 
>
> Key: HDFS-12053
> URL: https://issues.apache.org/jira/browse/HDFS-12053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12053-HDFS-7240.001.patch, 
> HDFS-12053-HDFS-7240.002.patch, HDFS-12053-HDFS-7240.003.patch
>
>
> The datanode state machine right now simply fails if the container metadata 
> directory is missing; it is better to create the directory if it has 
> permission to. This is extremely useful on a fresh setup, where we usually 
> set {{ozone.container.metadata.dirs}} under the same parent as 
> {{dfs.datanode.data.dir}}, e.g.
> * /hadoop/hdfs/data
> * /hadoop/hdfs/scm
> If /hadoop/hdfs/scm/repository is not pre-created, ozone cannot be started.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12076:
--
Status: Patch Available  (was: Open)

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12076-HDFS-7240.001.patch
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. If appropriate we should return better error codes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12076:
--
Attachment: HDFS-12076-HDFS-7240.001.patch

Post initial patch.

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12076-HDFS-7240.001.patch
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. If appropriate we should return better error codes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12077) Implement a remaining space based balancer policy

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075179#comment-16075179
 ] 

Mingliang Liu commented on HDFS-12077:
--

Ping [~szetszwo].

> Implement a remaining space based balancer policy
> -
>
> Key: HDFS-12077
> URL: https://issues.apache.org/jira/browse/HDFS-12077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Affects Versions: 2.6.0
>Reporter: liuyiyang
>
> Our cluster has DataNodes with 2T disk storage. As the storage utilization 
> of the cluster grows, we need to add new DataNodes to increase the capacity 
> of our cluster. In order to keep the utilization of every DataNode in a 
> relatively balanced state, we usually run the HDFS balancer tool every time 
> we add new DataNodes.
> We have been facing an issue with heterogeneous disk capacity when using the 
> HDFS balancer tool. In production clusters, we often have to add new 
> DataNodes with larger disk capacity than the previous DNs. Since the 
> original balancer is implemented to balance the utilization of every 
> DataNode, it brings every DN's utilization to within a given threshold of 
> the cluster's average utilization.
> For example, in a cluster with two DataNodes DN1 and DN2, where DN1 has ten 
> disks with 2T capacity and DN2 has ten disks with 10T capacity, the original 
> balancer may leave the cluster balanced in the following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|18T|2T|90%|
> |DN2|100T|90T|10T|90%|
> Each DN has reached 90% utilization. In such a case, DN1's capacity to store 
> new blocks is far less than DN2's. When DN1 is full, all new blocks will be 
> written to DN2 and more MR tasks will be scheduled on DN2. As a result, DN2 
> is overloaded and we cannot make full use of each DN's I/O capacity. In such 
> a case, we wish the balancer could run based on the remaining space of every 
> DN. After balancing, every DN's remaining space would be balanced, as in the 
> following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|14T|6T|70%|
> |DN2|100T|94T|6T|94%|
> In a cluster where the DNs' remaining space is balanced, every DN will be 
> used when writing new blocks to the cluster; on the other hand, every DN's 
> I/O capacity can be utilized when running MR jobs.
> Please let me know what you guys think.  I will attach a patch if necessary.
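
The arithmetic behind the two tables above, as a hedged standalone sketch 
(plain arithmetic for illustration, not the Balancer's actual code):
{code}
// Reproduces the example numbers: utilization-based balancing converges on
// the cluster's average utilization, while remaining-space-based balancing
// converges on equal free space per DataNode.
#include <cstdio>

int main() {
  const double cap1 = 20.0, cap2 = 100.0;       // TB per DataNode
  const double used = 108.0;                    // TB used cluster-wide
  const double avgUtil = used / (cap1 + cap2);  // 0.90
  std::printf("utilization policy:     DN1=%.0fT DN2=%.0fT used\n",
              cap1 * avgUtil, cap2 * avgUtil);  // 18T and 90T
  const double freeEach = (cap1 + cap2 - used) / 2.0;  // 6T per DN
  std::printf("remaining-space policy: DN1=%.0fT DN2=%.0fT used\n",
              cap1 - freeEach, cap2 - freeEach);       // 14T and 94T
  return 0;
}
{code}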



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-12089:
-
Component/s: webhdfs

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-12089:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   Status: Resolved  (was: Patch Available)

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-12089:
-

+1

No tests needed for this; findbugs are not related.

Committed to {{trunk}}, {{branch-2}} and {{branch-2.8}} branches. Thanks for 
your contribution [~ebadger].

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12091:
-

Assignee: Virajith Jalaparti

> [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to 
> the correct external storage
> --
>
> Key: HDFS-12091
> URL: https://issues.apache.org/jira/browse/HDFS-12091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage

2017-07-05 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12091:
-

 Summary: [READ] Check that the replicas served from a 
{{ProvidedVolumeImpl}} belong to the correct external storage
 Key: HDFS-12091
 URL: https://issues.apache.org/jira/browse/HDFS-12091
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-07-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075168#comment-16075168
 ] 

Andrew Wang commented on HDFS-10480:


LGTM +1 thanks Manoj!

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, 
> HDFS-10480-branch-2.8.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-07-05 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075157#comment-16075157
 ] 

Virajith Jalaparti edited comment on HDFS-9806 at 7/5/17 6:07 PM:
--

An updated design document is attached. This JIRA will be limited to a 
read-only implementation for {{PROVIDED}} storage, i.e., using HDFS to read 
files/data in remote stores assuming that the data on the remote store does not 
change. The work on the write-path will be tracked as part of HDFS-12090 (the 
design document for this has been posted to HDFS-12090).


was (Author: virajith):
An updated design document is attached. This JIRA will be limited to a 
read-only implementation for {{PROVIDED}} storage, i.e., using HDFS to read 
files/data in remote stores assuming that the data on the remote store does not 
change. The work on the write-path will be tracked as part of HDFS-12090.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12090:
--
Attachment: HDFS-12090-design.001.pdf

Posting the first version of the design document on how writes/updates to 
{{PROVIDED}} storage will be handled. The primary use-case of this feature will 
be data backup from HDFS to other storage systems (either object stores like 
s3 or a system that implements the {{org.apache.hadoop.fs.FileSystem}} API).

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12090-design.001.pdf
>
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs

2017-07-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075160#comment-16075160
 ] 

Andrew Wang commented on HDFS-11696:


Sorry, I read the patch incorrectly before. There's still a behavior change 
with this patch though. Previously, we did not throw an error when sub-flags 
were specified without their parent. This is not documented to work, but we 
have an init script that does this, so there are probably others out there who 
will hit this same issue.

Is there a way to address the spotbugs issues without changing the option 
parsing behavior?

> Fix warnings from Spotbugs in hadoop-hdfs
> -
>
> Key: HDFS-11696
> URL: https://issues.apache.org/jira/browse/HDFS-11696
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: findbugsHtml.html, HADOOP-14337.001.patch, 
> HADOOP-14337.002.patch, HADOOP-14337.003.patch, HDFS-11696.004.patch, 
> HDFS-11696.005.patch, HDFS-11696.006.patch, HDFS-11696.007.patch
>
>
> There are totally 12 findbugs issues generated after switching from Findbugs 
> to Spotbugs across the project in HADOOP-14316. This JIRA focus on cleaning 
> up the part of warnings under scope of HDFS(mainly contained in hadoop-hdfs 
> and hadoop-hdfs-client).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-07-05 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075157#comment-16075157
 ] 

Virajith Jalaparti commented on HDFS-9806:
--

An updated design document is attached. This JIRA will be limited to a 
read-only implementation for {{PROVIDED}} storage, i.e., using HDFS to read 
files/data in remote stores assuming that the data on the remote store does not 
change. The work on the write-path will be tracked as part of HDFS-12090.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12090:
-

Assignee: (was: Virajith Jalaparti)

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12090:
--
Description: HDFS-9806 introduces the concept of {{PROVIDED}} storage, 
which makes data in external storage systems accessible through HDFS. However, 
HDFS-9806 is limited to data being read through HDFS. This JIRA will deal with 
how data can be written to such {{PROVIDED}} storages from HDFS.  (was: 
HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
external storage systems accessible through HDFS. However, HDFS-9806 is limited 
to data being read through HDFS. This JIRA is to keep track of how data can be 
written to such {{PROVIDED}} storages from HDFS.)

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12076:
--
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-7240

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. If appropriate we should return better error codes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12090:
--
Description: HDFS-9806 introduces the concept of {{PROVIDED}} storage, 
which makes data in external storage systems accessible through HDFS. However, 
HDFS-9806 is limited to data being read through HDFS. This JIRA is to keep 
track of how data can be written to such {{PROVIDED}} storages from HDFS.  
(was: HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes 
data in external storage systems accessible through HDFS. This JIRA is to keep 
track of how data can be written to such {{PROVIDED}} storages from HDFS.)

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA is to keep track of how 
> data can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12076) Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR

2017-07-05 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12076:
-

Assignee: Chen Liang

> Ozone: Review all cases where we are returning FAILED_INTERNAL_ERROR
> 
>
> Key: HDFS-12076
> URL: https://issues.apache.org/jira/browse/HDFS-12076
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
>
> We should review the cases where we are returning FAILED_INTERNAL_ERROR in 
> SCM and KSM. If appropriate we should return better error codes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-05 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12090:
-

 Summary: Handling writes from HDFS to Provided storages
 Key: HDFS-12090
 URL: https://issues.apache.org/jira/browse/HDFS-12090
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
external storage systems accessible through HDFS. This JIRA is to keep track of 
how data can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-07-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806-design.002.pdf

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12080) Ozone: Fix UT failure in TestOzoneConfigurationFields

2017-07-05 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075124#comment-16075124
 ] 

Chen Liang commented on HDFS-12080:
---

+1 on v001 patch, pending Jenkins

> Ozone: Fix UT failure in TestOzoneConfigurationFields
> -
>
> Key: HDFS-12080
> URL: https://issues.apache.org/jira/browse/HDFS-12080
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-12080-HDFS-7240.001.patch
>
>
> HDFS-12023 added a test case {{TestOzoneConfigurationFields}} to make sure 
> ozone configuration properties are fully documented in ozone-default.xml. This 
> is currently failing because
> 1. ozone-default.xml has 1 property not used anywhere
> {code}
>   ozone.scm.internal.bind.host
> {code}
> 2. Some cblock properties are missing in ozone-default.xml
> {code}
>   dfs.cblock.scm.ipaddress
>   dfs.cblock.scm.port
>   dfs.cblock.jscsi-address
>   dfs.cblock.service.rpc-bind-host
>   dfs.cblock.jscsi.rpc-bind-host
> {code}
> this needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075059#comment-16075059
 ] 

Hadoop QA commented on HDFS-12089:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12089 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875783/HDFS-12089.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 105a86244f74 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a180ba4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20164/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20164/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20164/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 

[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-12089:
---
Status: Patch Available  (was: Open)

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-12089:
---
Attachment: HDFS-12089.001.patch

Attaching patch. Changed "tried" to "retried". 
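
With that change, the log line quoted below would presumably read:
{noformat}
INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
namenode: foobar. Already retried 0 time(s); retry policy is 
{noformat}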

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-05 Thread Eric Badger (JIRA)
Eric Badger created HDFS-12089:
--

 Summary: Fix ambiguous NN retry log message
 Key: HDFS-12089
 URL: https://issues.apache.org/jira/browse/HDFS-12089
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


{noformat}
INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
namenode: foobar. Already tried 0 time(s); retry policy is 
{noformat}
The message is misleading since it has already tried once. This message 
indicates the first retry attempt and that it had retried 0 times in the past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12080) Ozone: Fix UT failure in TestOzoneConfigurationFields

2017-07-05 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12080:
-
Attachment: HDFS-12080-HDFS-7240.001.patch

> Ozone: Fix UT failure in TestOzoneConfigurationFields
> -
>
> Key: HDFS-12080
> URL: https://issues.apache.org/jira/browse/HDFS-12080
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-12080-HDFS-7240.001.patch
>
>
> HDFS-12023 added a test case {{TestOzoneConfigurationFields}} to make sure 
> ozone configuration properties are fully documented in ozone-default.xml. This 
> is currently failing because
> 1. ozone-default.xml has 1 property not used anywhere
> {code}
>   ozone.scm.internal.bind.host
> {code}
> 2. Some cblock properties are missing in ozone-default.xml
> {code}
>   dfs.cblock.scm.ipaddress
>   dfs.cblock.scm.port
>   dfs.cblock.jscsi-address
>   dfs.cblock.service.rpc-bind-host
>   dfs.cblock.jscsi.rpc-bind-host
> {code}
> this needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12080) Ozone: Fix UT failure in TestOzoneConfigurationFields

2017-07-05 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12080:
-
Status: Patch Available  (was: Open)

> Ozone: Fix UT failure in TestOzoneConfigurationFields
> -
>
> Key: HDFS-12080
> URL: https://issues.apache.org/jira/browse/HDFS-12080
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-12080-HDFS-7240.001.patch
>
>
> HDFS-12023 added a test case {{TestOzoneConfigurationFields}} to make sure 
> ozone configuration properties are fully documented in ozone-default.xml. This 
> is currently failing because
> 1. ozone-default.xml has 1 property not used anywhere
> {code}
>   ozone.scm.internal.bind.host
> {code}
> 2. Some cblock properties are missing in ozone-default.xml
> {code}
>   dfs.cblock.scm.ipaddress
>   dfs.cblock.scm.port
>   dfs.cblock.jscsi-address
>   dfs.cblock.service.rpc-bind-host
>   dfs.cblock.jscsi.rpc-bind-host
> {code}
> this needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074865#comment-16074865
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 4s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
34s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875762/HDFS-12026.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 695c291a64f5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20162/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20162/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror

[jira] [Updated] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-07-05 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.001.patch

Fixed the pragma and whitespace issues.

> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074728#comment-16074728
 ] 

Wei-Chiu Chuang commented on HDFS-12088:


Hi lufei,
Could you explain the expected behavior? I was under the impression that the 
existing implementation is correct.

Removing an EC policy from ECPM has the potential to corrupt existing EC 
files, or even crash the NameNode (if there's an NPE).

Having reviewed the ECPM implementation again, I am confused by what these 
operations are supposed to do. A "remove" should also remove the EC policy 
from the fsimage, I suppose.
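
For reference, the gist of the fix being requested reduces to a removal that 
clears the policy from both maps. A minimal Java sketch, where the map names 
follow the description but the value type and surrounding class are 
hypothetical (the real ErasureCodingPolicyManager keeps richer state and 
validation):
{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: remove must clear the policy from both maps, otherwise a later
// enablePolicy() on the same name can resurrect the "removed" policy.
class EcPolicyMapsSketch {
  private final Map<String, Object> userPolicies = new HashMap<>();
  private final Map<String, Object> enabledPolicies = new HashMap<>();

  void addPolicy(String name, Object policy) {
    userPolicies.put(name, policy);
  }

  void enablePolicy(String name) {
    Object policy = userPolicies.get(name);
    if (policy != null) {
      enabledPolicies.put(name, policy);
    }
  }

  void removePolicy(String name) {
    enabledPolicies.remove(name);
    userPolicies.remove(name); // the step the reported bug is missing
  }
}
{code}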

> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can recover it by 
> re-enabling the same policy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074578#comment-16074578
 ] 

Hadoop QA commented on HDFS-11679:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11679 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875726/HDFS-11679-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 87f9e905a2b2 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / b23b016 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20161/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20161/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20161/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: SCM CLI: Implement list container command

[jira] [Commented] (HDFS-12086) Ozone: Add the unit test for KSMMetrics

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074500#comment-16074500
 ] 

Hadoop QA commented on HDFS-12086:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.ozone.TestOzoneConfigurationFields |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875715/HDFS-12086-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 47da001c8fe0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / b23b016 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20160/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20160/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20160/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Add the unit test for KSMMetrics
> ---
>
> Key: HDFS-12086
> URL: https://issues.apache.org/jira/browse/HDFS-12086
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>   

[jira] [Updated] (HDFS-11679) Ozone: SCM CLI: Implement list container command

2017-07-05 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11679:
--
Attachment: HDFS-11679-HDFS-7240.004.patch

[~cheersyang] / [~vagarychen]
Thanks for your comments.
1. Agreed on making -start and -count required. That will also solve the 
issues you both mentioned above.
2.
{quote}
Also if we list on an empty db
{quote}
Addressed.

3. The output is converted to JSON format:
{code}
{"containerName":"ContainerForTesting00","leaderHost":"127.0.0.1","datanodeHosts":["127.0.0.1"]}
{"containerName":"ContainerForTesting01","leaderHost":"127.0.0.1","datanodeHosts":["127.0.0.1"]}
...
{code}
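
For illustration, one-record-per-line JSON output like the above can be 
produced with Jackson. This is only a sketch under assumed names; the 
ContainerInfo holder here is hypothetical, not the class used in the patch:
{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Arrays;
import java.util.List;

public class ContainerJsonSketch {
  // Hypothetical holder mirroring the fields shown in the sample output.
  public static class ContainerInfo {
    public String containerName;
    public String leaderHost;
    public List<String> datanodeHosts;

    public ContainerInfo(String name, String leader, List<String> hosts) {
      containerName = name;
      leaderHost = leader;
      datanodeHosts = hosts;
    }
  }

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // Print one JSON object per container, one per line.
    for (int i = 0; i < 2; i++) {
      ContainerInfo info = new ContainerInfo(
          String.format("ContainerForTesting%02d", i),
          "127.0.0.1", Arrays.asList("127.0.0.1"));
      System.out.println(mapper.writeValueAsString(info));
    }
  }
}
{code}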

> Ozone: SCM CLI: Implement list container command
> 
>
> Key: HDFS-11679
> URL: https://issues.apache.org/jira/browse/HDFS-11679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>  Labels: command-line
> Attachments: HDFS-11679-HDFS-7240.001.patch, 
> HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch, 
> HDFS-11679-HDFS-7240.004.patch
>
>
> Implement the command to list containers
> {code}
> hdfs scm -container list -start <container name> [-count <100> | -end 
> <container name>]{code}
> Lists all containers known to SCM. The option -start allows the listing to 
> start from a specified container, and -count controls the number of entries 
> returned; it is mutually exclusive with the -end option, which returns keys 
> in the -start to -end range.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074421#comment-16074421
 ] 

Hadoop QA commented on HDFS-12088:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875708/HDFS-12088.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7702ad2dbdc5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b17e655 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20159/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20159/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20159/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20159/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> 

[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-05 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074420#comment-16074420
 ] 

Surendra Singh Lilhore commented on HDFS-11965:
---

Thanks [~umamaheswararao] for the review.
I agree with the alternative approach. I will update the patch soon.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in datanode 
> registration after a cluster restart.
> Scenario :
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. Now the NameNode and two DataNodes start first and get registered with 
> the NameNode (one datanode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. Block movement also succeeds and the xattr is removed from the file 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
> && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>   .add(storageMovementAttemptedResult.getTrackId());
> 
> ..
> } else {
> 
> ..
>   this.sps.postBlkStorageMovementCleanup(
>   storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. Now the third DN registers with the NameNode and reports one more DISK 
> replica. Now the NameNode has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);{code}
> This condition never becomes true and the test case times out.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12086) Ozone: Add the unit test for KSMMetrics

2017-07-05 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12086:
-
Attachment: HDFS-12086-HDFS-7240.002.patch

The test failures are not related. Attaching an updated patch to fix the 
checkstyle issues.
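
As an aside, the shape of such a metrics unit test is roughly the following. 
This is a hedged sketch: the KsmMetrics inner class and its accessor names 
are stand-ins invented for illustration, not the actual patch code:
{code}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TestKsmMetricsSketch {
  // Stand-in for the real KSMMetrics with simple long counters.
  static class KsmMetrics {
    private long numVolumeUpdates;
    private long numBucketUpdates;

    void incNumVolumeUpdates() { numVolumeUpdates++; }
    void incNumBucketUpdates() { numBucketUpdates++; }
    long getNumVolumeUpdates() { return numVolumeUpdates; }
    long getNumBucketUpdates() { return numBucketUpdates; }
  }

  @Test
  public void countersIncrementOnOperations() {
    KsmMetrics metrics = new KsmMetrics();
    metrics.incNumVolumeUpdates();   // e.g. after a volume update call
    metrics.incNumBucketUpdates();   // e.g. after a bucket update call
    assertEquals(1L, metrics.getNumVolumeUpdates());
    assertEquals(1L, metrics.getNumBucketUpdates());
  }
}
{code}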

> Ozone: Add the unit test for KSMMetrics
> ---
>
> Key: HDFS-12086
> URL: https://issues.apache.org/jira/browse/HDFS-12086
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12086-HDFS-7240.001.patch, 
> HDFS-12086-HDFS-7240.002.patch
>
>
> Currently the unit test for KSMMetrics is missing, and some metric names are 
> inconsistent with the documentation:
> * numVolumeModifies should be numVolumeUpdates
> * numBucketModifies should be numBucketUpdates



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-05 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074338#comment-16074338
 ] 

Uma Maheswara Rao G commented on HDFS-11965:


Hi [~surendrasingh]

Sorry for coming in late. Thank you for all the good work, and thanks Rakesh 
for the nice reviews.
However, I have the following feedback on the patch. I also got a chance to 
chat with Rakesh about the alternative proposal below.
 
Comments:
# blockStorageMovementNeeded.increaseRetryCount(trackId); : I feel there could 
be a race condition when the blocks gain enough replicas during the retry 
window. Let's assume you found low redundancy and incremented the retry count. 
When retrying the same element again, if the block has reached enough 
replicas, you may not get a chance to remove that retry count, so it may leak 
in that case.
# isAllBlockSatisfyPolicy: I feel this method has a lot of duplicate code, and 
it is no longer needed if we agree on the proposal below.
# .
{code}
} else {
+blockStorageMovementNeeded.removeRetry(trackId);
+  }
{code}
A log message is missing here.
# Nice test cases.

*To simplify this, here is an alternative approach:*
How about adding another state, case FEW_LOW_REDUNDENCY_BLOCKS: ? If the 
status is FEW_LOW_REDUNDENCY_BLOCKS, you could call 
this.storageMovementsMonitor.add(blockCollectionID, false);
so that the xattrs are not removed while processing the result.
Also, you could rename the parameter allBlockLocsAttemptedToSatisfy in 'public 
void add(Long blockCollectionID,
  boolean allBlockLocsAttemptedToSatisfy)' to noRetry. Right now this is a 
boolean; to make it more readable, we could turn it into a small object 
carrying the reason and retry count, as sketched below.
Example: public void add(Long blockCollectionID, RetryInfo retryInfo)
RetryInfo contains: 1. the reason for the retry 2. the retriedCount
It seems you are trying to implement a definite retry for underReplicated 
blocks. Maybe you could file another JIRA for handling definite retries for 
all items generically. Today, if a file cannot be satisfied for reasons such 
as not enough storages, we may keep retrying, so that new JIRA can investigate 
how to make the retries configurable.
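
To make that proposal concrete, here is a minimal Java sketch of the 
suggested small retry object; the field and method names are hypothetical and 
only follow the wording above, not any committed code:
{code}
// Hypothetical replacement for the bare boolean flag passed to add().
class RetryInfo {
  enum Reason { NO_RETRY, FEW_LOW_REDUNDENCY_BLOCKS }

  private final Reason reason;  // 1. reason for the retry
  private int retriedCount;     // 2. how many times this item was retried

  RetryInfo(Reason reason) {
    this.reason = reason;
  }

  Reason getReason() { return reason; }
  int getRetriedCount() { return retriedCount; }
  void incRetriedCount() { retriedCount++; }
}

// The monitor would then take the info object instead of a boolean, e.g.:
// public void add(Long blockCollectionID, RetryInfo retryInfo) { ... }
{code}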

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in datanode 
> registration after a cluster restart.
> Scenario :
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. Now the NameNode and two DataNodes start first and get registered with 
> the NameNode (one datanode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. Block movement also succeeds and the xattr is removed from the file 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
> && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>   .add(storageMovementAttemptedResult.getTrackId());
> 
> ..
> } else {
> 
> ..
>   this.sps.postBlkStorageMovementCleanup(
>   storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. Now the third DN registers with the NameNode and reports one more DISK 
> replica. Now the NameNode has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);{code}
> This condition never becomes true and the test case times out.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11975) Provide a system-default EC policy

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074324#comment-16074324
 ] 

Hadoop QA commented on HDFS-11975:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.sink.TestFileSink |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875694/HDFS-11975-005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d23987b3b115 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b17e655 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-05 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12088:
-
Status: Patch Available  (was: Open)

> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can recover it by 
> re-enabling the same policy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-05 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12088:
-
Attachment: HDFS-12088.001.patch

> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can recover it by 
> re-enabling the same policy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-05 Thread lufei (JIRA)
lufei created HDFS-12088:


 Summary: Remove a user defined EC Policy,the policy is not removed 
from the userPolicies map
 Key: HDFS-12088
 URL: https://issues.apache.org/jira/browse/HDFS-12088
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, hdfs
Affects Versions: 3.0.0-alpha3
Reporter: lufei
Assignee: lufei
Priority: Critical


When a user removes a user-defined EC policy, the policy needs to be removed 
from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
after removing the user-defined EC policy, the user can recover it by 
re-enabling the same policy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-07-05 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12087:
-
Labels: hdfs-ec-3.0-nice-to-have  (was: )

> The error message is not friendly when set a path with the policy not enabled
> -
>
> Key: HDFS-12087
> URL: https://issues.apache.org/jira/browse/HDFS-12087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12087.001.patch
>
>
> First the user adds a policy with the -addPolicies command but does not 
> enable it, then sets a path with this policy. The error message is displayed 
> as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color}
> The policy 'XOR-2-1-128k' was added by the user but not enabled. The error 
> message does not prompt the user to enable the policy first. I think the 
> error message would be better as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled', or the 
> policy can be enabled first with the '-enablePolicy' EC command.{color}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org