[jira] [Updated] (HDDS-238) Add Node2Pipeline Map in SCM to track ratis/standalone pipelines.

2018-07-12 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-238:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add Node2Pipeline Map in SCM to track ratis/standalone pipelines.
> -
>
> Key: HDDS-238
> URL: https://issues.apache.org/jira/browse/HDDS-238
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-238.001.patch, HDDS-238.002.patch, 
> HDDS-238.003.patch
>
>
> This jira proposes to add a Node2Pipeline map which can be used during 
> pipeline failure to identify the pipelines of the corresponding failed datanode.
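
For illustration only, a minimal sketch of such an index (hypothetical class and method names, not the SCM implementation in the attached patches):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: index from a datanode to the pipelines it serves. */
public class Node2PipelineMapSketch {
  private final Map<UUID, Set<String>> dn2PipelineMap = new ConcurrentHashMap<>();

  /** Record that the given datanode is a member of the given pipeline. */
  public void addPipeline(UUID datanodeId, String pipelineName) {
    dn2PipelineMap
        .computeIfAbsent(datanodeId, k -> ConcurrentHashMap.newKeySet())
        .add(pipelineName);
  }

  /** Pipelines that must be closed or re-created when this datanode fails. */
  public Set<String> getPipelines(UUID datanodeId) {
    return dn2PipelineMap.getOrDefault(datanodeId, Collections.emptySet());
  }

  /** Forget the datanode once its pipelines have been handled. */
  public void removeDatanode(UUID datanodeId) {
    dn2PipelineMap.remove(datanodeId);
  }
}
{code}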



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-238) Add Node2Pipeline Map in SCM to track ratis/standalone pipelines.

2018-07-12 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542536#comment-16542536
 ] 

Xiaoyu Yao commented on HDDS-238:
-

Thanks [~msingh] for the contribution. +1 for the v3 patch. I've committed it to 
trunk. 

> Add Node2Pipeline Map in SCM to track ratis/standalone pipelines.
> -
>
> Key: HDDS-238
> URL: https://issues.apache.org/jira/browse/HDDS-238
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-238.001.patch, HDDS-238.002.patch, 
> HDDS-238.003.patch
>
>
> This jira proposes to add a Node2Pipeline map which can be used during 
> pipeline failure to identify the pipelines of the corresponding failed datanode.






[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542524#comment-16542524
 ] 

genericqa commented on HDDS-251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m  9s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931420/HDDS-251.002.patch |
| Optional 

[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-248:
--
Status: Open  (was: Patch Available)

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor them as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542515#comment-16542515
 ] 

Anu Engineer commented on HDDS-248:
---

Let us postpone this JIRA. This change impacts the container report publishing 
and node reports. cc: [~nandakumar131], [~arpitagarwal]. This will unblock 
Nanda. Thanks.

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor them as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-187:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao], [~nandakumar131], [~bharatviswa] Thanks for the comments and reviews. 
[~ajayydv] Thanks for the contribution. I have committed this patch to trunk.

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch, HDDS-187.08.patch, HDDS-187.09.patch, HDDS-187.10.patch, 
> HDDS-187.11.patch
>
>
> Currently SCM sends a set of commands to the DataNode, which executes them via 
> CommandHandler. This jira intends to create a command status publisher which 
> will report the status of these commands back to SCM.
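
As a rough sketch of the idea only (hypothetical names, not the committed HDDS-187 classes): the datanode keeps a small status record per received command and drains those records back to SCM, for example on heartbeat.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of a command-status publisher on the datanode. */
public class CommandStatusPublisherSketch {

  enum Status { PENDING, EXECUTED, FAILED }

  /** Minimal status record for one SCM command. */
  static final class CommandStatus {
    final long cmdId;
    volatile Status status = Status.PENDING;
    CommandStatus(long cmdId) { this.cmdId = cmdId; }
  }

  private final Map<Long, CommandStatus> statuses = new ConcurrentHashMap<>();

  /** Called when a command arrives from SCM. */
  void onCommandReceived(long cmdId) {
    statuses.putIfAbsent(cmdId, new CommandStatus(cmdId));
  }

  /** Called by the CommandHandler once execution finishes. */
  void onCommandCompleted(long cmdId, boolean success) {
    CommandStatus cs = statuses.get(cmdId);
    if (cs != null) {
      cs.status = success ? Status.EXECUTED : Status.FAILED;
    }
  }

  /** Drained periodically (e.g. on heartbeat) and shipped back to SCM. */
  List<CommandStatus> buildReportAndClearFinished() {
    List<CommandStatus> report = new ArrayList<>(statuses.values());
    statuses.values().removeIf(cs -> cs.status != Status.PENDING);
    return report;
  }
}
{code}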






[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542477#comment-16542477
 ] 

Shashikant Banerjee commented on HDDS-226:
--

The failed tests are not related to this patch.

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch, HDDS-226.04.patch
>
>
> Currently the client allocates a key sized to the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated in OM.
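
A minimal sketch of the intended flow (the OmClient interface below is hypothetical, not the real Ozone client API): the client commits the key with the number of bytes actually written rather than the pre-allocated block size.

{code:java}
/** Hypothetical sketch: commit a key with the length that was actually written. */
public class KeyCommitSketch {

  /** Stand-in for the OM-facing call used at key commit time (illustrative only). */
  interface OmClient {
    void commitKey(String volume, String bucket, String key, long actualLength);
  }

  static void writeAndCommit(OmClient om, byte[] data,
      String volume, String bucket, String key, long scmBlockSize) {
    // The key was pre-allocated with scmBlockSize bytes, but the client only
    // wrote data.length bytes before closing it.
    long actualLength = data.length;
    assert actualLength <= scmBlockSize : "single-block example only";

    // ... the data itself is written to the allocated block elsewhere ...

    // On close/commit, report the real length so OM does not record the
    // pre-allocated SCM block size as the key length.
    om.commitKey(volume, bucket, key, actualLength);
  }
}
{code}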






[jira] [Updated] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13663:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution, [~shwetayakkali]!

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>     newBlock.setNumBytes(finalizedLength);
>     break;
>   case RBW:
>   case RWR:
>     long minLength = Long.MAX_VALUE;
>     for (BlockRecord r : syncList) {
>       ReplicaState rState = r.rInfo.getOriginalReplicaState();
>       if (rState == bestState) {
>         minLength = Math.min(minLength, r.rInfo.getNumBytes());
>         participatingList.add(r);
>       }
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("syncBlock replicaInfo: block=" + block +
>             ", from datanode " + r.id + ", receivedState=" + rState.name() +
>             ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
>             bestState.name());
>       }
>     }
>     // recover() guarantees syncList will have at least one replica with RWR
>     // or better state.
>     assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception
>     newBlock.setNumBytes(minLength);
>     break;
>   case RUR:
>   case TEMPORARY:
>     assert false : "bad replica state: " + bestState;
>   default:
>     break; // we have 'case' all enum values
>   }
> {code}
> When minLength is Long.MAX_VALUE, it should throw an exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in the datanode log:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
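
One way to express the requested change, sketched below (this is not the committed HDFS-13663 patch; the helper is hypothetical): replace the bare assert with an explicit check that throws, so a bogus length can never reach setNumBytes() even when assertions are disabled.

{code:java}
import java.io.IOException;

/** Sketch of the requested behavior: fail loudly instead of asserting. */
final class BlockLengthCheck {
  private BlockLengthCheck() { }

  /**
   * Returns minLength if it was actually updated by the recovery loop,
   * otherwise throws instead of letting Long.MAX_VALUE leak into
   * newBlock.setNumBytes() (which produced the confusing DataNode WARN above).
   */
  static long checkedMinLength(long minLength, Object block, Object bestState)
      throws IOException {
    if (minLength == Long.MAX_VALUE) {
      throw new IOException("Cannot recover block " + block
          + ": no replica matched best state " + bestState
          + ", so no valid on-disk length was found.");
    }
    return minLength;
  }
}
{code}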






[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542456#comment-16542456
 ] 

Xiao Chen commented on HDFS-13663:
--

The test failures are not related to this patch. Committing this.

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>     newBlock.setNumBytes(finalizedLength);
>     break;
>   case RBW:
>   case RWR:
>     long minLength = Long.MAX_VALUE;
>     for (BlockRecord r : syncList) {
>       ReplicaState rState = r.rInfo.getOriginalReplicaState();
>       if (rState == bestState) {
>         minLength = Math.min(minLength, r.rInfo.getNumBytes());
>         participatingList.add(r);
>       }
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("syncBlock replicaInfo: block=" + block +
>             ", from datanode " + r.id + ", receivedState=" + rState.name() +
>             ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
>             bestState.name());
>       }
>     }
>     // recover() guarantees syncList will have at least one replica with RWR
>     // or better state.
>     assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception
>     newBlock.setNumBytes(minLength);
>     break;
>   case RUR:
>   case TEMPORARY:
>     assert false : "bad replica state: " + bestState;
>   default:
>     break; // we have 'case' all enum values
>   }
> {code}
> When minLength is Long.MAX_VALUE, it should throw an exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in the datanode log:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.






[jira] [Commented] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542453#comment-16542453
 ] 

genericqa commented on HDDS-252:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 33m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 30s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 

[jira] [Commented] (HDFS-13733) RBF: Add Web UI configurations and descriptions to RBF document

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542448#comment-16542448
 ] 

genericqa commented on HDFS-13733:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931418/HDFS-13733.2.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f808636ce998 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1bc106a |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24591/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add Web UI configurations and descriptions to RBF document
> ---
>
> Key: HDFS-13733
> URL: https://issues.apache.org/jira/browse/HDFS-13733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13733.1.patch, HDFS-13733.2.patch
>
>
> Looks like Web UI configurations and descriptions are lacking in the document 
> at the moment.






[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-12 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542432#comment-16542432
 ] 

Lokesh Jain commented on HDDS-251:
--

Thanks [~bharatviswa] for reviewing the patch! Yes, the compilation failure was 
due to the removal of throws StorageContainerException. I have handled it in the 
v2 patch. The throws StorageContainerException clause is no longer useful with 
the current code, so I have removed it.

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to deleting blocks.
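
As a rough sketch of what the integration implies (hypothetical names; the real HDDS classes and constructors differ): the handler owns a background block-deleting task whose lifecycle follows the handler's own start/stop.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch: a handler that owns a background block-deleting task. */
public class KeyValueHandlerSketch {

  private final ScheduledExecutorService blockDeletingService =
      Executors.newSingleThreadScheduledExecutor();

  /** Start the periodic deletion pass when the handler starts. */
  public void start(long intervalSeconds) {
    blockDeletingService.scheduleWithFixedDelay(
        this::deleteMarkedBlocks, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
  }

  /** Scan containers for blocks marked for deletion and remove them. */
  private void deleteMarkedBlocks() {
    // ... iterate over containers, delete chunks/metadata of to-be-deleted blocks ...
  }

  /** Stop the service when the handler (datanode) shuts down. */
  public void stop() {
    blockDeletingService.shutdownNow();
  }
}
{code}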






[jira] [Updated] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-251:
-
Attachment: HDDS-251.002.patch

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to deleting blocks.






[jira] [Commented] (HDFS-13733) RBF: Add Web UI configurations and descriptions to RBF document

2018-07-12 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542416#comment-16542416
 ] 

Takanobu Asanuma commented on HDFS-13733:
-

Thanks for the review, [~elgoiri]! Updated the patch addressing your comment.

> RBF: Add Web UI configurations and descriptions to RBF document
> ---
>
> Key: HDFS-13733
> URL: https://issues.apache.org/jira/browse/HDFS-13733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13733.1.patch, HDFS-13733.2.patch
>
>
> Looks like Web UI configurations and descriptions are lacking in the document 
> at the moment.






[jira] [Updated] (HDFS-13733) RBF: Add Web UI configurations and descriptions to RBF document

2018-07-12 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13733:

Attachment: HDFS-13733.2.patch

> RBF: Add Web UI configurations and descriptions to RBF document
> ---
>
> Key: HDFS-13733
> URL: https://issues.apache.org/jira/browse/HDFS-13733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13733.1.patch, HDFS-13733.2.patch
>
>
> Looks like Web UI configurations and descriptions are lacking in the document 
> at the moment.






[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542396#comment-16542396
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931394/HDFS-13663.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e33b9ae88bd9 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 556d9b3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24590/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Comment Edited] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542347#comment-16542347
 ] 

Bharat Viswanadham edited comment on HDDS-252 at 7/13/18 12:10 AM:
---

Attached patch v01.

Fixed the TestOzoneConfigurationFields test failure. The other test failure is 
not related to this patch.


was (Author: bharatviswa):
Fixed test failure TestOzoneConfigurationFields. Other test failure is not 
related to this patch.

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-252.00.patch, HDDS-252.01.patch
>
>
> This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores the 
> datanodeUuid and some additional fields.
> Also, if the disk containing the datanodeId path is removed, that DN becomes 
> unusable with the current code.
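
A minimal sketch of the direction described above (assuming a simple Properties-style VERSION file; the real HDDS on-disk format may differ): read the datanodeUuid from any volume's VERSION file instead of a separate datanode ID file, generating a new one only when no volume has it yet.

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

/** Hypothetical sketch: derive the datanode ID from per-volume VERSION files. */
public final class DatanodeIdFromVolumes {
  private DatanodeIdFromVolumes() { }

  static String loadOrGenerateDatanodeUuid(List<File> volumeVersionFiles)
      throws IOException {
    for (File versionFile : volumeVersionFiles) {
      if (!versionFile.exists()) {
        continue;  // e.g. the disk holding this volume was removed
      }
      Properties props = new Properties();
      try (FileInputStream in = new FileInputStream(versionFile)) {
        props.load(in);
      }
      String uuid = props.getProperty("datanodeUuid");
      if (uuid != null) {
        return uuid;  // any healthy volume is enough; no separate ID file needed
      }
    }
    // First start (or all volumes are new): generate a fresh ID that would be
    // persisted into each volume's VERSION file during format().
    return UUID.randomUUID().toString();
  }
}
{code}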






[jira] [Commented] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542347#comment-16542347
 ] 

Bharat Viswanadham commented on HDDS-252:
-

Fixed the TestOzoneConfigurationFields test failure. The other test failure is 
not related to this patch.

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-252.00.patch, HDDS-252.01.patch
>
>
> This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores the 
> datanodeUuid and some additional fields.
> Also, if the disk containing the datanodeId path is removed, that DN becomes 
> unusable with the current code.






[jira] [Updated] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-252:

Attachment: HDDS-252.01.patch

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-252.00.patch, HDDS-252.01.patch
>
>
> This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores the 
> datanodeUuid and some additional fields.
> Also, if the disk containing the datanodeId path is removed, that DN becomes 
> unusable with the current code.






[jira] [Commented] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542340#comment-16542340
 ] 

genericqa commented on HDDS-252:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 24s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | 

[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542319#comment-16542319
 ] 

genericqa commented on HDDS-226:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 12s{color} | 
{color:black} 

[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542312#comment-16542312
 ] 

genericqa commented on HDDS-226:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 44s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 22s{color} | 
{color:black} 

[jira] [Comment Edited] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542289#comment-16542289
 ] 

Bharat Viswanadham edited comment on HDDS-249 at 7/12/18 10:54 PM:
---

Hi [~hanishakoneru]

Thanks for the review.
 # Before calling hddsVolume.getHddsRootDir().listFiles(), there is a call to 
format() which creates a version file inside hddsDir. If format() succeeds, the 
above logic check is fine for us; to safeguard, we can add a check that the file 
name is the version file. If format() fails to create the VersionFile, we return 
and consider that volume failed, because in that case we return false (the code 
is below). If we return false, we add the volume to failVolumeMap.

{code:java}
try {
  hddsVolume.format(clusterId);
} catch (IOException ex) {
  logger.error("Error during formatting volume {}, exception is",
      volumeRoot, ex);
  return result;
}
{code}
2. The scmDir check is done here with a file-exists check, so the scmId matching 
check is not needed here if the above logic holds good.
{code:java}
else if (!scmDir.exists()) {
  // Already existing volume, and this is not first time dn is started
  logger.error("Volume {} is in Inconsistent state, missing scm {} " +
      "directory", volumeRoot, scmId);
}
{code}


was (Author: bharatviswa):
Hi [~hanishakoneru]

Thanks for the review.
 # before calling hddsVolume.getHddsRootDir().listFiles(), there is call for 
format which creates a version file inside hddsDir.  So consider it is 
succeded, the above logic check is fine for us, to safeguard we can add a check 
for name of the file is version file. Consider the format failed to create 
VersionFile, then we return and consider that volume is failed. Because in that 
case, we return false. below is the code for that. if we return false, we add 
to failVolumeMap.

{code:java}
hddsVolume.format(clusterId);
} catch (IOException ex) {
logger.error("Error during formatting volume {}, exception is",
volumeRoot, ex);
return result;
}{code}
2. scmDir check is done here with file exists. So, that scmId matching check is 
not needed here, if above logic holds good.                                     
                                                                                
                                    
{code:java}
else if (!scmDir.exists()) {
// Already existing volume, and this is not first time dn is started
logger.error("Volume {} is in Inconsistent state, missing scm {} " +
"directory", volumeRoot, scmId);
}{code}

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira take care of following conditions:
>  # If multiple Scm directories exist on datanode, it fails that volume.
>  # validate SCMID response from SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542290#comment-16542290
 ] 

Xiao Chen commented on HDFS-13663:
--

+1 pending jenkins. Thanks Shweta!

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
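
For illustration, a minimal sketch of the change the description asks for, replacing the assert with an explicit failure (the exact exception type and message used in the attached patches may differ):
{code:java}
// Hedged sketch: fail explicitly instead of relying on an assert, which is a
// no-op when the JVM runs without -ea.
if (minLength == Long.MAX_VALUE) {
  throw new IOException("No replica in state " + bestState
      + " found while recovering block " + block);
}
newBlock.setNumBytes(minLength);
{code}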



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542289#comment-16542289
 ] 

Bharat Viswanadham commented on HDDS-249:
-

Hi [~hanishakoneru]

Thanks for the review.
 # before calling hddsVolume.getHddsRootDir().listFiles(), there is call for 
format which creates a version file inside hddsDir.  So consider it is 
succeded, the above logic check is fine for us, to safeguard we can add a check 
for name of the file is version file. Consider the format failed to create 
VersionFile, then we return and consider that volume is failed. Because in that 
case, we return false. below is the code for that. if we return false, we add 
to failVolumeMap.

{code:java}
hddsVolume.format(clusterId);
} catch (IOException ex) {
logger.error("Error during formatting volume {}, exception is",
volumeRoot, ex);
return result;
}{code}
2. scmDir check is done here with file exists. So, that scmId matching check is 
not needed here, if above logic holds good.                                     
                                                                                
                                    
{code:java}
else if (!scmDir.exists()) {
// Already existing volume, and this is not first time dn is started
logger.error("Volume {} is in Inconsistent state, missing scm {} " +
"directory", volumeRoot, scmId);
}{code}

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira take care of following conditions:
>  # If multiple Scm directories exist on datanode, it fails that volume.
>  # validate SCMID response from SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-252:
--
Fix Version/s: 0.2.1

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-252.00.patch
>
>
> This Jira is to remove the datanodeID file. After ContainerIO  work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores 
> datanodeUuid and some additional fields in that file.
> And also if this disk containing datanodeId path is removed, that DN will now 
> be unusable with current code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542284#comment-16542284
 ] 

genericqa commented on HDFS-13421:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
36s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
58s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 19 new + 1 
unchanged - 0 fixed = 20 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13421 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931387/HDFS-13421-HDFS-12090.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2428f87d584e 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12090 / eecb5ba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 

[jira] [Commented] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542280#comment-16542280
 ] 

ASF GitHub Bot commented on HDFS-13734:
---

GitHub user bschell opened a pull request:

https://github.com/apache/hadoop/pull/403

HDFS-13734. Allow HDFS heapsizes to be configured separately

Adds options for HDFS_NAMENODE_HEAPSIZE, HDFS_SECONDARYNAMENODE_HEAPSIZE, 
HDFS_JOURNALNODE_HEAPSIZE and HDFS_DATANODE_HEAPSIZE to hadoop-env.sh so that 
HDFS daemon JVM heap sizes can be configured separately. This matches the 
configuration of the YARN daemons' heap sizes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bschell/hadoop bschelle/heapsize

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/403.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #403


commit e724ef08ed04f7d8d358b8674580c7c615285769
Author: Scheller 
Date:   2018-07-10T23:06:48Z

Allow Datanode and Namenode heapsize to be configured seperately

Adds option for HDFS_NAMENODE_HEAPSIZE, HDFS_SECONDARYNAMENODE_HEAPSIZE, 
HDFS_JOURNALNODE_HEAPSIZE and HDFS_DATANODE_HEAPSIZE to hadoop-env.sh so that 
namenode and datanode JVM heapsizes can be separately configured. This matches 
the configuration of YARN daemon's heapsizes.




> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set HDFS daemon heapsize differently. 
> While still possible through adding the -Xmx to HDFS_*DAEMON*_OPTS, this is 
> not intuitive for this relatively common setting.
> YARN currently has these separate YARN_*DAEMON*_HEAPSIZE variables supported 
> so it seems natural for HDFS too.
> It also looks like HDFS use to have this for namenode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE
> This JIRA is to have these configurations added/supported



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542275#comment-16542275
 ] 

Shweta commented on HDFS-13663:
---

Hi Xiao,

I have updated the patch as you suggested above.

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13663:
--
Attachment: (was: HDFS-13663.003.patch)

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13663:
--
Attachment: HDFS-13663.003.patch

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13663:
--
Attachment: (was: HDFS-13663.004.patch)

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542270#comment-16542270
 ] 

genericqa commented on HDDS-250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 14s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-250 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931373/HDDS-250.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542266#comment-16542266
 ] 

Hanisha Koneru commented on HDDS-249:
-

Thanks for working on this, [~bharatviswa].
 * I think this work is dependent on HDDS-241. Let's say we have a situation 
where a volume has an scmDir but the VERSION file is missing, and during 
formatting we fail to create the VERSION file. The following check in 
HddsVolumeUtil assumes that the single file inside the hdds dir is the VERSION 
file whereas, in this situation, it is an scmDir. We will end up creating two 
scmDirs with no VERSION file and returning the volume as healthy.
{code:java}
File[] hddsFiles = hddsVolume.getHddsRootDir().listFiles();
if (hddsFiles !=null && hddsFiles.length == 1) {
  // DN started for first time or this is a newly added volume.
  // So we create scm directory. So only version file should be available.
  if (!scmDir.mkdir()) {
logger.error("Unable to create scmDir {}", scmDir);
  }
  result = true;
} else if (!scmDir.exists()) {
  // Already existing volume, and this is not first time dn is started
  logger.error("Volume {} is in Inconsistent state, missing scm {} " +
  "directory", volumeRoot, scmId);
} else {
  result = true;
}
{code}
Once HDDS-241 goes in, we can detect an inconsistent volume and this 
situation can be avoided.

 * We will also need to verify that the scmId matches the name of the scmDir 
inside the hddsVolume dir (see the sketch after this list).

 * NIT: Unrelated to this change, in VersionEndPointTask, line #80 the null 
check is for clusterId. Could you please fix that as well along with this change?
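
A hedged sketch of the scmId verification suggested in the second bullet (illustrative only; variable names follow the snippet above and the real check in the patch may be structured differently):
{code:java}
// Hedged sketch: any directory already present under the hdds root must match
// the scmId received from SCM; otherwise fail the volume.
File[] scmDirs = hddsVolume.getHddsRootDir().listFiles(File::isDirectory);
if (scmDirs != null) {
  for (File dir : scmDirs) {
    if (!dir.getName().equals(scmId)) {
      logger.error("Volume {} has an scm directory {} that does not match " +
          "the SCM id {}", volumeRoot, dir.getName(), scmId);
      return false;
    }
  }
}{code}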

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira take care of following conditions:
>  # If multiple Scm directories exist on datanode, it fails that volume.
>  # validate SCMID response from SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-12 Thread Brandon Scheller (JIRA)
Brandon Scheller created HDFS-13734:
---

 Summary: Add Heapsize variables for HDFS daemons
 Key: HDFS-13734
 URL: https://issues.apache.org/jira/browse/HDFS-13734
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, journal-node, namenode
Affects Versions: 3.0.3
Reporter: Brandon Scheller


Currently there are no variables to set HDFS daemon heap sizes individually. While 
it is still possible by adding -Xmx to HDFS_*DAEMON*_OPTS, this is not 
intuitive for such a relatively common setting.

YARN already supports separate YARN_*DAEMON*_HEAPSIZE variables, so 
it seems natural for HDFS too.

It also looks like HDFS used to have this for the namenode with 
HADOOP_NAMENODE_INIT_HEAPSIZE.

This JIRA is to have these configurations added/supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542246#comment-16542246
 ] 

Xiao Chen commented on HDFS-13697:
--

Thanks all for joining the discussion today! Here are the discussion notes:

Attendees: [~zvenczel], [~xiaochen], [~daryn], [~xyao], [~jnp].
 - We reviewed the history of changes that landed us to where we are today 
(chronological order):
-# HADOOP-10698 (proxy support, which first stored the ugi at kmscp creation) 
-# HADOOP-11176 (bug 1)
-# HADOOP-11368 (sslfactory truststore reloader thread leak)
-# HDFS-7718 (keyprovider cache implemented)
-# HADOOP-11482 (bug 2)
-# HADOOP-12787 (bug 3) 
-# HADOOP-13381 (bug 4) 
-# HADOOP-13749 (bug 5, which has many discussions) 
 - We discussed and had consensus on the following:
-# the client should retain the identity it had at client creation time. The current 
way of jumping among UGIs at method invocation should be changed.
-# the difficulty in caching the ugi at KMSCP creation time is that the 
KeyProviderCache would make different callers share the same KMSCP.
- If we can figure out a way for {{KMSCP$sslfactory}} and its underlying 
{{KeyStoresFactory}} to be shared, then HDFS-7718 is a non-issue and we can 
revert the KeyProviderCache. This would restore the world to the state where each 
DFSClient caches its own KMSCP. 
- It's probably still worth caching the KMSCP per DFSClient, since creating a 
KMSCP is not cheap. 
- No 'actual ugi' needs to be figured out in the KMSCP ctor; {{UGI#getCurrentUser}} 
should suffice (a rough sketch is below).
- Daryn also brought up the point that it may be possible to use a 
shared/periodic method to check the truststore files, eliminating the need for 
the truststore reloader thread to be created at all.
- We discussed the possibility of doing all of the above in a separate jira and 
committing this one for now (then reverting once the separate jira is done), but 
this feels fragile and would incur more maintenance cost.

[~zvenczel], would you mind changing this jira to the more general, bigger 
issue? I'm thinking something like KMSClientProvider should cache and use the 
UGI at creation time.
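
A hedged sketch of that direction, not the actual patch; the field and helper names here are illustrative:
{code:java}
// Capture the caller's UGI once when the provider is constructed and run every
// remote operation as that identity.
private final UserGroupInformation creationUgi;

public KMSClientProvider(URI uri, Configuration conf) throws IOException {
  // ... existing construction ...
  this.creationUgi = UserGroupInformation.getCurrentUser();
}

private <T> T runAsCreationUser(PrivilegedExceptionAction<T> action)
    throws IOException {
  try {
    return creationUgi.doAs(action);
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new IOException(e);
  }
}
{code}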

> EDEK decrypt fails due to proxy user being lost because of empty 
> AccessControllerContext
> 
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack 
> might not have doAs privileged execution call (in the DFSClient for example). 
> This results in loosing the proxy user from UGI as UGI.getCurrentUser finds 
> no AccessControllerContext and does a re-login for the login user only.
> This can cause the following for example: if we have set up the oozie user to 
> be entitled to perform actions on behalf of example_user but oozie is 
> forbidden to decrypt any EDEK (for security reasons), due to the above issue, 
> example_user entitlements are lost from UGI and the following error is 
> reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> 

[jira] [Comment Edited] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542219#comment-16542219
 ] 

Bharat Viswanadham edited comment on HDDS-251 at 7/12/18 9:36 PM:
--

Hi [~ljain]

Thank you for reporting and fixing this issue.

The patch is not compiling. This is caused by removing the throws 
StorageContainerException clause from the getContainerData() method, so the tests 
below now catch an exception that can never be thrown (a sketch of the resulting 
test change follows the compiler output).
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java:[110,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java:[138,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java:[263,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement


{code}
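
For clarity, the corresponding change needed in the affected tests, shown as a hedged sketch with illustrative names (the real test code differs):
{code:java}
// Hedged sketch: with the checked exception removed from getContainerData(),
// javac rejects a catch clause for it, so the try/catch collapses to a plain call.
//
// Before:
// try {
//   containerData = handler.getContainerData(containerId);
// } catch (StorageContainerException e) {
//   fail(e.getMessage());
// }
//
// After:
containerData = handler.getContainerData(containerId);
{code}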


was (Author: bharatviswa):
Hi [~ljain]

Thank You for reporting and fixing this issue.

Patch is not compiling.
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java:[110,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java:[138,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

{code}

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542219#comment-16542219
 ] 

Bharat Viswanadham commented on HDDS-251:
-

Hi [~ljain]

Thank You for reporting and fixing this issue.

Patch is not compiling.
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java:[110,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-ozone-integration-test: Compilation 
failure
[ERROR] 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java:[138,7]
 exception 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException 
is never thrown in body of corresponding try statement

{code}

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542218#comment-16542218
 ] 

genericqa commented on HDDS-187:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931375/HDDS-187.11.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  cc  |
| uname | Linux e71cfb7e2afa 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 

[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-12 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542216#comment-16542216
 ] 

Ewan Higgs commented on HDFS-13421:
---

004
 * Rebased on HDFS-13310 patch 7.
 * Add a new class {{BlockInputStream}} which is a facade around {{BlockReader}} 
and supports the {{InputStream}} interface (a rough sketch of the idea follows 
this list). This is intended to allow the blocks to be read lazily through the 
MultipartUploader.
 * Connect {{BPOfferService}} to {{SyncServiceSatisfierDatanodeWorker}}
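
A rough sketch of the facade idea from the second bullet (hedged; the class in the actual patch is more involved and the constructor arguments shown here are illustrative):
{code:java}
// Hedged sketch: adapt BlockReader's read(byte[], int, int) to the
// java.io.InputStream contract so blocks can be consumed lazily.
public class BlockInputStream extends InputStream {
  private final BlockReader blockReader;

  public BlockInputStream(BlockReader blockReader) {
    this.blockReader = blockReader;
  }

  @Override
  public int read() throws IOException {
    byte[] b = new byte[1];
    int n = read(b, 0, 1);
    return n <= 0 ? -1 : (b[0] & 0xFF);
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    return blockReader.read(buf, off, len);
  }

  @Override
  public void close() throws IOException {
    blockReader.close();
  }
}
{code}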

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, 
> HDFS-13421-HDFS-12090.004.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP 
> command in Datanode. 
> These have been broken up to make reviewing it easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13421:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, 
> HDFS-13421-HDFS-12090.004.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP 
> command in Datanode. 
> These have been broken up to make reviewing it easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13421:
--
Attachment: HDFS-13421-HDFS-12090.004.patch

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, 
> HDFS-13421-HDFS-12090.004.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP 
> command in Datanode. 
> These have been broken up to make reviewing it easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode

2018-07-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13421:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
> ---
>
> Key: HDFS-13421
> URL: https://issues.apache.org/jira/browse/HDFS-13421
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13421-HDFS-12090.001.patch, 
> HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, 
> HDFS-13421-HDFS-12090.004.patch
>
>
> HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP 
> command in Datanode. 
> These have been broken up to make reviewing it easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-252:

Attachment: HDDS-252.00.patch

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-252.00.patch
>
>
> This Jira is to remove the datanodeID file. After ContainerIO  work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores 
> datanodeUuid and some additional fields in that file.
> And also if this disk containing datanodeId path is removed, that DN will now 
> be unusable with current code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-252:

Status: Patch Available  (was: In Progress)

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-252.00.patch
>
>
> This Jira is to remove the datanodeID file. After ContainerIO  work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores 
> datanodeUuid and some additional fields in that file.
> And also if this disk containing datanodeId path is removed, that DN will now 
> be unusable with current code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542190#comment-16542190
 ] 

genericqa commented on HDDS-187:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 41s{color} | 
{color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdds_server-scm generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} 

[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542182#comment-16542182
 ] 

Shashikant Banerjee commented on HDDS-226:
--

As per offline discussion with [~anu], removed the ozoneBlockInfo class and 
added a blockLength field inside the BlockId class in patch v4.

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch, HDDS-226.04.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated on OM.
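
A minimal sketch of the idea (identifiers below are illustrative, not the actual patch): on close, the client reports the bytes it actually wrote so that OM records the real key length rather than the pre-allocated SCM block size.

{code:java}
// Hypothetical client-side commit flow; names are assumptions.
long bytesWritten = keyOutputStream.getBytesWritten();  // actual data written, assumed accessor
keyArgs.setDataSize(bytesWritten);                       // report the real length, not the allocated block size
omClient.commitKey(keyArgs, clientID);                   // OM persists the corrected length
{code}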



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-226:
-
Attachment: HDDS-226.04.patch

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch, HDDS-226.04.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated on OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542180#comment-16542180
 ] 

Xiao Chen edited comment on HDFS-13663 at 7/12/18 8:35 PM:
---

Hi Shweta,

Thanks for the update. Patch 3 looks really close. Could you remove the extra 
line above the {{break}}?

 


was (Author: xiaochen):
Hi Shweta,

Patch 3 looks really close. Could you remove the extra line above the {{break}}?

 

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch, HDFS-13663.004.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw an exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in the datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
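
A minimal sketch of the suggested change (not the attached patch): replace the assert with an explicit check so that recovery fails fast even when asserts are disabled, instead of silently recording Long.MAX_VALUE as the block length.

{code:java}
// Sketch only: fail recovery explicitly rather than relying on an assert.
if (minLength == Long.MAX_VALUE) {
  throw new IOException("Incorrect block size: no replica in syncList was in "
      + bestState + " state for block " + block);
}
newBlock.setNumBytes(minLength);
break;
{code}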



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-226:
-
Attachment: (was: HDDS-226.04.patch)

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated on OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542180#comment-16542180
 ] 

Xiao Chen commented on HDFS-13663:
--

Hi Shweta,

Patch 3 looks really close. Could you remove the extra line above the {{break}}?

 

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch, HDFS-13663.004.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw an exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in the datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-226:
-
Attachment: HDDS-226.04.patch

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch, HDDS-226.04.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated on OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-187.11.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch, HDDS-187.08.patch, HDDS-187.09.patch, HDDS-187.10.patch, 
> HDDS-187.11.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes them via 
> CommandHandler. This jira intends to create a command status publisher which 
> will return the status of these commands back to the SCM.
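
The rough shape of such a publisher might look like the following (a sketch with assumed names, not the interface from the patch): command handlers record a status per received command, and the accumulated statuses are drained and sent back to SCM with the regular reports.

{code:java}
// Illustrative only; names and types are assumptions.
public enum CommandExecutionStatus { PENDING, EXECUTED, FAILED }

public interface CommandStatusPublisher {
  /** Called by a command handler when a command finishes or fails. */
  void updateStatus(long commandId, CommandExecutionStatus status);

  /** Drained by the report publisher and shipped to SCM with the next heartbeat. */
  Map<Long, CommandExecutionStatus> drainStatuses();
}
{code}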



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13728) Disk Balancer should not fail if volume usage is greater than capacity

2018-07-12 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542153#comment-16542153
 ] 

Stephen O'Donnell commented on HDFS-13728:
--

I was happy to leave this one to Gabor, but since I already had the change made in my 
local branch to debug the original issue and just needed to add a test, I have 
uploaded what I have, with the WARN message that was suggested.

> Disk Balancer should not fail if volume usage is greater than capacity
> --
>
> Key: HDFS-13728
> URL: https://issues.apache.org/jira/browse/HDFS-13728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: diskbalancer
>Affects Versions: 3.0.3
>Reporter: Stephen O'Donnell
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13728.001.patch
>
>
> We have seen a couple of scenarios where the disk balancer fails because a 
> datanode reports more space used on a disk than its capacity, which should 
> not be possible.
> This is due to the check below in DiskBalancerVolume.java:
> {code}
>   public void setUsed(long dfsUsedSpace) {
> Preconditions.checkArgument(dfsUsedSpace < this.getCapacity(),
> "DiskBalancerVolume.setUsed: dfsUsedSpace(%s) < capacity(%s)",
> dfsUsedSpace, getCapacity());
> this.used = dfsUsedSpace;
>   }
> {code}
> While I agree that it should not be possible for a DN to report more usage on 
> a volume than its capacity, there seems to be some issue that causes this to 
> occur sometimes.
> In general, this full disk is what causes someone to want to run the Disk 
> Balancer, only to find it fails with the error.
> There appears to be nothing you can do to force the Disk Balancer to run at 
> this point, but in the scenarios I saw, some data was removed from the disk 
> and usage dropped below the capacity resolving the issue.
> Can we consider relaxing the above check, and if the usage is greater than 
> the capacity, just set the usage to the capacity so the calculations all work 
> OK?
> E.g. something like this:
> {code}
>public void setUsed(long dfsUsedSpace) {
> -Preconditions.checkArgument(dfsUsedSpace < this.getCapacity());
> -this.used = dfsUsedSpace;
> +if (dfsUsedSpace > this.getCapacity()) {
> +  this.used = this.getCapacity();
> +} else {
> +  this.used = dfsUsedSpace;
> +}
>}
> {code}
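
A minimal sketch of the relaxed check with the WARN that was suggested (assuming an SLF4J logger named LOG; not necessarily the committed patch):

{code:java}
// Cap the reported usage at capacity and warn, instead of failing the plan.
public void setUsed(long dfsUsedSpace) {
  if (dfsUsedSpace > this.getCapacity()) {
    LOG.warn("DiskBalancerVolume.setUsed: dfsUsedSpace({}) is greater than "
        + "capacity({}); capping used space at capacity.",
        dfsUsedSpace, getCapacity());
    this.used = this.getCapacity();
  } else {
    this.used = dfsUsedSpace;
  }
}
{code}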



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542146#comment-16542146
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks for the review [~bharatviswa].

Updated the patch to address your comment and also made the following changes.
 * Made ContainerData an abstract class. Each new ContainerType should extend 
this class.
 * Renamed some variables as per their usage.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13728) Disk Balancer should not fail if volume usage is greater than capacity

2018-07-12 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-13728:
-
Attachment: HDFS-13728.001.patch

> Disk Balancer should not fail if volume usage is greater than capacity
> --
>
> Key: HDFS-13728
> URL: https://issues.apache.org/jira/browse/HDFS-13728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: diskbalancer
>Affects Versions: 3.0.3
>Reporter: Stephen O'Donnell
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13728.001.patch
>
>
> We have seen a couple of scenarios where the disk balancer fails because a 
> datanode reports more space used on a disk than its capacity, which should 
> not be possible.
> This is due to the check below in DiskBalancerVolume.java:
> {code}
>   public void setUsed(long dfsUsedSpace) {
> Preconditions.checkArgument(dfsUsedSpace < this.getCapacity(),
> "DiskBalancerVolume.setUsed: dfsUsedSpace(%s) < capacity(%s)",
> dfsUsedSpace, getCapacity());
> this.used = dfsUsedSpace;
>   }
> {code}
> While I agree that it should not be possible for a DN to report more usage on 
> a volume than its capacity, there seems to be some issue that causes this to 
> occur sometimes.
> In general, this full disk is what causes someone to want to run the Disk 
> Balancer, only to find it fails with the error.
> There appears to be nothing you can do to force the Disk Balancer to run at 
> this point, but in the scenarios I saw, some data was removed from the disk 
> and usage dropped below the capacity resolving the issue.
> Can we consider relaxing the above check, and if the usage is greater than 
> the capacity, just set the usage to the capacity so the calculations all work 
> OK?
> E.g. something like this:
> {code}
>public void setUsed(long dfsUsedSpace) {
> -Preconditions.checkArgument(dfsUsedSpace < this.getCapacity());
> -this.used = dfsUsedSpace;
> +if (dfsUsedSpace > this.getCapacity()) {
> +  this.used = this.getCapacity();
> +} else {
> +  this.used = dfsUsedSpace;
> +}
>}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.001.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: (was: HDDS-187.11.patch)

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch, HDDS-187.08.patch, HDDS-187.09.patch, HDDS-187.10.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes them via 
> CommandHandler. This jira intends to create a command status publisher which 
> will return the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-228) Add the ReplicaMaps to ContainerStateManager

2018-07-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542129#comment-16542129
 ] 

Ajay Kumar commented on HDDS-228:
-

[~anu] thanks for review and commit. [~nandakumar131] thanks for review and 
discussion.

> Add the ReplicaMaps to ContainerStateManager
> 
>
> Key: HDDS-228
> URL: https://issues.apache.org/jira/browse/HDDS-228
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-228.00.patch, HDDS-228.01.patch, HDDS-228.02.patch, 
> HDDS-228.03.patch, HDDS-228.04.patch, HDDS-228.05.patch, HDDS-228.06.patch
>
>
> We need to maintain a list of data nodes in the SCM that tells us where a 
> container is located. This is created from the container reports. HDDS-175 
> refactored the class to make this separation easy, and this JIRA is a follow-up 
> that keeps a hash table to track this information.
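
A minimal sketch of such a hash table (names below are illustrative, not the structure from the patch): container reports feed a concurrent map from container ID to the set of datanodes known to hold a replica.

{code:java}
// Illustrative only; ContainerID and DatanodeDetails are the existing HDDS types.
private final Map<ContainerID, Set<DatanodeDetails>> replicaMap =
    new ConcurrentHashMap<>();

void addReplica(ContainerID containerID, DatanodeDetails datanode) {
  replicaMap.computeIfAbsent(containerID, k -> ConcurrentHashMap.newKeySet())
      .add(datanode);
}

Set<DatanodeDetails> getReplicas(ContainerID containerID) {
  return replicaMap.getOrDefault(containerID, Collections.emptySet());
}
{code}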



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-234) Add SCM node report handler

2018-07-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542130#comment-16542130
 ] 

Ajay Kumar commented on HDDS-234:
-

[~anu] thanks for review and commit.

> Add SCM node report handler
> ---
>
> Key: HDDS-234
> URL: https://issues.apache.org/jira/browse/HDDS-234
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-234.00.patch, HDDS-234.01.patch, HDDS-234.02.patch, 
> HDDS-234.03.patch
>
>
> This ticket is opened to add the SCM node report handler after the refactoring. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542127#comment-16542127
 ] 

Ajay Kumar commented on HDDS-187:
-

Patch v11 to address checkstyle issues.

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch, HDDS-187.08.patch, HDDS-187.09.patch, HDDS-187.10.patch, 
> HDDS-187.11.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes them via 
> CommandHandler. This jira intends to create a command status publisher which 
> will return the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-187.11.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch, 
> HDDS-187.07.patch, HDDS-187.08.patch, HDDS-187.09.patch, HDDS-187.10.patch, 
> HDDS-187.11.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes them via 
> CommandHandler. This jira intends to create a command status publisher which 
> will return the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542113#comment-16542113
 ] 

genericqa commented on HDDS-249:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-249 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931364/HDDS-249.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2d8c58a5c354 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a08812a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/510/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
|  Test 

[jira] [Updated] (HDDS-234) Add SCM node report handler

2018-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-234:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ajayydv] Thank you for the contribution. I have committed this to the trunk.

> Add SCM node report handler
> ---
>
> Key: HDDS-234
> URL: https://issues.apache.org/jira/browse/HDDS-234
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-234.00.patch, HDDS-234.01.patch, HDDS-234.02.patch, 
> HDDS-234.03.patch
>
>
> This ticket is opened to add the SCM node report handler after the refactoring. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-181) CloseContainer should commit all pending open Keys on a datanode

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542092#comment-16542092
 ] 

genericqa commented on HDDS-181:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 35m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-181 |
| JIRA Patch URL | 

[jira] [Updated] (HDDS-228) Add the ReplicaMaps to ContainerStateManager

2018-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-228:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for the reviews. [~ajayydv]  Thanks for the 
contribution. I have committed this to the trunk.

Some acceptance tests failed; these failures are not related to this patch.
{noformat}
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port  | FAIL |
Test timeout 2 minutes exceeded.
--
RestClient with http port
RestClient with http port | FAIL |
Test timeout 2 minutes exceeded.
--
RestClient without host name  | FAIL |
Test timeout 2 minutes exceeded.
--





{noformat}
 

> Add the ReplicaMaps to ContainerStateManager
> 
>
> Key: HDDS-228
> URL: https://issues.apache.org/jira/browse/HDDS-228
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-228.00.patch, HDDS-228.01.patch, HDDS-228.02.patch, 
> HDDS-228.03.patch, HDDS-228.04.patch, HDDS-228.05.patch, HDDS-228.06.patch
>
>
> We need to maintain a list of data nodes in the SCM that tells us where a 
> container is located. This is created from the container reports. HDDS-175 
> refactored the class to make this separation easy, and this JIRA is a follow-up 
> that keeps a hash table to track this information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-252:
---

 Summary: Eliminate the datanode ID file
 Key: HDDS-252
 URL: https://issues.apache.org/jira/browse/HDDS-252
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
branch) is merged, we have a version file in each Volume which stores the 
datanodeUuid and some additional fields.

Also, if the disk containing the datanodeId path is removed, that DN becomes 
unusable with the current code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-252) Eliminate the datanode ID file

2018-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-252 started by Bharat Viswanadham.
---
> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores the 
> datanodeUuid and some additional fields.
> Also, if the disk containing the datanodeId path is removed, that DN becomes 
> unusable with the current code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542029#comment-16542029
 ] 

Bharat Viswanadham commented on HDDS-249:
-

Fixed the findbugs issue. With the previous patch it should not have been reported 
either, as we checked for not null and only then called the length method. Modified 
the patch a little to see if the findbugs issue is fixed.

Attached patch v02.

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on the datanode, fail that volume.
>  # Validate the SCM ID response from SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-249:

Attachment: HDDS-249.02.patch

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on the datanode, fail that volume.
>  # Validate the SCM ID response from SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-07-12 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13310:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch, 
> HDFS-13310-HDFS-12090.004.patch, HDFS-13310-HDFS-12090.005.patch, 
> HDFS-13310-HDFS-12090.006.patch, HDFS-13310-HDFS-12090.007.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-07-12 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13310:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch, 
> HDFS-13310-HDFS-12090.004.patch, HDFS-13310-HDFS-12090.005.patch, 
> HDFS-13310-HDFS-12090.006.patch, HDFS-13310-HDFS-12090.007.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13729) Fix broken links to RBF documentation

2018-07-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13729:
-
   Resolution: Fixed
Fix Version/s: 2.9.2
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2 and branch-2.9. Thanks [~gabor.bota]!

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13729) Fix broken links to RBF documentation

2018-07-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542011#comment-16542011
 ] 

Akira Ajisaka commented on HDFS-13729:
--

LGTM, +1

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541965#comment-16541965
 ] 

Íñigo Goiri commented on HDFS-13475:


I think the proposal in  [^HDFS-13475.002.patch] is much cleaner; isolating the 
safe mode logic makes the code simpler.
I think the unit test covers the issue described in this JIRA.
My only concern is how long it takes, as it adds an extra 7 seconds, which 
basically doubles the test length.
I wonder if there is any way to speed this up a little; for example, cutting 
the times a little and waiting for it to get into RUNNING instead of just 
sleeping.

A minor nit regarding code, we currently have:
{code}
RouterSafemodeService safeModeService = this.router.getSafemodeService();
if (safeModeService != null) {
  this.router.updateRouterState(RouterServiceState.SAFEMODE);
  safeModeService.setManualSafeMode(true);
  return EnterSafeModeResponse.newInstance(verifySafeMode(true));
}
return EnterSafeModeResponse.newInstance(false);
{code}

It is personal preference, but I would go for a single point of exit:
{code}
boolean success = false;
RouterSafemodeService safeModeService = this.router.getSafemodeService();
if (safeModeService != null) {
  this.router.updateRouterState(RouterServiceState.SAFEMODE);
  safeModeService.setManualSafeMode(true);
  success = verifySafeMode(true);
}
return EnterSafeModeResponse.newInstance(success);
{code}

Similar for the other modified methods in {{RouterAdminServer}}.

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch, 
> HDFS-13475.002.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541953#comment-16541953
 ] 

genericqa commented on HDDS-226:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 13s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 15s{color} | 
{color:black} 

[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541940#comment-16541940
 ] 

genericqa commented on HDDS-10:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
54s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 15 new + 0 unchanged - 0 fixed = 15 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 62 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 12 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931344/HDDS-10-HDDS-4.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  shellcheck  shelldocs  |
| uname | Linux 3a3447f6a7a0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 7ca0144 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDDS-Build/508/artifact/out/diff-patch-shellcheck.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/508/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/508/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/508/testReport/ |
| asflicense | 

[jira] [Commented] (HDFS-13733) RBF: Add Web UI configurations and descriptions to RBF document

2018-07-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541933#comment-16541933
 ] 

Íñigo Goiri commented on HDFS-13733:


Thanks [~tasanuma0829] for [^HDFS-13733.1.patch].
As we are already tweaking this, I think it might be worth mentioning WebHDFS 
with the following 
[link|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
 (as a relative link).

> RBF: Add Web UI configurations and descriptions to RBF document
> ---
>
> Key: HDFS-13733
> URL: https://issues.apache.org/jira/browse/HDFS-13733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13733.1.patch
>
>
> Looks like Web UI configurations and descriptions are missing from the document at 
> the moment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-181) CloseContainer should commit all pending open Keys on a datanode

2018-07-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-181:
-
Status: Patch Available  (was: Open)

> CloseContainer should commit all pending open Keys on a datanode
> 
>
> Key: HDDS-181
> URL: https://issues.apache.org/jira/browse/HDDS-181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-181.01.patch, HDDS-181.02.patch, HDDS-181.03.patch
>
>
> A close container command arrives at the Datanode via the SCM heartbeat 
> response. It is then queued up over the Ratis pipeline. Once command execution 
> starts inside the Datanode, the container is marked as being in the CLOSING 
> state. All pending open keys for the container are then committed, followed by 
> the transition of the container state from CLOSING to CLOSED. To achieve this, 
> all open keys for a container need to be tracked.
> This Jira aims to address that.
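As a rough, hedged sketch of the bookkeeping described above (an illustrative class only, not the actual datanode container code), open keys can be tracked per container and committed when the close command is processed:

{code:java|title=Illustrative sketch only (hypothetical names)}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Track open keys per container so a close-container command can commit
// them before the container moves from CLOSING to CLOSED.
public class OpenKeyTracker {
  public interface KeyCommitter {
    void commit(long containerId, String keyName);
  }

  private final Map<Long, Set<String>> openKeysByContainer =
      new ConcurrentHashMap<>();

  public void keyOpened(long containerId, String keyName) {
    openKeysByContainer
        .computeIfAbsent(containerId, id -> ConcurrentHashMap.newKeySet())
        .add(keyName);
  }

  public void keyCommitted(long containerId, String keyName) {
    Set<String> keys = openKeysByContainer.get(containerId);
    if (keys != null) {
      keys.remove(keyName);
    }
  }

  /** Commit every pending open key of the container before marking it CLOSED. */
  public void closeContainer(long containerId, KeyCommitter committer) {
    Set<String> keys = openKeysByContainer.remove(containerId);
    if (keys != null) {
      for (String key : keys) {
        committer.commit(containerId, key);
      }
    }
  }
}
{code}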



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-07-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-10:
---
Attachment: HDDS-10-HDDS-4.03.patch

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-07-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541834#comment-16541834
 ] 

Ajay Kumar commented on HDDS-10:


[~elek] thanks for the suggestion. I think it makes sense. Patch v3 addresses that.

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541728#comment-16541728
 ] 

Shashikant Banerjee commented on HDDS-226:
--

Patch v3 fixes the related test case failure seen with patch v2. The other test 
case failure is unrelated.

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated in OM.
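A minimal sketch of the intended commit flow, assuming the client tracks the bytes it actually wrote; the {{OzoneManagerClient}} interface and {{commitKey}} signature below are hypothetical stand-ins, not the real OM protocol:

{code:java|title=Illustrative sketch only (hypothetical API)}
// Hypothetical stand-in for the OM commit call.
interface OzoneManagerClient {
  void commitKey(String volume, String bucket, String key, long actualLength);
}

class KeyOutputStreamSketch {
  private long bytesWritten = 0;

  void write(byte[] data) {
    // ... data is written into the pre-allocated block here ...
    bytesWritten += data.length;
  }

  /**
   * On close, report the bytes actually written rather than the
   * pre-allocated SCM block size, so OM records the real key length.
   */
  void close(OzoneManagerClient om, String volume, String bucket, String key) {
    om.commitKey(volume, bucket, key, bytesWritten);
  }
}
{code}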



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-226:
-
Attachment: HDDS-226.03.patch

> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch, 
> HDDS-226.03.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-238) Add Node2Pipeline Map in SCM to track ratis/standalone pipelines.

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541718#comment-16541718
 ] 

genericqa commented on HDDS-238:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m  5s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931311/HDDS-238.003.patch |
| Optional Tests |  asflicense  compile 

[jira] [Commented] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541698#comment-16541698
 ] 

genericqa commented on HDDS-226:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 41s{color} | 
{color:black} 

[jira] [Work started] (HDFS-13732) Erasure Coding policy name is not coming when the new policy is set

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13732 started by Zsolt Venczel.

> Erasure Coding policy name is not coming when the new policy is set
> ---
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS 
> directory, the console message still comes out as "Set default erasure coding 
> policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set...
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  
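A small illustrative sketch of the requested behavior (the surrounding command code is hypothetical; only the idea of echoing the applied policy name comes from the report):

{code:java|title=Illustrative sketch only}
// Echo the EC policy name that was actually applied.
public class EcPolicyMessage {
  static String setPolicyMessage(String ecPolicyName, String path) {
    if (ecPolicyName == null || ecPolicyName.isEmpty()) {
      return "Set default erasure coding policy on " + path;
    }
    return "Set " + ecPolicyName + " erasure coding policy on " + path;
  }

  public static void main(String[] args) {
    // Example: prints "Set RS-6-3-1024k erasure coding policy on /data/ec"
    System.out.println(setPolicyMessage("RS-6-3-1024k", "/data/ec"));
  }
}
{code}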



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13732) Erasure Coding policy name is not coming when the new policy is set

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HDFS-13732:


Assignee: Zsolt Venczel

> Erasure Coding policy name is not coming when the new policy is set
> ---
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS 
> directory, the console message still comes out as "Set default erasure coding 
> policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set...
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13731) Investigate TestReencryption timeouts

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13731 started by Zsolt Venczel.

> Investigate TestReencryption timeouts
> -
>
> Key: HDFS-13731
> URL: https://issues.apache.org/jira/browse/HDFS-13731
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
>
> HDFS-12837 fixed some flakiness of Reencryption related tests. But as 
> [~zvenczel]'s comment, there are a few timeouts still. We should investigate 
> that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13731) Investigate TestReencryption timeouts

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HDFS-13731:


Assignee: Zsolt Venczel

> Investigate TestReencryption timeouts
> -
>
> Key: HDFS-13731
> URL: https://issues.apache.org/jira/browse/HDFS-13731
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
>
> HDFS-12837 fixed some flakiness of Reencryption related tests. But as 
> [~zvenczel]'s comment, there are a few timeouts still. We should investigate 
> that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13596 started by Zsolt Venczel.

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Assigned] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-07-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HDFS-13596:


Assignee: Zsolt Venczel

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Comment Edited] (HDFS-13730) BlockReaderRemote.sendReadResult throws NPE

2018-07-12 Thread Yuanbo Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541624#comment-16541624
 ] 

Yuanbo Liu edited comment on HDFS-13730 at 7/12/18 1:21 PM:


[~jojochuang] I'd suggest returning null instead of "" to stay consistent with 
socket.getRemoteAddress()


was (Author: yuanbo):
[~jojochuang] I'd suggested to return null instead of "" to keep consistent 
with socket.getRemoteAddress()

> BlockReaderRemote.sendReadResult throws NPE
> ---
>
> Key: HDFS-13730
> URL: https://issues.apache.org/jira/browse/HDFS-13730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0 + HBASE-20403.
> (hbase-site.xml) hbase.rs.prefetchblocksonopen=true
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Major
> Attachments: HDFS-13730.001.patch
>
>
> Found the following exception thrown in an HBase RegionServer log (Hadoop 
> 3.0.0 + HBase 2.0.0; the HBase prefetch bug HBASE-20403 was fixed on this 
> cluster, but I am not sure if that's related at all):
> {noformat}
> 2018-07-11 11:10:44,462 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Stream moved/closed or 
> prefetch 
> cancelled?path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180711003954/449fa9bf5a7483295493258b5af50abc/meta/e9de0683f8a9413a94183c752bea0ca5,
>  offset=216505135,
> end=2309991906
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.net.NioInetPeer.getRemoteAddressString(NioInetPeer.java:99)
> at 
> org.apache.hadoop.hdfs.net.EncryptedPeer.getRemoteAddressString(EncryptedPeer.java:105)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.sendReadResult(BlockReaderRemote.java:330)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:165)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1050)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
> at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:331)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:805)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1565)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1769)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){noformat}
> The relevant Hadoop code:
> {code:java|title=BlockReaderRemote#sendReadResult}
> void sendReadResult(Status statusCode) {
>   assert !sentStatusCode : "already sent status code to " + peer;
>   try {
> writeReadResult(peer.getOutputStream(), statusCode);
> sentStatusCode = true;
>   } catch (IOException e) {
> // It's ok not to be able to send this. But something is probably wrong.
> LOG.info("Could not send read status (" + statusCode + ") to datanode " +
> peer.getRemoteAddressString() + ": " + e.getMessage());
>   }
> }
> {code}
> So the NPE was thrown within an exception handler. A possible explanation 
> could be that the socket was closed so the client couldn't write, and 
> Socket#getRemoteSocketAddress() returns null when the socket is closed.
> Suggest checking for null and returning an empty string in 
> NioInetPeer.getRemoteAddressString.
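A hedged sketch of the null check suggested above; the class shape below is assumed for illustration and is not the actual NioInetPeer source:

{code:java|title=Illustrative sketch only (assumed class shape)}
import java.net.Socket;
import java.net.SocketAddress;

// Null-safe remote-address string; the real NioInetPeer may differ.
class NullSafePeer {
  private final Socket socket;

  NullSafePeer(Socket socket) {
    this.socket = socket;
  }

  String getRemoteAddressString() {
    // Per the report, getRemoteSocketAddress() can be null here (e.g. once
    // the socket is closed), so guard before calling toString().
    SocketAddress remote = socket.getRemoteSocketAddress();
    return remote == null ? "" : remote.toString();
  }
}
{code}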



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDFS-13730) BlockReaderRemote.sendReadResult throws NPE

2018-07-12 Thread Yuanbo Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541624#comment-16541624
 ] 

Yuanbo Liu commented on HDFS-13730:
---

[~jojochuang] I'd suggested to return null instead of "" to keep consistent 
with socket.getRemoteAddress()

> BlockReaderRemote.sendReadResult throws NPE
> ---
>
> Key: HDFS-13730
> URL: https://issues.apache.org/jira/browse/HDFS-13730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0 + HBASE-20403.
> (hbase-site.xml) hbase.rs.prefetchblocksonopen=true
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Major
> Attachments: HDFS-13730.001.patch
>
>
> Found the following exception thrown in an HBase RegionServer log (Hadoop 
> 3.0.0 + HBase 2.0.0; the HBase prefetch bug HBASE-20403 was fixed on this 
> cluster, but I am not sure if that's related at all):
> {noformat}
> 2018-07-11 11:10:44,462 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Stream moved/closed or 
> prefetch 
> cancelled?path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180711003954/449fa9bf5a7483295493258b5af50abc/meta/e9de0683f8a9413a94183c752bea0ca5,
>  offset=216505135,
> end=2309991906
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.net.NioInetPeer.getRemoteAddressString(NioInetPeer.java:99)
> at 
> org.apache.hadoop.hdfs.net.EncryptedPeer.getRemoteAddressString(EncryptedPeer.java:105)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.sendReadResult(BlockReaderRemote.java:330)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:165)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1050)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
> at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:331)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:805)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1565)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1769)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){noformat}
> The relevant Hadoop code:
> {code:java|title=BlockReaderRemote#sendReadResult}
> void sendReadResult(Status statusCode) {
>   assert !sentStatusCode : "already sent status code to " + peer;
>   try {
> writeReadResult(peer.getOutputStream(), statusCode);
> sentStatusCode = true;
>   } catch (IOException e) {
> // It's ok not to be able to send this. But something is probably wrong.
> LOG.info("Could not send read status (" + statusCode + ") to datanode " +
> peer.getRemoteAddressString() + ": " + e.getMessage());
>   }
> }
> {code}
> So the NPE was thrown within an exception handler. A possible explanation 
> could be that the socket was closed so the client couldn't write, and 
> Socket#getRemoteSocketAddress() returns null when the socket is closed.
> Suggest checking for null and returning an empty string in 
> NioInetPeer.getRemoteAddressString.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13730) BlockReaderRemote.sendReadResult throws NPE

2018-07-12 Thread Yuanbo Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-13730:
--
Attachment: HDFS-13730.001.patch

> BlockReaderRemote.sendReadResult throws NPE
> ---
>
> Key: HDFS-13730
> URL: https://issues.apache.org/jira/browse/HDFS-13730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0 + HBASE-20403.
> (hbase-site.xml) hbase.rs.prefetchblocksonopen=true
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Major
> Attachments: HDFS-13730.001.patch
>
>
> Found the following exception thrown in an HBase RegionServer log (Hadoop 
> 3.0.0 + HBase 2.0.0; the HBase prefetch bug HBASE-20403 was fixed on this 
> cluster, but I am not sure if that's related at all):
> {noformat}
> 2018-07-11 11:10:44,462 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Stream moved/closed or 
> prefetch 
> cancelled?path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180711003954/449fa9bf5a7483295493258b5af50abc/meta/e9de0683f8a9413a94183c752bea0ca5,
>  offset=216505135,
> end=2309991906
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.net.NioInetPeer.getRemoteAddressString(NioInetPeer.java:99)
> at 
> org.apache.hadoop.hdfs.net.EncryptedPeer.getRemoteAddressString(EncryptedPeer.java:105)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.sendReadResult(BlockReaderRemote.java:330)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:165)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1050)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
> at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:331)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:805)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1565)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1769)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){noformat}
> The relevant Hadoop code:
> {code:java|title=BlockReaderRemote#sendReadResult}
> void sendReadResult(Status statusCode) {
>   assert !sentStatusCode : "already sent status code to " + peer;
>   try {
> writeReadResult(peer.getOutputStream(), statusCode);
> sentStatusCode = true;
>   } catch (IOException e) {
> // It's ok not to be able to send this. But something is probably wrong.
> LOG.info("Could not send read status (" + statusCode + ") to datanode " +
> peer.getRemoteAddressString() + ": " + e.getMessage());
>   }
> }
> {code}
> So the NPE was thrown within an exception handler. A possible explanation 
> could be that the socket was closed so the client couldn't write, and 
> Socket#getRemoteSocketAddress() returns null when the socket is closed.
> Suggest checking for null and returning an empty string in 
> NioInetPeer.getRemoteAddressString.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13729) Fix broken links to RBF documentation

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541613#comment-16541613
 ] 

genericqa commented on HDFS-13729:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HDFS-13729 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931319/HDFS-13729-branch-2.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 97dab191663e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / fbe7192 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Max. process+thread count | 65 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24588/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-12 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541599#comment-16541599
 ] 

Jack Bearden commented on HDFS-13722:
-

Thanks [~aw]

> HDFS Native Client Fails Compilation on Ubuntu 18.04
> 
>
> Key: HDFS-13722
> URL: https://issues.apache.org/jira/browse/HDFS-13722
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: trunk
> Fix For: 3.2.0
>
> Attachments: HDFS-13722.001.patch
>
>
> When compiling the hdfs-native-client on Ubuntu 18.04, the RPC request.cc file 
> fails to compile.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13729) Fix broken links to RBF documentation

2018-07-12 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541595#comment-16541595
 ] 

Gabor Bota commented on HDFS-13729:
---

I've uploaded a patch for branch-2, which applies to branch-2.9 as well.

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13729) Fix broken links to RBF documentation

2018-07-12 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13729:
--
Attachment: HDFS-13729-branch-2.001.patch

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13730) BlockReaderRemote.sendReadResult throws NPE

2018-07-12 Thread Yuanbo Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HDFS-13730:
-

Assignee: Yuanbo Liu

> BlockReaderRemote.sendReadResult throws NPE
> ---
>
> Key: HDFS-13730
> URL: https://issues.apache.org/jira/browse/HDFS-13730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0 + HBASE-20403.
> (hbase-site.xml) hbase.rs.prefetchblocksonopen=true
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Major
>
> Found the following exception thrown in an HBase RegionServer log (Hadoop 
> 3.0.0 + HBase 2.0.0; the HBase prefetch bug HBASE-20403 was fixed on this 
> cluster, but I am not sure if that's related at all):
> {noformat}
> 2018-07-11 11:10:44,462 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Stream moved/closed or 
> prefetch 
> cancelled?path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180711003954/449fa9bf5a7483295493258b5af50abc/meta/e9de0683f8a9413a94183c752bea0ca5,
>  offset=216505135,
> end=2309991906
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.net.NioInetPeer.getRemoteAddressString(NioInetPeer.java:99)
> at 
> org.apache.hadoop.hdfs.net.EncryptedPeer.getRemoteAddressString(EncryptedPeer.java:105)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.sendReadResult(BlockReaderRemote.java:330)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:165)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1050)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
> at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:331)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:805)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1565)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1769)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){noformat}
> The relevant Hadoop code:
> {code:java|title=BlockReaderRemote#sendReadResult}
> void sendReadResult(Status statusCode) {
>   assert !sentStatusCode : "already sent status code to " + peer;
>   try {
> writeReadResult(peer.getOutputStream(), statusCode);
> sentStatusCode = true;
>   } catch (IOException e) {
> // It's ok not to be able to send this. But something is probably wrong.
> LOG.info("Could not send read status (" + statusCode + ") to datanode " +
> peer.getRemoteAddressString() + ": " + e.getMessage());
>   }
> }
> {code}
> So the NPE was thrown within an exception handler. A possible explanation 
> could be that the socket was closed so the client couldn't write, and 
> Socket#getRemoteSocketAddress() returns null when the socket is closed.
> Suggest checking for null and returning an empty string in 
> NioInetPeer.getRemoteAddressString.
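
A minimal sketch of the suggested guard, assuming the peer exposes its 
underlying java.net.Socket; the helper below is hypothetical, while the real 
method lives in org.apache.hadoop.hdfs.net.NioInetPeer:

{code:java|title=Sketch of the suggested null check}
import java.net.Socket;
import java.net.SocketAddress;

// Hypothetical helper for illustration only; the real implementation would
// live in NioInetPeer.getRemoteAddressString().
final class RemoteAddressExample {
  static String remoteAddressString(Socket socket) {
    SocketAddress remote = socket.getRemoteSocketAddress();
    // Socket#getRemoteSocketAddress() returns null once the socket is
    // closed, so return an empty string instead of risking an NPE.
    return remote == null ? "" : remote.toString();
  }
}
{code}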



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-226) Client should update block length in OM while committing the key

2018-07-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541502#comment-16541502
 ] 

Shashikant Banerjee edited comment on HDDS-226 at 7/12/18 11:42 AM:


Thanks [~anu], for the review comments.
{quote}Another orthogonal question: sorry for these random comments. I see we 
have a BlockID class, then we create a new class called OmBlockInfo and add one 
more field, blockLength. Why is this not added as part of BlockID?
{quote}
As part of the BlockCommitProtocol, we need to update each block with the 
actual block length written as well as the BlockCommitSequenceId.
The BlockID class, as I understand it, is used to identify a unique block 
across the system, so I thought it would be less confusing to keep the 
per-block info that OzoneManager needs in a separate class. 
BlockCommitSequenceId will be added to the OzoneBlockInfo class in subsequent 
patches (mentioned in a TODO item).
{quote}I am very confused by this code. Can you please check? Why are you 
checking keyArgs; did you intend to check blockInfoList?
{quote}
Yes, it was a mistake. Addressed in the latest patch.
{quote}OmKeysArgs.java: More of a question, when would this sum not be equal to 
the data size?
{quote}
In the earlier patch, when we created a key we set the key length, and when we 
committed the key to OM it only updated each of the block lengths, not the 
total size. Hence, once the key is committed, the actual key size is determined 
by the sum of all block lengths.
Patch v2 now updates each of the block lengths as well as the total size of the 
key, so the related code you pointed out has been removed.

All the other review comments are addressed.
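
As a rough illustration of the commit-time accounting described above, here is 
a minimal sketch; the class and field names below (OmBlockInfo, length) are 
hypothetical stand-ins rather than the actual OM types:

{code:java|title=Sketch: committed key size as the sum of block lengths}
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the per-block info kept by OM; the real Ozone
// classes and fields differ.
final class OmBlockInfo {
  final long length; // actual bytes written to this block, reported at commit

  OmBlockInfo(long length) {
    this.length = length;
  }
}

final class KeyCommitExample {
  // Once the key is committed, its size is the sum of the per-block lengths
  // the client actually wrote, not the size pre-allocated at key creation.
  static long committedKeySize(List<OmBlockInfo> blocks) {
    long total = 0;
    for (OmBlockInfo b : blocks) {
      total += b.length;
    }
    return total;
  }

  public static void main(String[] args) {
    // e.g. two fully written blocks plus a partially written last block
    List<OmBlockInfo> blocks = Arrays.asList(
        new OmBlockInfo(256L << 20),
        new OmBlockInfo(256L << 20),
        new OmBlockInfo(10L << 20));
    System.out.println("committed key size = " + committedKeySize(blocks));
  }
}
{code}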



> Client should update block length in OM while committing the key
> 
>
> Key: HDDS-226
> URL: https://issues.apache.org/jira/browse/HDDS-226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-226.00.patch, HDDS-226.01.patch, HDDS-226.02.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated on OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


