[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-10 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: HDFS-13245.011.patch

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-10 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-10 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471511#comment-16471511
 ] 

maobaolong commented on HDFS-13480:
---

[~elgoiri] [~linyiqun] Please take a look. Thank you.

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch
>
>
> Currently, if I enable heartbeat.enable but do not want to monitor any
> namenode, I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> And if I disable heartbeat.enable, we cannot get any mount table updates,
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}
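The separation this issue proposes could look roughly like the sketch below: two independent flags, so the router heartbeat (and mount table refresh) no longer depends on the namenode heartbeat flag. The configuration key names and service names here are illustrative assumptions, not the final HDFS-13480 keys.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of splitting the single heartbeat.enable flag into two
// independent config keys. Key names below are assumptions for illustration.
public class HeartbeatConfigSketch {

  static final String DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE =
      "dfs.federation.router.namenode.heartbeat.enable";
  static final String DFS_ROUTER_HEARTBEAT_ENABLE =
      "dfs.federation.router.heartbeat.enable";

  /** Returns the heartbeat services that would be started for the two flags. */
  static List<String> servicesToStart(boolean namenodeHeartbeat,
                                      boolean routerHeartbeat) {
    List<String> services = new ArrayList<>();
    if (namenodeHeartbeat) {
      // Only this branch requires monitored namenodes to exist.
      services.add("NamenodeHeartbeatService");
    }
    if (routerHeartbeat) {
      // Router state updates no longer depend on the namenode flag.
      services.add("RouterHeartbeatService");
    }
    return services;
  }
}
```

With this split, disabling namenode monitoring alone no longer suppresses RouterHeartbeatService.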






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-10 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-10 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13480:
--
Status: Patch Available  (was: Open)

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch
>
>
> Currently, if I enable heartbeat.enable but do not want to monitor any
> namenode, I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> And if I disable heartbeat.enable, we cannot get any mount table updates,
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}






[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-10 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13480:
--
Attachment: HDFS-13480.001.patch

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch
>
>
> Currently, if I enable heartbeat.enable but do not want to monitor any
> namenode, I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> And if I disable heartbeat.enable, we cannot get any mount table updates,
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-10 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13245:
--
Attachment: (was: HDFS-13245.001)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-10 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13245:
--
Attachment: HDFS-13245.001

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001, HDFS-13245.001.patch, 
> HDFS-13245.002.patch, HDFS-13245.003.patch, HDFS-13245.004.patch, 
> HDFS-13245.005.patch, HDFS-13245.006.patch, HDFS-13245.007.patch, 
> HDFS-13245.008.patch, HDFS-13245.009.patch, HDFS-13245.010.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471461#comment-16471461
 ] 

genericqa commented on HDFS-13346:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
2s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13346 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922946/HDFS-13346.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 029b993546fb 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d76fbbc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24177/testReport/ |
| Max. process+thread count | 960 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24177/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Fix synchronization of router quota and ns quota
> -
>
> Key: HDFS-13346
> URL: 

[jira] [Commented] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471455#comment-16471455
 ] 

Mukul Kumar Singh commented on HDDS-21:
---

Thanks for working on this [~ljain]. The patch looks really good to me.
Please find my review comments below.

1) KeyHandler.java:254, there is an extra "-" in the javadoc.
2) The key parsing path should not be changed from "/{volume}/{bucket}/{keys:.*}";
otherwise the RestServer cannot manage keys of the form
"/vol/bucket/keynamepart1/keynamepart2", i.e. key names that are delimited by
"/". For renames, I feel we should instead pass the toname as an argument to
the REST request. This can be done using "builder.addParameter" in RestClient.
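A minimal sketch of the suggested alternative: keep the "/"-delimited key name in the URL path untouched and carry the rename target as a query parameter. The parameter name `toKey` and the host are illustrative assumptions, not the actual RestClient API.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Hedged sketch: the rename target travels as a query parameter, so key
// names containing "/" still resolve through the unchanged path pattern.
public class RenameRequestSketch {
  static URI renameUri(String volume, String bucket, String key, String toKey) {
    try {
      // 5-arg URI constructor: scheme, authority, path, query, fragment.
      // The key stays in the path even when it contains "/" segments.
      return new URI("http", "ozone.example.com",
          "/" + volume + "/" + bucket + "/" + key,
          "toKey=" + toKey, null);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException(e);
    }
  }
}
```

A real implementation would build this with the client's own URI builder (e.g. `addParameter`) rather than string concatenation, which also handles percent-encoding.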

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDDS-21.002.patch, 
> HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for rename key within a bucket for rest client.






[jira] [Updated] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-39:
--
Status: Patch Available  (was: Open)

> Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using 
> maven protoc plugin
> --
>
> Key: HDDS-39
> URL: https://issues.apache.org/jira/browse/HDDS-39
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-39.003.patch, HDFS-13389-HDFS-7240.001.patch, 
> HDFS-13389-HDFS-7240.002.patch
>
>
> Currently all the Ozone/HDFS/Cblock proto files are compiled with protoc 2.5;
> this can be changed to use the proto3 compiler.
> This change will also improve performance, because currently in the client
> path the xceiver client ratis converts proto2 classes to proto3 using byte
> string manipulation.
> Please note that for the rest of Hadoop (except Ozone/Cblock/HDSL) the protoc
> version will remain 2.5, as this proto compilation will be done through the
> following plugin:
> https://www.xolstice.org/protobuf-maven-plugin/
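For context, wiring the xolstice plugin into a module's pom.xml typically looks like the fragment below; the plugin version and protoc coordinates shown are illustrative assumptions, not the values from the HDDS-39 patch.

```xml
<!-- Hedged sketch of protobuf-maven-plugin wiring; version numbers and the
     protoc artifact coordinates are assumptions, not the patch's values. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <!-- Pulls a platform-specific protoc 3.x binary from Maven Central. -->
    <protocArtifact>com.google.protobuf:protoc:3.5.0:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Because the plugin downloads its own protoc, the rest of Hadoop can keep building with protoc 2.5 on the host.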






[jira] [Commented] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471448#comment-16471448
 ] 

Mukul Kumar Singh commented on HDDS-39:
---

[~xyao] and [~szetszwo] Please have a look at the v3 patch.

This patch compiles DatanodeContainerProtocol.proto using the proto3 compiler.
It further avoids a buffer copy in XceiverClientRatis and
ContainerStateMachine, and also enables native client integration for Ozone
using the gRPC C++ APIs.

Most of the changes are import changes; the actual changes are limited to
{{XceiverClientRatis}}, {{ContainerStateMachine}} and
{{hadoop-hdds/common/pom.xml}}.

> Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using 
> maven protoc plugin
> --
>
> Key: HDDS-39
> URL: https://issues.apache.org/jira/browse/HDDS-39
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-39.003.patch, HDFS-13389-HDFS-7240.001.patch, 
> HDFS-13389-HDFS-7240.002.patch
>
>
> Currently all the Ozone/HDFS/Cblock proto files are compiled with protoc 2.5;
> this can be changed to use the proto3 compiler.
> This change will also improve performance, because currently in the client
> path the xceiver client ratis converts proto2 classes to proto3 using byte
> string manipulation.
> Please note that for the rest of Hadoop (except Ozone/Cblock/HDSL) the protoc
> version will remain 2.5, as this proto compilation will be done through the
> following plugin:
> https://www.xolstice.org/protobuf-maven-plugin/






[jira] [Updated] (HDDS-39) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-05-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-39:
--
Attachment: HDDS-39.003.patch

> Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using 
> maven protoc plugin
> --
>
> Key: HDDS-39
> URL: https://issues.apache.org/jira/browse/HDDS-39
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-39.003.patch, HDFS-13389-HDFS-7240.001.patch, 
> HDFS-13389-HDFS-7240.002.patch
>
>
> Currently all the Ozone/HDFS/Cblock proto files are compiled with protoc 2.5;
> this can be changed to use the proto3 compiler.
> This change will also improve performance, because currently in the client
> path the xceiver client ratis converts proto2 classes to proto3 using byte
> string manipulation.
> Please note that for the rest of Hadoop (except Ozone/Cblock/HDSL) the protoc
> version will remain 2.5, as this proto compilation will be done through the
> following plugin:
> https://www.xolstice.org/protobuf-maven-plugin/






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471416#comment-16471416
 ] 

genericqa commented on HDFS-13542:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13542 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922938/HDFS-13542.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 41117c7f9479 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7369f41 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24176/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24176/testReport/ |
| Max. process+thread count | 3214 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota

2018-05-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471405#comment-16471405
 ] 

Yiqun Lin commented on HDFS-13346:
--

Thanks [~elgoiri] for the review.
{quote}
In HDFS-13346.005.patch, I would prefer using a Whitebox instead of having to 
expose the quota module but I'm OK with it.
{quote}
I used Whitebox in the latest patch.
Attaching the new patch and pending Jenkins.
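For readers unfamiliar with Whitebox-style access: it reads private test-subject state reflectively instead of exposing a getter. A minimal sketch of the underlying mechanism follows; the class and field names are illustrative, not those of the actual HDFS-13346 patch.

```java
import java.lang.reflect.Field;

// Hedged sketch of what Whitebox-style helpers do under the hood:
// reflectively read a private field rather than widening the API.
public class WhiteboxSketch {

  static Object getInternalState(Object target, String fieldName) {
    try {
      Field f = target.getClass().getDeclaredField(fieldName);
      f.setAccessible(true); // bypass the private modifier for the test
      return f.get(target);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }

  // Illustrative stand-in for a class whose quota state a test inspects.
  static class Example {
    private final int nsQuota = 400;
  }
}
```

The trade-off discussed above: this keeps the quota module's visibility unchanged, at the cost of tests depending on a field name.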


> RBF: Fix synchronization of router quota and ns quota
> -
>
> Key: HDFS-13346
> URL: https://issues.apache.org/jira/browse/HDFS-13346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: incompatible
> Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch, 
> HDFS-13346.003.patch, HDFS-13346.004.patch, HDFS-13346.005.patch, 
> HDFS-13346.006.patch
>
>
> Check Router Quota and ns Quota:
> {code}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 150/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>          150    -155    none    inf    3    302    0    hdfs://ns10/ns10t
> {code}
> Update Router Quota:
> {code:java}
> $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
> Successfully set quota for mount point /ns10t
> {code}
> Check Router Quota and ns Quota:
> {code:java}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 400/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>          150    -155    none    inf    3    302    0    hdfs://ns10/ns10t
> {code}
> Now the Router quota has been updated successfully, but the ns quota has not.
>  






[jira] [Updated] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota

2018-05-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13346:
-
Attachment: HDFS-13346.006.patch

> RBF: Fix synchronization of router quota and ns quota
> -
>
> Key: HDFS-13346
> URL: https://issues.apache.org/jira/browse/HDFS-13346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: incompatible
> Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch, 
> HDFS-13346.003.patch, HDFS-13346.004.patch, HDFS-13346.005.patch, 
> HDFS-13346.006.patch
>
>
> Check Router Quota and ns Quota:
> {code}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 150/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>          150    -155    none    inf    3    302    0    hdfs://ns10/ns10t
> {code}
> Update Router Quota:
> {code:java}
> $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
> Successfully set quota for mount point /ns10t
> {code}
> Check Router Quota and ns Quota:
> {code:java}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 400/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>          150    -155    none    inf    3    302    0    hdfs://ns10/ns10t
> {code}
> Now the Router quota has been updated successfully, but the ns quota has not.
>  
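Until the synchronization is fixed, a possible manual workaround (a sketch only, assuming a running federated cluster with the mount table shown above; this is not part of the patch) is to apply the same quota to the destination namespace directly after updating the router-side quota, so the two values agree:
{code}
$ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
$ hdfs dfsadmin -fs hdfs://ns10 -setQuota 400 /ns10t
{code}
The second command uses the standard {{dfsadmin -setQuota}} on the underlying namespace; the actual fix should make the Router propagate the quota automatically.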






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471377#comment-16471377
 ] 

genericqa commented on HDDS-34:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 54s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields 

[jira] [Commented] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471338#comment-16471338
 ] 

Hudson commented on HDDS-43:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14167 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14167/])
HDDS-43: Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md. 
(bharat: rev 84b305f11a67e6f420e33e1ec30640b8214997e1)
* (edit) hadoop-ozone/acceptance-test/README.md


> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471337#comment-16471337
 ] 

Hudson commented on HDDS-34:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14167 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14167/])
HDDS-34. Remove .meta file during creation of container Contributed by 
(aengineer: rev 30293f6065c9e5b41c07cd670c7a6a1768d1434b)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/InfoContainerHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java


> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch, HDDS-34.002.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash; it is not required.
> This Jira is an attempt to clean up its usage.






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-10 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471324#comment-16471324
 ] 

Anbang Hu commented on HDFS-13542:
--

Related report that shows testNeededReplicationWhileAppending is failing: 
[https://builds.apache.org/job/hadoop-trunk-win/453/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/]

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-10 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471321#comment-16471321
 ] 

Anbang Hu commented on HDFS-13542:
--

*Console output*

Before patch:
{color:#FF0000}2018-05-10T21:01:54.4766448Z [ERROR] Tests run: 21, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 60.687 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager{color}
{color:#FF0000}2018-05-10T21:01:54.4766958Z [ERROR] testNeededReplicationWhileAppending(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager)  Time elapsed: 0.038 s <<< ERROR!{color}
{color:#FF0000}2018-05-10T21:01:54.4767413Z java.io.IOException: Could not fully delete D:\_work\4\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name1{color}
{color:#FF0000}2018-05-10T21:01:54.4767595Z    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1043){color}
{color:#FF0000}2018-05-10T21:01:54.4767757Z    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879){color}
{color:#FF0000}2018-05-10T21:01:54.4768016Z    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:513){color}
{color:#FF0000}2018-05-10T21:01:54.4768425Z    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:472){color}
{color:#FF0000}2018-05-10T21:01:54.4768632Z    at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:443){color}
{color:#FF0000}2018-05-10T21:01:54.4768796Z    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){color}
{color:#FF0000}2018-05-10T21:01:54.4768961Z    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#FF0000}2018-05-10T21:01:54.4769125Z    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#FF0000}2018-05-10T21:01:54.4769271Z    at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#FF0000}2018-05-10T21:01:54.4769434Z    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#FF0000}2018-05-10T21:01:54.4770449Z    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#FF0000}2018-05-10T21:01:54.4770611Z    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#FF0000}2018-05-10T21:01:54.4770788Z    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#FF0000}2018-05-10T21:01:54.4770949Z    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
After patch:
{color:#14892c}2018-05-10T00:36:25.9725758Z [INFO] Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager{color}
{color:#14892c}2018-05-10T00:37:32.3218337Z [INFO] Tests run: 21, Failures: 0, 
Errors: 0, Skipped: 0, Time elapsed: 66.335 s - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager{color}

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 

[jira] [Commented] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471319#comment-16471319
 ] 

Hudson commented on HDDS-37:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14166 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14166/])
HDDS-37. Remove dependency of hadoop-hdds-common and (aengineer: rev 
db1ab0fc1674177fdbe8f50c557aa4052ce77efc)
* (edit) hadoop-ozone/tools/pom.xml


> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can 
> remove them from {{hadoop-ozone/tools/pom.xml}}.






[jira] [Updated] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-43:
---
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

Thank you [~nandakumar131] for the review, and [~Sandeep Nemuri] for reporting 
and fixing the issue.

> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md






[jira] [Updated] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-34:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~bharatviswa] Thank you for the contribution. I have committed this to trunk.

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch, HDDS-34.002.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash; it is not required.
> This Jira is an attempt to clean up its usage.






[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471310#comment-16471310
 ] 

genericqa commented on HDFS-13544:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 15 unchanged - 1 fixed = 15 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.TestDatanodeRegistration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13544 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922909/HDFS-13544.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2ee56a41bc5f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 48d0b54 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | 

[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-37:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~Sandeep Nemuri] Thanks for the patch; welcome to Ozone. [~nandakumar131] 
Thanks for filing the issue and for testing and reviewing it.

 

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can 
> remove them from {{hadoop-ozone/tools/pom.xml}}.






[jira] [Commented] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471301#comment-16471301
 ] 

genericqa commented on HDDS-43:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922927/HDDS-43.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 63ac540e6f47 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7369f41 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/acceptance-test U: hadoop-ozone/acceptance-test |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/78/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-10 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471299#comment-16471299
 ] 

Anbang Hu commented on HDFS-13542:
--

Uploaded a new version that fixes the style issues.

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}
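The "Could not fully delete" error above is cleanup residue: a test that never shut its cluster down leaves file handles open, so the next test cannot delete the data directory on Windows. A minimal sketch of the teardown discipline the issue title implies - shutdown in a finally block - using a stand-in class rather than the real MiniDFSCluster:

```java
// Sketch of the teardown pattern implied by the issue title: always shut the
// cluster down in a finally block so an exception in the test body cannot
// leak file handles that block directory deletion on Windows. "MiniCluster"
// is a stand-in for org.apache.hadoop.hdfs.MiniDFSCluster, not the real class.
public class ShutdownPattern {
  static class MiniCluster {
    boolean running = true;
    void shutdown() { running = false; }
  }

  // Returns the cluster so the teardown can be checked; a real test would
  // keep it in a field and shut it down in an @After method instead.
  static MiniCluster runTestBody(boolean failTest) {
    MiniCluster cluster = new MiniCluster();
    try {
      if (failTest) {
        throw new RuntimeException("simulated test failure");
      }
    } catch (RuntimeException e) {
      // swallowed for the demo; a test framework would report it
    } finally {
      cluster.shutdown(); // runs whether or not the body threw
    }
    return cluster;
  }

  public static void main(String[] args) {
    if (runTestBody(true).running || runTestBody(false).running) {
      throw new AssertionError("cluster leaked");
    }
    System.out.println("cluster shut down in both paths");
  }
}
```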



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-10 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Attachment: HDFS-13542.001.patch
HDFS-13542-branch-2.001.patch

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, 
> HDFS-13542-branch-2.001.patch, HDFS-13542.000.patch, HDFS-13542.001.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13079) Provide a config to start namenode in safemode state upto a certain transaction id

2018-05-10 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471282#comment-16471282
 ] 

Hanisha Koneru commented on HDFS-13079:
---

Thanks [~shashikant] for updating the patch.

{{FSImageTransactionalStorageInspector#getImagesWithTransactionId}} should 
return the images with txId <= the user-supplied txId, right?

Let's say we have three fsImages - fsImage_0012, fsImage_0025 and fsImage_0040 
- and we want to load up to txId 30. The function above should return both 
fsImage_0025 and fsImage_0012, in that order.
{code}
179  ret = new LinkedList();
180  for (FSImageFile img : foundImages) {
181if (ret.isEmpty() && txid <= img.txId) {
182 ret.add(img);
183}
{code}
In the if condition above, when the requested txId (30) is <= img.txId (40), 
that image is added, but it should not be returned. 
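The expected selection rule can be sketched in isolation: keep only images whose txId is at or below the requested transaction id, newest first. {{ImageFile}} below is a minimal stand-in for FSImageFile; this illustrates the selection rule being discussed, not the actual Hadoop code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Stand-in for the selection rule discussed above: given fsImages with
// txIds {12, 25, 40} and a requested txId of 30, return {25, 12} --
// every image at or below the requested id, newest first.
public class ImageSelection {
  static class ImageFile {
    final long txId;
    ImageFile(long txId) { this.txId = txId; }
  }

  static List<ImageFile> imagesUpTo(List<ImageFile> found, long requestedTxId) {
    List<ImageFile> ret = new ArrayList<>();
    for (ImageFile img : found) {
      if (img.txId <= requestedTxId) { // exclude images past the requested id
        ret.add(img);
      }
    }
    // Newest eligible image first, so it is loaded before older ones.
    ret.sort(Comparator.comparingLong((ImageFile i) -> i.txId).reversed());
    return ret;
  }

  public static void main(String[] args) {
    List<ImageFile> found = List.of(
        new ImageFile(12), new ImageFile(25), new ImageFile(40));
    List<ImageFile> picked = imagesUpTo(new ArrayList<>(found), 30);
    if (picked.size() != 2 || picked.get(0).txId != 25 || picked.get(1).txId != 12) {
      throw new AssertionError("expected [25, 12]");
    }
    System.out.println("selection ok"); // prints: selection ok
  }
}
```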




> Provide a config to start namenode in safemode state upto a certain 
> transaction id
> --
>
> Key: HDFS-13079
> URL: https://issues.apache.org/jira/browse/HDFS-13079
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13079.001.patch, HDFS-13079.002.patch, 
> HDFS-13079.003.patch
>
>
> In some cases it is necessary to roll the Namenode back to a certain 
> transaction id. This is especially needed when the user issues a {{rm -Rf 
> -skipTrash}} by mistake.
> Rolling back to a transaction id helps in taking a peek at the filesystem at 
> a particular instant. This jira proposes to provide a configuration variable 
> using which the namenode can be started up to a certain transaction id. The 
> filesystem will be in a read-only safemode which cannot be overridden 
> manually; it can only be lifted by removing the config value from the 
> config file. Please also note that this will not cause any changes to the 
> filesystem state: the filesystem will be in safemode and no changes will 
> be allowed.
> Please note that if a checkpoint has already happened and the requested 
> transaction id has been subsumed in an FSImage, then the namenode will be 
> started with the next nearest transaction id. Further FSImage files and edits 
> will be ignored.
> If a checkpoint hasn't happened, then the namenode will be started with the 
> exact transaction id.
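A hypothetical hdfs-site.xml fragment showing what the proposed switch could look like; the property name below is invented for illustration only - the actual key would be whatever the patch defines.

```xml
<!-- Hypothetical property name, for illustration only; the real key is
     defined by the HDFS-13079 patch. Removing the property returns the
     NameNode to normal startup, per the description above. -->
<property>
  <name>dfs.namenode.startup.max.txid</name>
  <value>30</value>
  <description>If set, start the NameNode in a read-only safemode loaded
  only up to this transaction id.</description>
</property>
```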



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Sandeep Nemuri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471268#comment-16471268
 ] 

Sandeep Nemuri commented on HDDS-37:


Thanks for reviewing the patch [~nandakumar131].

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependency in parent pom {{hadoop-ozone/pom.xml}} we can remove it 
> from {{hadoop-ozone/tools/pom.xml}}
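Because Maven child modules inherit {{<dependencies>}} declared in the parent pom, the redeclaration in the tools module is redundant. An illustrative sketch (artifact ids from the issue; versions and other elements omitted):

```xml
<!-- hadoop-ozone/pom.xml (parent): dependencies declared here are inherited
     by every child module, including hadoop-ozone/tools. -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdds-common</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdds-server-scm</artifactId>
  </dependency>
</dependencies>
<!-- hadoop-ozone/tools/pom.xml (child): the matching <dependency> entries
     can simply be deleted; inheritance keeps them on the classpath. -->
```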



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471267#comment-16471267
 ] 

Íñigo Goiri commented on HDFS-13544:


If getting the ns id is too hard, I'm OK with the journal id.
Can you post some log examples with a few of the new logs?

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it 
> is difficult to tell from some of the log statements which namespace they 
> are logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471266#comment-16471266
 ] 

genericqa commented on HDFS-13443:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 30s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}259m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefresh |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471265#comment-16471265
 ] 

genericqa commented on HDDS-37:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922925/HDDS-37.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 7be130b83f1c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7369f41 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/76/testReport/ |
| Max. process+thread count | 379 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/76/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-37.001.patch
>

[jira] [Commented] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471264#comment-16471264
 ] 

Nanda kumar commented on HDDS-37:
-

+1 (non-binding), LGTM.
Acceptance test results:
{noformat}
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test rest interface   | PASS |
--
Test ozone cli| PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance| PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
{noformat}

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependency in parent pom {{hadoop-ozone/pom.xml}} we can remove it 
> from {{hadoop-ozone/tools/pom.xml}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471247#comment-16471247
 ] 

Bharat Viswanadham edited comment on HDDS-34 at 5/10/18 10:44 PM:
--

[~anu] Thanks for review.

Rebased the patch.


was (Author: bharatviswa):
[~anu] rebased the patch.

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch, HDDS-34.002.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash. This file is not 
> required.
> This Jira is an attempt to clean up its usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471247#comment-16471247
 ] 

Bharat Viswanadham commented on HDDS-34:


[~anu] rebased the patch.

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch, HDDS-34.002.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash. This file is not 
> required.
> This Jira is an attempt to clean up its usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Attachment: HDDS-34.002.patch

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch, HDDS-34.002.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash. This file is not 
> required.
> This Jira is an attempt to clean up its usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-05-10 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-13272:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.2
   Status: Resolved  (was: Patch Available)

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch, testout-HDFS-13272.001, testout-branch-2
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.
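A hypothetical hdfs-site.xml fragment showing what such a knob could look like; the property name below is invented for illustration - the real key is whatever the HDFS-13272 patch defines.

```xml
<!-- Hypothetical property name, for illustration only; the actual key is
     defined by the HDFS-13272 patch. -->
<property>
  <name>dfs.datanode.http.server.threads</name>
  <value>10</value>
  <description>Thread pool size for the DataNode's internal Jetty server
  (previously hard-coded to 10 by HDFS-7279).</description>
</property>
```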



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-05-10 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471243#comment-16471243
 ] 

Chris Douglas commented on HDFS-13272:
--

Thanks [~xkrogen], sorry for the delay.

bq. There were no new failures with the patch applied and it took about the 
same time to run all of them
{{TestClientProtocolForPipelineRecovery}} also fails on branch-2; the other 
tests... often pass. This doesn't seem to introduce any regressions.

+1. I committed this.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch, testout-HDFS-13272.001, testout-branch-2
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471239#comment-16471239
 ] 

Bharat Viswanadham commented on HDDS-43:


+1 LGTM.

I will commit this shortly.

> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471236#comment-16471236
 ] 

Nanda kumar commented on HDDS-43:
-

+1 (non-binding), LGTM.

> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Sandeep Nemuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-43:
---
Status: Patch Available  (was: Open)

Attaching the changes. Kindly review.

> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Sandeep Nemuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-43:
---
Attachment: HDDS-43.001.patch

> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md
> -
>
> Key: HDDS-43
> URL: https://issues.apache.org/jira/browse/HDDS-43
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Trivial
> Attachments: HDDS-43.001.patch
>
>
> Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-43) Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md

2018-05-10 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-43:
--

 Summary: Rename hdsl to hdds in 
hadoop-ozone/acceptance-test/README.md
 Key: HDDS-43
 URL: https://issues.apache.org/jira/browse/HDDS-43
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Sandeep Nemuri
Assignee: Sandeep Nemuri


Rename hdsl to hdds in hadoop-ozone/acceptance-test/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-16) Remove Pipeline from Datanode Container Protocol protobuf definition.

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-16?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471210#comment-16471210
 ] 

Hudson commented on HDDS-16:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14165 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14165/])
HDDS-16. Remove Pipeline from Datanode Container Protocol protobuf (xyao: rev 
7369f410202ea0583606aab2b4771c740d45e231)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/BlockID.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/InfoContainerHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyData.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneContainerTranslation.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java


> Remove Pipeline from Datanode Container Protocol protobuf definition.
> -
>
> Key: HDDS-16
> URL: https://issues.apache.org/jira/browse/HDDS-16
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native, Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-16.001.patch, HDDS-16.002.patch, HDDS-16.003.patch, 
> HDDS-16.004.patch
>
>
> The current Ozone code passes pipeline information to datanodes as well. 
> However, datanodes do not use this information.
> Hence Pipeline should be removed from ozone datanode commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-37:

Status: Patch Available  (was: Open)

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can remove 
> them from {{hadoop-ozone/tools/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Sandeep Nemuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-37:
---
Attachment: HDDS-37.001.patch

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-37.001.patch
>
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can remove 
> them from {{hadoop-ozone/tools/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-37:

Description: Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} 
are already defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can 
remove them from {{hadoop-ozone/tools/pom.xml}}  (was: {{)

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>
> Since {{hadoop-hdds-common}} and {{hadoop-hdds-server-scm}} are already 
> defined as dependencies in the parent pom {{hadoop-ozone/pom.xml}}, we can remove 
> them from {{hadoop-ozone/tools/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-37:

Summary: Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm 
from hadoop-ozone/tools/pom.xml  (was: Changing <dependencies> tag in hadoop-hdds 
& hadoop-ozone pom.xml to <dependencyManagement>)

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>
> The parent pom file of {{hadoop-hdds}} & {{hadoop-ozone}} has a 
> {{<dependencies>}} tag to manage the dependencies of sub-modules; this should be 
> managed using a {{<dependencyManagement>}} tag and not through {{<dependencies>}}.
> Files:
>  * hadoop-hdds/pom.xml
>  * hadoop-ozone/pom.xml
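
The distinction discussed above can be illustrated with a generic Maven sketch 
(illustrative only, not the actual Hadoop pom contents; artifact ids and the 
version property are placeholders): the parent pom pins versions under 
dependencyManagement, while each sub-module declares only the artifacts it uses.

```xml
<!-- Parent pom (e.g. hadoop-ozone/pom.xml): pin versions only; this
     does NOT add the dependency to every child module. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdds-common</artifactId>
      <version>${hdds.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Sub-module pom (e.g. hadoop-ozone/tools/pom.xml): declare usage
     without a version; the version is inherited from the parent's
     dependencyManagement section. -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdds-common</artifactId>
  </dependency>
</dependencies>
```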



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-37) Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from hadoop-ozone/tools/pom.xml

2018-05-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-37:

Description: {{  (was: The parent pom file of {{hadoop-hdds}} & 
{{hadoop-ozone}} has a {{<dependencies>}} tag to manage the dependencies of 
sub-modules; this should be managed using a {{<dependencyManagement>}} tag and 
not through {{<dependencies>}}

Files:
 * hadoop-hdds/pom.xml
 * hadoop-ozone/pom.xml)

> Remove dependency of hadoop-hdds-common and hadoop-hdds-server-scm from 
> hadoop-ozone/tools/pom.xml
> --
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>
> {{



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471181#comment-16471181
 ] 

genericqa commented on HDDS-42:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
77m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 35s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 28s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  6s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m  5s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Updated] (HDDS-16) Remove Pipeline from Datanode Container Protocol protobuf definition.

2018-05-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-16:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~msingh] for the contribution and all for the reviews. I've committed 
the patch to the trunk. 

> Remove Pipeline from Datanode Container Protocol protobuf definition.
> -
>
> Key: HDDS-16
> URL: https://issues.apache.org/jira/browse/HDDS-16
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native, Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-16.001.patch, HDDS-16.002.patch, HDDS-16.003.patch, 
> HDDS-16.004.patch
>
>
> The current Ozone code passes pipeline information to datanodes as well. 
> However, datanodes do not use this information.
> Hence Pipeline should be removed from ozone datanode commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-16) Remove Pipeline from Datanode Container Protocol protobuf definition.

2018-05-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-16:
---
Labels: reviewed  (was: )

> Remove Pipeline from Datanode Container Protocol protobuf definition.
> -
>
> Key: HDDS-16
> URL: https://issues.apache.org/jira/browse/HDDS-16
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native, Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-16.001.patch, HDDS-16.002.patch, HDDS-16.003.patch, 
> HDDS-16.004.patch
>
>
> The current Ozone code passes pipeline information to datanodes as well. 
> However, datanodes do not use this information.
> Hence Pipeline should be removed from ozone datanode commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-05-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu commented on HDFS-13515:
---

Can you log the remote address in case of exception?

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host is not logged, which makes 
> troubleshooting harder.
> NetUtils#connect should log the remote address for NoRouteToHostException.
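
The fix being requested can be sketched as follows (a hedged illustration, not 
the actual NetUtils patch; ConnectHelper and its methods are hypothetical): 
catch the NoRouteToHostException and rethrow it with the remote endpoint 
included in the message, so the log identifies the unreachable peer.

```java
import java.net.InetSocketAddress;
import java.net.NoRouteToHostException;

public class ConnectHelper {

  // Stand-in for the real socket connect; always fails here for the demo.
  private static void rawConnect(InetSocketAddress endpoint)
      throws NoRouteToHostException {
    throw new NoRouteToHostException("No route to host");
  }

  // Wrap the exception so the message names the remote endpoint,
  // preserving the original exception as the cause.
  public static void connect(InetSocketAddress endpoint)
      throws NoRouteToHostException {
    try {
      rawConnect(endpoint);
    } catch (NoRouteToHostException e) {
      NoRouteToHostException wrapped =
          new NoRouteToHostException("No route to host: " + endpoint);
      wrapped.initCause(e);
      throw wrapped;
    }
  }

  public static void main(String[] args) {
    try {
      connect(new InetSocketAddress("10.0.0.5", 8020));
    } catch (NoRouteToHostException e) {
      // The message now carries the remote address for troubleshooting.
      System.out.println(e.getMessage());
    }
  }
}
```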



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2018-05-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471159#comment-16471159
 ] 

Chen Liang commented on HDFS-13541:
---

Thought a bit more about passing additional fields of the connection to the SASL 
resolver. Passing the Server#connection object is unlikely to work because it is 
specific to ipc.Server, while the SASL resolver is more general; e.g., the DN side 
does not have an ipc.Server instance but still does SASL server-side resolution. I 
will explore alternative ways.

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: NameNode Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirement based on the location of client and the cluster. 
> Specifically, for clients from outside of the data center, it is required by 
> regulation that all traffic must be encrypted. But for clients within the 
> same data center, unencrypted connections are more desired to avoid the high 
> encryption overhead. 
> HADOOP-10221 introduced pluggable SASL resolver, based on which HADOOP-10335 
> introduced WhitelistBasedResolver which solves the same problem. However we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach: 
> run RPC on two ports on the NameNode, with the two ports enforcing 
> encrypted and unencrypted connections respectively, and subsequent 
> DataNode access simply following the same encrypted/unencrypted 
> behaviour*. Then, by blocking the unencrypted port on the datacenter 
> firewall, we can completely block unencrypted external access.
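
The port-based idea can be sketched minimally as follows (hypothetical class 
and method names; the real work builds on Hadoop's pluggable SASL resolver, 
whose API is not reproduced here): the server checks which local port a 
connection arrived on and requires privacy (encryption) only on the externally 
exposed port.

```java
import java.util.HashMap;
import java.util.Map;

public class PortBasedQopChooser {
  private final int encryptedPort;

  public PortBasedQopChooser(int encryptedPort) {
    this.encryptedPort = encryptedPort;
  }

  // Choose SASL properties based on the NameNode port that accepted the
  // connection: "auth-conf" enforces encryption, "auth" does not.
  // "javax.security.sasl.qop" is the standard SASL QOP property key.
  public Map<String, String> propsForLocalPort(int localPort) {
    Map<String, String> props = new HashMap<>();
    props.put("javax.security.sasl.qop",
        localPort == encryptedPort ? "auth-conf" : "auth");
    return props;
  }
}
```

Blocking the unencrypted port at the firewall then guarantees external clients 
can only reach the port whose resolver demands "auth-conf".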



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471154#comment-16471154
 ] 

Hanisha Koneru commented on HDFS-13544:
---

Thanks [~anu] and [~elgoiri] for the reviews.

The nameservice id is not currently passed to the Journal. We can pass it along, 
but a few operations do not specify the nameservice id - doUpgrade and 
doPreUpgrade. I think that should be OK.

I am fine with using either the nameservice id or the journal id, since it is 
only used in the logs for debugging purposes.

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to decode some of the log statements as to which Namespace it is 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  
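
The proposed improvement can be illustrated with a small sketch (names here are 
hypothetical, not the actual patch): tag each Journal log line with the journal 
or nameservice id so that lines from different namespaces sharing a JournalNode 
are distinguishable.

```java
public class JournalLog {
  private final String journalId;

  public JournalLog(String journalId) {
    this.journalId = journalId;
  }

  // Prefix every message with the journal id so a federated
  // JournalNode's log identifies which namespace each line belongs to.
  public String format(String msg) {
    return "Journal(" + journalId + "): " + msg;
  }
}
```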



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-16) Remove Pipeline from Datanode Container Protocol protobuf definition.

2018-05-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-16?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471148#comment-16471148
 ] 

Xiaoyu Yao commented on HDDS-16:


Thanks [~msingh] for working on this. +1 for the v4 patch. I will commit it 
shortly.

> Remove Pipeline from Datanode Container Protocol protobuf definition.
> -
>
> Key: HDDS-16
> URL: https://issues.apache.org/jira/browse/HDDS-16
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native, Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-16.001.patch, HDDS-16.002.patch, HDDS-16.003.patch, 
> HDDS-16.004.patch
>
>
> The current Ozone code passes pipeline information to datanodes as well. 
> However, datanodes do not use this information.
> Hence Pipeline should be removed from ozone datanode commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471147#comment-16471147
 ] 

Xiao Chen commented on HDFS-13539:
--

Thanks for the comment [~ajayydv]. Could you elaborate?

{code}
try {
  spy.read();
  fail("read should have failed");
} catch (IOException expected) {
  LOG.info("Exception caught", expected);
  GenericTestUtils.assertExceptionContains(msg, expected);
}
{code}
This checks that an exception is thrown, and the exception message is what we 
used to create the injected IOE.

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seem the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. (Because {{currentLocatedBlock.getLocations()}} 
> cannot be null - the {{LocatedBlock}} constructor checks {{locs}} and would 
> assign {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471139#comment-16471139
 ] 

Íñigo Goiri commented on HDFS-13544:


[^HDFS-13544.001.patch] looks good.
Is the nameservice id easily available? The journal id can be mapped, but it 
requires a level of indirection.

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to decode some of the log statements as to which Namespace it is 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471127#comment-16471127
 ] 

Anu Engineer commented on HDFS-13544:
-

+1, pending jenkins. I suggest that we apply this patch to 3.1.1, 3.0.3 and 
branch-2. It is a good improvement to have when we are looking at the logs.

 

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to decode some of the log statements as to which Namespace it is 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471108#comment-16471108
 ] 

Anu Engineer commented on HDDS-34:
--

[~bharatviswa] This patch no longer applies; can you please rebase against 
the latest trunk? Thanks in advance.

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, .container and .meta files are created.
> The .meta file stores the container file name and hash; this file is not required.
> This Jira is an attempt to clean up its usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13544:
--
Status: Patch Available  (was: Open)

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to tell from some of the log statements which Namespace they are 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  






[jira] [Updated] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13544:
--
Attachment: HDFS-13544.001.patch

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13544.001.patch
>
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to tell from some of the log statements which Namespace they are 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  






[jira] [Updated] (HDFS-13544) Improve logging for JournalNode in federated cluster

2018-05-10 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13544:
--
Summary: Improve logging for JournalNode in federated cluster  (was: 
Improve logging in JournalNode for federated cluster)

> Improve logging for JournalNode in federated cluster
> 
>
> Key: HDFS-13544
> URL: https://issues.apache.org/jira/browse/HDFS-13544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> In a federated cluster, when two namespaces utilize the same JournalSet, it is 
> difficult to tell from some of the log statements which Namespace they are 
> logging for. 
> For example, the following two log statements do not tell us which Namespace 
> the edit log belongs to.
> {code:java}
> INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
> for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
> lastWriterEpoch: 1 lastCommittedTxId: 10
> INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
> startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 
> endTxId: 10 isInProgress: true is not the right length{code}
> We should add the NameserviceID or the JournalID to appropriate JournalNode 
> logs to help with debugging.
>  






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471100#comment-16471100
 ] 

Xiaoyu Yao commented on HDDS-17:


[~anu], thanks for working on it. The patch looks good to me overall. Here are 
a few minor comments:

 

ContainerID.java

Line 41: "positive int" should be "positive long".

 

 

Node2ContainerMap.java

Line 70: Should we use the atomic APIs offered by ConcurrentHashMap, like 
putIfAbsent, instead of the synchronization? That way we can take full 
advantage of them for better throughput than a synchronized map.

 

Line 99: How do we plan to spare cycles for further report processing (like 
size/stats updates) without looping over the containers?

 

Line 155: same as Line 70.

 

Line 159: Should we return an immutable collection here?
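For the Line 70/155 point, a minimal sketch of what the lock-free version could look like; the class and method names here are illustrative, not the actual Node2ContainerMap code from the patch:

```java
import java.util.Collections;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a node-to-container map relying on
// ConcurrentHashMap's atomic operations instead of synchronized blocks.
public class NodeContainerMapSketch {
    private final ConcurrentHashMap<UUID, Set<Long>> nodeMap =
        new ConcurrentHashMap<>();

    // putIfAbsent is atomic: the node is registered exactly once with no
    // external lock. Returns false if the node was already present.
    public boolean insertNewDatanode(UUID nodeId, Set<Long> containers) {
        return nodeMap.putIfAbsent(nodeId, Set.copyOf(containers)) == null;
    }

    // In the spirit of the Line 159 comment: hand out an immutable view so
    // callers cannot mutate internal state.
    public Set<Long> getContainers(UUID nodeId) {
        Set<Long> containers = nodeMap.get(nodeId);
        return containers == null
            ? Collections.emptySet()
            : Collections.unmodifiableSet(containers);
    }
}
```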

 

 

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch
>
>
> The current SCM state map is maintained in nodeStateManager. This is the 
> first of several refactorings to split it into small, independent classes.






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471103#comment-16471103
 ] 

Anu Engineer commented on HDDS-34:
--

I will commit this shortly.

 

 

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container file and a .meta file are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471098#comment-16471098
 ] 

genericqa commented on HDDS-34:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
| 

[jira] [Commented] (HDDS-31) Fix TestSCMCli

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471061#comment-16471061
 ] 

Hudson commented on HDDS-31:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14164 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14164/])
HDDS-31. Fix TestSCMCli. Contributed by Lokesh Jain. (aengineer: rev 
48d0b548492a3fc0b072543be81b5e1b0ea1f278)
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListContainerHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CreateContainerHandler.java


> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-31.001.patch
>
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>
> [ERROR]   TestSCMCli.testListContainerCommand:406
> [ERROR] Errors:






[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471062#comment-16471062
 ] 

Ajay Kumar commented on HDFS-13539:
---

[~xiaochen] thanks for working on this. Patch LGTM. One suggestion I have is 
about the check in the test case: since we are not asserting on an exception 
thrown from the code, shall we check for the log message printed by this 
patch? i.e. "Found null currentLocatedBlock. pos=0, blockEnd=-1, fileLength=4"

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seem the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. (Because {{currentLocatedBlock.getLocations()}} 
> cannot be null - the {{LocatedBlock}} constructor checks {{locs}} and would 
> assign {{EMPTY_LOCS}} if it's null.)
> Original exception is masked by the NPE.
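A minimal sketch of the diagnostic the fix could emit instead of dereferencing the null block; the class and method names are hypothetical, not the actual HDFS-13539 patch:

```java
// Hypothetical sketch: build the message a guard could log when
// currentLocatedBlock is null, so reportCheckSumFailure is skipped and the
// original checksum exception is not masked by an NPE.
public class NullBlockGuardSketch {
    public static String describeNullBlock(long pos, long blockEnd,
                                           long fileLength) {
        return "Found null currentLocatedBlock. pos=" + pos
            + ", blockEnd=" + blockEnd + ", fileLength=" + fileLength;
    }
}
```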






[jira] [Created] (HDFS-13544) Improve logging in JournalNode for federated cluster

2018-05-10 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-13544:
-

 Summary: Improve logging in JournalNode for federated cluster
 Key: HDFS-13544
 URL: https://issues.apache.org/jira/browse/HDFS-13544
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


In a federated cluster, when two namespaces utilize the same JournalSet, it is 
difficult to tell from some of the log statements which Namespace they are 
logging for. 
For example, the following two log statements do not tell us which Namespace 
the edit log belongs to.
{code:java}
INFO  server.Journal (Journal.java:prepareRecovery(773)) - Prepared recovery 
for segment 1: segmentState { startTxId: 1 endTxId: 10 isInProgress: true } 
lastWriterEpoch: 1 lastCommittedTxId: 10

INFO  server.Journal (Journal.java:acceptRecovery(826)) - Synchronizing log 
startTxId: 1 endTxId: 11 isInProgress: true: old segment startTxId: 1 endTxId: 
10 isInProgress: true is not the right length{code}

We should add the NameserviceID or the JournalID to appropriate JournalNode 
logs to help with debugging.
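One shape such tagging could take, sketched with a hypothetical helper (not the actual HDFS-13544 patch; the prefix format is an assumption):

```java
// Hypothetical sketch: prefix each JournalNode log line with the journal
// (nameservice) id so statements from federated namespaces can be told apart.
public class JournalLogPrefixSketch {
    public static String tag(String journalId, String message) {
        return "[journal " + journalId + "] " + message;
    }
}
```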

 






[jira] [Updated] (HDDS-31) Fix TestSCMCli

2018-05-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-31:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for filing this issue. [~ljain] Thanks for the contribution. I 
have committed this to the trunk.

> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-31.001.patch
>
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>
> [ERROR]   TestSCMCli.testListContainerCommand:406
> [ERROR] Errors:






[jira] [Assigned] (HDDS-37) Changing tag in hadoop-hdds & hadoop-ozone pom.xml to

2018-05-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HDDS-37:
---

Assignee: Sandeep Nemuri  (was: Nanda kumar)

> Changing  tag in hadoop-hdds & hadoop-ozone pom.xml to 
> 
> -
>
> Key: HDDS-37
> URL: https://issues.apache.org/jira/browse/HDDS-37
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>
> The parent pom file of {{hadoop-hdds}} & {{hadoop-ozone}} has a 
> {{}} tag to manage the dependencies of sub-modules; this should be 
> managed using the {{}} tag and not through {{}}.
> Files:
>  * hadoop-hdds/pom.xml
>  * hadoop-ozone/pom.xml






[jira] [Commented] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471004#comment-16471004
 ] 

Tsz Wo Nicholas Sze commented on HDDS-42:
-

[~anu], thanks for the quick review and commit.

> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471000#comment-16471000
 ] 

Anu Engineer commented on HDDS-40:
--

Just so that you know, 

 bq. mvn install -Phdds,ozone-acceptance-test,dist -DskipTests
passed
{quote}
==
Acceptance
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test rest interface   | PASS |
--
Test ozone cli| PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance| PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==

{quote}

If you are fixing the README.md, you might want to change the mvn -Phdsl to 
-Phdds.


> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch
>
>
> According to the community vote, the Ozone/HDDS release cycle should be 
> independent of the Hadoop release cycle.
> To make this possible we need a separate Ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in a separate directory.)
> The hdds components and hdfs components can all be started from the bin. 
> *Proposed version*
> Create a separate hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. In follow-up JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)






[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470998#comment-16470998
 ] 

Anu Engineer commented on HDDS-40:
--

When I run from the command line,

{quote} ## Development

You can run manually the robot tests with `robot` cli. (See robotframework docs 
to install it.)

 1. Go to the `src/test/robotframework`
 2. Execute `robot -v basedir:${PWD}/../../.. -v VERSION:3.2.0-SNAPSHOT .`
{quote}

I seem to get the following failure.
{quote}
Robotframework.Acceptance.Ozone :: Smoke test to start cluster wit... | FAIL |
Suite setup failed:
Variable '${hadoopversion}' not found.

Also suite teardown failed:
Several failures occurred:

1) Variable '${hadoopversion}' not found.

2) Variable '${hddsversion}' not found.

7 critical tests, 0 passed, 7 failed
7 tests total, 0 passed, 7 failed

{quote}

Do I need to specify the HDDS version now? 

> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch
>
>
> According to the community vote, the Ozone/HDDS release cycle should be 
> independent of the Hadoop release cycle.
> To make this possible we need a separate Ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in a separate directory.)
> The hdds components and hdfs components can all be started from the bin. 
> *Proposed version*
> Create a separate hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. In follow-up JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)






[jira] [Commented] (HDDS-31) Fix TestSCMCli

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470987#comment-16470987
 ] 

genericqa commented on HDDS-31:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 53s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.container.common.impl.TestContainerDeletionChoosingPolicy |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDDS-30) Fix TestContainerSQLCli

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-30?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470968#comment-16470968
 ] 

Hudson commented on HDDS-30:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14162/])
HDDS-30. Fix TestContainerSQLCli. Contributed by Shashikant Banerjee. 
(aengineer: rev 7482963f1a250f1791a5164817b608dbf2556433)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java


> Fix TestContainerSQLCli
> ---
>
> Key: HDDS-30
> URL: https://issues.apache.org/jira/browse/HDDS-30
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-30.00.patch
>
>







[jira] [Commented] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470969#comment-16470969
 ] 

Hudson commented on HDDS-42:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14162/])
HDDS-42. Inconsistent module names and descriptions. Contributed by Tsz 
(aengineer: rev f8b540049dbc7916220d6fa95c025c5a854a31f7)
* (edit) hadoop-hdds/framework/pom.xml
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-ozone/acceptance-test/pom.xml
* (edit) hadoop-hdds/server-scm/pom.xml
* (edit) hadoop-hdds/tools/pom.xml
* (edit) hadoop-ozone/common/pom.xml
* (edit) hadoop-hdds/container-service/pom.xml
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-hdds/common/pom.xml
* (edit) hadoop-ozone/integration-test/pom.xml
* (edit) hadoop-ozone/ozone-manager/pom.xml
* (edit) hadoop-hdds/client/pom.xml


> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.
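One way to remove the inconsistency would be a fixed pattern applied to every module's pom.xml. The values below are illustrative only; the actual names and descriptions are chosen in the attached patch:

```xml
<!-- Hypothetical convention: every module name starts with "Apache Hadoop",
     the acronym is expanded once in the description, and title case is used
     consistently across hadoop-hdds/* and hadoop-ozone/* modules. -->
<name>Apache Hadoop HDDS Client</name>
<description>Apache Hadoop Distributed Data Store Client Library</description>
```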






[jira] [Commented] (HDDS-40) Separating packaging of Ozone/HDDS from the main Hadoop

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470965#comment-16470965
 ] 

Anu Engineer commented on HDDS-40:
--

+1, I will commit this now. There is a small search-and-replace error, which I 
will fix while committing. In {{docker-compose.yaml}}:
{quote} # to you under the Apache License, HDDS_VERSION 2.0 {quote}

it should be:
bq. # to you under the Apache License, Version 2.0 (the
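The corrupted line above is what a naive global token substitution produces when the version placeholder shares a word with the license header. A minimal sketch of the pitfall and a safer, explicitly delimited placeholder (the placeholder and image names are hypothetical, not the project's actual build tooling):

```python
import re

license_line = "# to you under the Apache License, Version 2.0 (the"

# Naive substitution: replacing the bare token "Version" also rewrites the
# license header, producing exactly the error quoted in the comment above.
naive = license_line.replace("Version", "HDDS_VERSION")

# Safer: substitute only an explicit, unambiguous placeholder such as
# "${hdds.version}", so ordinary prose is never touched.
template = "image: apache/hadoop-runner:${hdds.version}"
safe = re.sub(r"\$\{hdds\.version\}", "0.2.1", template)

print(naive)
print(safe)
```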


> Separating packaging of Ozone/HDDS from the main Hadoop
> ---
>
> Key: HDDS-40
> URL: https://issues.apache.org/jira/browse/HDDS-40
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-40.001.patch
>
>
> According to the community vote, the Ozone/HDDS release cycle should be 
> independent of the Hadoop release cycle.
> To make this possible we need a separate ozone package.
> *The current state:*
> We have just one output tar/directory under hadoop-dist (hadoop-3.2.0). It 
> includes all the hdfs/yarn/mapreduce/hdds binaries and libraries. (Jar files 
> are put in a separate directory.)
> The hdds components and hdfs components can all be started from the bin. 
> *Proposed version*
> Create a separate hadoop-dist/ozone-2.1.0 which contains only the hdfs AND 
> hdds components. Both the hdfs namenode and the hdds datanode/scm/ksm could be 
> started from the ozone-2.1.0 package. 
> Hdds packages would be removed from the original hadoop-3.2.0 directory.
> This is a relatively small change. In further JIRAs we need to:
>  * Create a shaded datanode plugin which could be used with any existing 
> hadoop cluster
>  * Use a standalone ObjectStore/Ozone server instead of the Namenode+Datanode 
> plugin.
>  * Add test cases for both the ozone-only and the mixed clusters (ozone + 
> hdfs)
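The proposed split can be sketched as a simple partition of dist components; the component names and tags below are assumed for illustration, while the real module list lives under hadoop-dist:

```python
# Hypothetical component tags; the actual layout is decided by the patch.
components = {
    "hdfs-namenode": {"hdfs"},
    "yarn-resourcemanager": {"yarn"},
    "mapreduce-jobclient": {"mapreduce"},
    "hdds-datanode-plugin": {"hdds"},
    "hdds-scm": {"hdds"},
    "hdds-ksm": {"hdds"},
}

# ozone-2.1.0 carries only hdfs + hdds; hadoop-3.2.0 drops the hdds pieces.
ozone_dist = sorted(c for c, t in components.items() if t & {"hdfs", "hdds"})
hadoop_dist = sorted(c for c, t in components.items() if not (t & {"hdds"}))

print(ozone_dist)
print(hadoop_dist)
```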






[jira] [Commented] (HDDS-31) Fix TestSCMCli

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470952#comment-16470952
 ] 

Anu Engineer commented on HDDS-31:
--

+1, pending Jenkins.

> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-31.001.patch
>
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>
> [ERROR]   TestSCMCli.testListContainerCommand:406
> [ERROR] Errors:






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470939#comment-16470939
 ] 

Anu Engineer commented on HDDS-34:
--

Had an off-line chat with [~bharatviswa]; we will add the hash back as part of 
the container-close path. I am OK with removing it for now.

+1, pending Jenkins.

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Updated] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-42:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~szetszwo] Thanks for the contribution. I have committed this to the trunk.

> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Commented] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470932#comment-16470932
 ] 

Anu Engineer commented on HDDS-42:
--

+1, Thanks for the fixes. I will commit this now. I appreciate you taking care 
of this.

 

> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Commented] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470929#comment-16470929
 ] 

Tsz Wo Nicholas Sze commented on HDDS-42:
-

o42_20180510.patch: proposed changes


> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Updated] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-42:

Status: Patch Available  (was: Open)

> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Updated] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-42:

Attachment: o42_20180510.patch

> Inconsistent module names and descriptions
> --
>
> Key: HDDS-42
> URL: https://issues.apache.org/jira/browse/HDDS-42
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o42_20180510.patch
>
>
> The hdds/ozone module names and descriptions are inconsistent:
> - Missing "Hadoop" in some cases.
> - Inconsistent use of acronyms.
> - Inconsistent capitalization.






[jira] [Updated] (HDDS-30) Fix TestContainerSQLCli

2018-05-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-30:
-
   Resolution: Fixed
Fix Version/s: 0.21
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for the review. [~shashikant] Thanks for the contribution. I 
have committed this patch to trunk.

> Fix TestContainerSQLCli
> ---
>
> Key: HDDS-30
> URL: https://issues.apache.org/jira/browse/HDDS-30
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.21
>
> Attachments: HDDS-30.00.patch
>
>







[jira] [Commented] (HDDS-30) Fix TestContainerSQLCli

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-30?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470910#comment-16470910
 ] 

Anu Engineer commented on HDDS-30:
--

I will commit this now.

 

> Fix TestContainerSQLCli
> ---
>
> Key: HDDS-30
> URL: https://issues.apache.org/jira/browse/HDDS-30
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-30.00.patch
>
>







[jira] [Created] (HDDS-42) Inconsistent module names and descriptions

2018-05-10 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-42:
---

 Summary: Inconsistent module names and descriptions
 Key: HDDS-42
 URL: https://issues.apache.org/jira/browse/HDDS-42
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


The hdds/ozone module names and descriptions are inconsistent:
- Missing "Hadoop" in some cases.
- Inconsistent use of acronyms.
- Inconsistent capitalization.






[jira] [Commented] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470907#comment-16470907
 ] 

Anu Engineer commented on HDDS-34:
--

Looks good to me; one small issue that I did not understand:

Why are we removing this line? {{optional string hash = 5;}}
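If that {{hash}} field is dropped from the protobuf definition, its tag number should also be reserved so a later edit cannot silently reuse it in a wire-incompatible way. A sketch under assumed names (the actual message in the patch may differ):

```proto
message ContainerData {
  optional string name = 1;
  // Field 5 formerly carried the container hash; reserving the tag (and the
  // name) prevents accidental reuse by future fields, which would break
  // compatibility with data written by older datanodes.
  reserved 5;
  reserved "hash";
}
```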

 

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-10 Thread Mohammad Arshad (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470905#comment-16470905
 ] 

Mohammad Arshad commented on HDFS-13443:


Re-based the patch.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is incorrect.
> To update the mount table cache, we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, on the admin server side, will perform the following 
> sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user
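The six-step sequence above can be modeled with a minimal sketch; the class and method names are illustrative, not the actual Router/MountTableManager API:

```python
class Router:
    """Toy router holding a private copy of the mount table cache."""
    def __init__(self, name):
        self.name = name
        self.cache = ()

    def refresh(self, state_store):
        # Reload this router's cache from the shared state store.
        self.cache = tuple(state_store)

def add_mount_entry(entry, state_store, admin, peers):
    state_store.append(entry)      # step 2: persist in the state store
    for peer in peers:             # steps 3-4: refresh every other router
        peer.refresh(state_store)
    admin.refresh(state_store)     # step 5: refresh the admin's own cache
    return "OK"                    # step 6: respond to the user

store = ["/data -> ns0"]
r1, r2, r3 = Router("R1"), Router("R2"), Router("R3")
for r in (r1, r2, r3):
    r.refresh(store)
status = add_mount_entry("/logs -> ns1", store, admin=r1, peers=[r2, r3])
print(status, r1.cache == r2.cache == r3.cache)
```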






[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-10 Thread Mohammad Arshad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HDFS-13443:
---
Attachment: HDFS-13443.009.patch

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is incorrect.
> To update the mount table cache, we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, on the admin server side, will perform the following 
> sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user






[jira] [Updated] (HDDS-34) Remove .meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Summary: Remove .meta file during creation of container  (was: Remove meta 
file during creation of container)

> Remove .meta file during creation of container
> --
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Updated] (HDDS-34) Remove meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Attachment: HDDS-34.001.patch

> Remove meta file during creation of container
> -
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Updated] (HDDS-34) Remove meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Status: Patch Available  (was: Open)

> Remove meta file during creation of container
> -
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Updated] (HDDS-34) Remove meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Fix Version/s: 0.2.1

> Remove meta file during creation of container
> -
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Updated] (HDDS-34) Remove meta file during creation of container

2018-05-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-34:
---
Component/s: Ozone Datanode

> Remove meta file during creation of container
> -
>
> Key: HDDS-34
> URL: https://issues.apache.org/jira/browse/HDDS-34
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-34.001.patch
>
>
> During container creation, a .container and .meta files are created.
> .meta file stores container file name and hash. This file is not required.
> This Jira is an attempt to clean up the usage of this.






[jira] [Commented] (HDDS-30) Fix TestContainerSQLCli

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-30?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470890#comment-16470890
 ] 

genericqa commented on HDDS-30:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 56s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.container.common.impl.TestContainerDeletionChoosingPolicy |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-30 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922816/HDDS-30.00.patch |
| Optional 

[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-05-10 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-13399:

Attachment: HDFS-13399-HDFS-12943.007.patch

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, 
> HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch, 
> HDFS-13399-HDFS-12943.007.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.
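The static-to-per-instance change can be illustrated with a minimal sketch; the names are illustrative stand-ins, not Hadoop's actual Client/DFSClient classes:

```python
class StaticClient:
    # Old pattern: one class-level context shared by every client instance,
    # so two clients talking to different namespaces would clash.
    alignment_context = None

class InstanceClient:
    # New pattern: each client carries its own context and passes it down.
    def __init__(self, alignment_context):
        self.alignment_context = alignment_context

    def make_call(self):
        # The per-instance context rides along with each proxy call.
        return {"ctx": self.alignment_context}

StaticClient.alignment_context = "ns-A"
a, b = StaticClient(), StaticClient()
shared = a.alignment_context is b.alignment_context  # class attribute: shared

c = InstanceClient("ns-A")
d = InstanceClient("ns-B")
print(shared, c.make_call()["ctx"], d.make_call()["ctx"])
```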





