[jira] [Comment Edited] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550271#comment-16550271
 ] 

Takanobu Asanuma edited comment on HDFS-13743 at 7/20/18 5:52 AM:
--

Thanks for the review, [~linyiqun]. Uploaded the 5th patch adding the log.
{quote}what will be happening if we return empty string (not null) as the 
default service?
{quote}
The {{NullPointerException}} I faced doesn't occur when using an empty string. 
In that case, the result is:
{noformat}
$ hadoop fs -ls hdfs://localhost:/
ls: Cannot locate a registered namenode for  from null
{noformat}
This error is generated from {{RouterRpcClient#getNamenodesForNameservice}}. I 
think this is the right behavior, though the message looks odd with an empty 
string. (BTW, there is another problem: {{routerId}} in {{RouterRpcClient}} is 
always null. I filed it as HDFS-13750.)
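
For context, a minimal sketch of the two code paths discussed above (illustrative names only, not the actual RBF classes): {{TreeSet#contains}} throws on a null default nameservice because natural ordering calls {{compareTo}} on the argument, while an empty string is simply not found, so the request only fails later in {{RouterRpcClient#getNamenodesForNameservice}} with the message quoted above.
{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

// Sketch only; class, field and method names are illustrative.
public class DefaultNameserviceSketch {

  private final TreeSet<String> disabledNameservices = new TreeSet<>();
  private final Map<String, List<String>> namenodesByNameservice = new TreeMap<>();
  private final String routerId = null; // always null today, see HDFS-13750

  // Roughly the first check hit by RouterRpcServer#getLocationsForPath:
  // contains(null) throws NullPointerException (TreeMap.getEntry calls
  // compareTo on the key), whereas contains("") just returns false.
  void checkNotDisabled(String defaultNsId) throws IOException {
    if (disabledNameservices.contains(defaultNsId)) {
      throw new IOException("Nameservice is disabled: " + defaultNsId);
    }
  }

  // Roughly the later failure in RouterRpcClient#getNamenodesForNameservice:
  // an empty nameservice id resolves to no registered namenodes, so the
  // request fails with the "Cannot locate a registered namenode" message.
  List<String> getNamenodesForNameservice(String nsId) throws IOException {
    List<String> namenodes = namenodesByNameservice.get(nsId);
    if (namenodes == null || namenodes.isEmpty()) {
      throw new IOException("Cannot locate a registered namenode for " + nsId
          + " from " + routerId);
    }
    return namenodes;
  }
}
{code}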


was (Author: tasanuma0829):
Thanks for the review, [~linyiqun]. Uploaded 5th patch adding the log.

bq. what will be happening if we return empty string (not null) as the default 
service? 

{{NullPointerException}} which I faced doesn't occur if using empty string. 
Then, the result is

{noformat}
$ hadoop fs -ls hdfs://localhost:/
ls: Cannot locate a registered namenode for  from null
{noformat}

This error is generated from {{RouterRpcClient#getNamenodesForNameservice}}. I 
think this is right behavior though the message is odd with empty string. (BTW, 
there is another problem that {{routerId}} in {{RouterRpcClient}} is always 
null. I will file it in another jira.)

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch, HDFS-13743.5.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13750) RBF: Router ID in RouterRpcClient is always null

2018-07-19 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13750:
---

 Summary: RBF: Router ID in RouterRpcClient is always null
 Key: HDFS-13750
 URL: https://issues.apache.org/jira/browse/HDFS-13750
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550271#comment-16550271
 ] 

Takanobu Asanuma commented on HDFS-13743:
-

Thanks for the review, [~linyiqun]. Uploaded the 5th patch adding the log.

bq. what will be happening if we return empty string (not null) as the default 
service? 

The {{NullPointerException}} I faced doesn't occur when using an empty string. 
In that case, the result is:

{noformat}
$ hadoop fs -ls hdfs://localhost:/
ls: Cannot locate a registered namenode for  from null
{noformat}

This error is generated from {{RouterRpcClient#getNamenodesForNameservice}}. I 
think this is the right behavior, though the message looks odd with an empty 
string. (BTW, there is another problem: {{routerId}} in {{RouterRpcClient}} is 
always null. I will file it in another jira.)

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch, HDFS-13743.5.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-239:
---
Attachment: HDDS-239.006.patch

> Add Pipeline StateManager to track and transition pipeline states
> -
>
> Key: HDDS-239
> URL: https://issues.apache.org/jira/browse/HDDS-239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-239.001.patch, HDDS-239.002.patch, 
> HDDS-239.003.patch, HDDS-239.004.patch, HDDS-239.005.patch, HDDS-239.006.patch
>
>
> With the addition of pipeline recovery in Ozone, pipeline failures need to be 
> handled in both the Ozone client and SCM. This jira adds a pipeline 
> StateManager to manage pipeline state transitions.
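
For reference, a minimal sketch of a pipeline state manager built around an explicit transition table; the states and allowed transitions below are assumptions for illustration, not the exact set used by SCM.
{code}
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Sketch only: validate pipeline state transitions against a fixed table.
public final class PipelineStateManagerSketch {

  enum PipelineState { ALLOCATED, OPEN, CLOSING, CLOSED }

  private final Map<PipelineState, Set<PipelineState>> allowed =
      new EnumMap<>(PipelineState.class);

  PipelineStateManagerSketch() {
    allowed.put(PipelineState.ALLOCATED,
        EnumSet.of(PipelineState.OPEN, PipelineState.CLOSED));
    allowed.put(PipelineState.OPEN,
        EnumSet.of(PipelineState.CLOSING, PipelineState.CLOSED));
    allowed.put(PipelineState.CLOSING, EnumSet.of(PipelineState.CLOSED));
    allowed.put(PipelineState.CLOSED, EnumSet.noneOf(PipelineState.class));
  }

  // Reject transitions that are not in the table instead of silently moving
  // a pipeline into an inconsistent state.
  PipelineState transition(PipelineState from, PipelineState to) {
    if (!allowed.get(from).contains(to)) {
      throw new IllegalStateException(
          "Invalid pipeline transition " + from + " -> " + to);
    }
    return to;
  }
}
{code}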



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550268#comment-16550268
 ] 

Bharat Viswanadham commented on HDDS-249:
-

Hi [~hanishakoneru]

Thanks for the review.

Addressed your review comments in patch v05.

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch, HDDS-249.04.patch, HDDS-249.05.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on a datanode, fail that volume.
>  # Validate the SCM ID response from the SCM.
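
A minimal sketch of the two checks described above, under assumed names (not the actual HDDS datanode classes): fail a volume that already contains more than one SCM directory, and reject a version response whose SCM ID does not match the directory already on disk.
{code}
import java.io.File;
import java.io.IOException;

// Sketch only; directory layout and class names are assumptions.
public final class ScmIdCheckSketch {

  // 1. A volume should hold at most one SCM directory; if several exist we
  //    cannot tell which one is valid, so the volume is failed.
  static File getSingleScmDir(File hddsVolumeRoot) throws IOException {
    File[] scmDirs = hddsVolumeRoot.listFiles(File::isDirectory);
    if (scmDirs == null || scmDirs.length == 0) {
      return null; // fresh volume, created with the SCM ID after version request
    }
    if (scmDirs.length > 1) {
      throw new IOException("Volume " + hddsVolumeRoot
          + " contains multiple SCM directories; failing the volume");
    }
    return scmDirs[0];
  }

  // 2. The SCM ID in the version response must match what is already on disk.
  static void validateScmId(String scmIdFromVersionResponse, File existingScmDir)
      throws IOException {
    if (existingScmDir != null
        && !existingScmDir.getName().equals(scmIdFromVersionResponse)) {
      throw new IOException("SCM ID mismatch: on-disk " + existingScmDir.getName()
          + " vs version response " + scmIdFromVersionResponse);
    }
  }
}
{code}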



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13743:

Attachment: HDFS-13743.5.patch

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch, HDFS-13743.5.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-249:

Attachment: HDDS-249.05.patch

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch, HDDS-249.04.patch, HDDS-249.05.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on a datanode, fail that volume.
>  # Validate the SCM ID response from the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550263#comment-16550263
 ] 

Xiao Chen commented on HDFS-13622:
--

Thanks for revving, Shweta. As checked offline, it's not possible to hit the 
null parent path for mkdir.

We're pretty close. Pending items:
 * There is still 1 unnecessary space change. 
 * "{{-mkdir returned there is No file or directory like testChild}}" still 
seems a bit unclear to me. Can we change it to "{{-mkdir returned No file or 
directory but has testChild in the path}}" ?

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist...
> The correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}
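
A minimal sketch of the reporting logic being asked for, using the public {{FileSystem}} API (the helper name is hypothetical, not the actual FsShell change): walk up from the requested directory and report the shallowest missing ancestor, so the error names {{/nonexistent}} rather than {{/nonexistent/newdir}}.
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only; the actual fix in the mkdir command may differ.
public final class MkdirErrorSketch {

  // Returns the shallowest ancestor of `dir` that does not exist, which is
  // the path the "No such file or directory" message should mention.
  static Path firstMissingAncestor(FileSystem fs, Path dir) throws IOException {
    Path missing = dir;
    for (Path p = dir.getParent(); p != null && !fs.exists(p); p = p.getParent()) {
      missing = p;
    }
    return missing;
  }
}
{code}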



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13622:
-
Status: Patch Available  (was: Open)

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist...
> The correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550248#comment-16550248
 ] 

genericqa commented on HDDS-239:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdds_server-scm generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 22s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestBlockDeletingService |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2018-07-19 Thread lindongdong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550192#comment-16550192
 ] 

lindongdong commented on HDFS-11112:


[~vinayrpet], I totally agree with you. 
{panel:title=the opinion}
I agree that this will avoid accidental deletes.
But still it should be able to format on demand. i.e. When the answer to prompt 
is 'yes' during reformat. Similar to Namenode dir format.
Namenode calls the JournalNode format() only after confirmation in the prompt 
(or "-force") was mentioned.
Ideally, we should be passing on the confirmation of prompt via RPC to 
JournalNode as well.
{panel}


> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11112.001.patch, HDFS-11112.002.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
>     setStorageInfo(nsInfo);
>     ...
>     unlockAll();
>     sd.clearDirectory();
>     writeProperties(sd);
>     createPaxosDir();
>     analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.
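
A minimal sketch (assumed method and flag names, not the committed JNStorage change) of the guard being discussed: refuse to clear a non-empty storage directory unless the format request carries the reformat confirmation, mirroring {{namenode -format -nonInteractive}}.
{code}
import java.io.File;
import java.io.IOException;

// Sketch only; the real fix belongs in JNStorage#format and the format RPC.
public final class JournalFormatGuardSketch {

  // Called before sd.clearDirectory(): reject a non-empty directory unless
  // the client explicitly confirmed the reformat (prompt answer or -force).
  static void checkCanFormat(File storageDir, boolean reformatConfirmed)
      throws IOException {
    String[] contents = storageDir.list();
    boolean empty = contents == null || contents.length == 0;
    if (!empty && !reformatConfirmed) {
      throw new IOException("Refusing to format non-empty journal directory "
          + storageDir + " without confirmation");
    }
  }
}
{code}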



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550157#comment-16550157
 ] 

Yiqun Lin edited comment on HDFS-13743 at 7/20/18 3:16 AM:
---

The latest patch almost looks good; one minor comment:
 Would you mind adding the following log in {{if (defaultNameService == null) 
{...}}}? It will let us know which config is really being used.
{noformat}
LOG.warn("{} and {} is not set. Fallback to {} as the default name service.",
  DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
{noformat}
Otherwise it looks good to me.

One more question from me: what will be happening if we return empty string 
(not null) as the default service? [~tasanuma0829], would you mind verifying 
this as you mentioned in the description?
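
As a reference for where the suggested warning would sit, a sketch of the initialization fallback (the config key constants are the real HDFS/RBF keys; the surrounding fallback logic is assumed from this discussion, not copied from the patch):
{code}
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only; MountTableResolver's actual initialization may differ.
public final class DefaultNameserviceInitSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DefaultNameserviceInitSketch.class);

  static final String DFS_ROUTER_DEFAULT_NAMESERVICE =
      "dfs.federation.router.default.nameserviceId";
  static final String DFS_NAMESERVICE_ID = "dfs.nameservice.id";
  static final String DFS_NAMESERVICES = "dfs.nameservices";

  static String resolveDefaultNameservice(Configuration conf) {
    String defaultNameService = conf.get(DFS_ROUTER_DEFAULT_NAMESERVICE);
    if (defaultNameService == null) {
      defaultNameService = conf.get(DFS_NAMESERVICE_ID);
    }
    if (defaultNameService == null) {
      // The warning suggested above: tell the operator which keys were
      // consulted before falling back to dfs.nameservices.
      LOG.warn("{} and {} is not set. Fallback to {} as the default name service.",
          DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
      Collection<String> nsIds = conf.getTrimmedStringCollection(DFS_NAMESERVICES);
      if (!nsIds.isEmpty()) {
        defaultNameService = nsIds.iterator().next();
      }
    }
    return defaultNameService;
  }
}
{code}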


was (Author: linyiqun):
The latest patch almost looks good, one minor comment:
 Would you mind adding following log in {{if (defaultNameService == null) 
{...}}}? It will let us know which config is real be used.
{noformat}
LOG.warn("{} and {} is not set. Fallback to {} as the default name service.",
  DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
{noformat}
Other looks good to me.

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550157#comment-16550157
 ] 

Yiqun Lin commented on HDFS-13743:
--

The latest patch almost looks good; one minor comment:
 Would you mind adding the following log in {{if (defaultNameService == null) 
{...}}}? It will let us know which config is really being used.
{noformat}
LOG.warn("{} and {} is not set. Fallback to {} as the default name service.",
  DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
{noformat}
Otherwise it looks good to me.

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-239:
---
Attachment: HDDS-239.005.patch

> Add Pipeline StateManager to track and transition pipeline states
> -
>
> Key: HDDS-239
> URL: https://issues.apache.org/jira/browse/HDDS-239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-239.001.patch, HDDS-239.002.patch, 
> HDDS-239.003.patch, HDDS-239.004.patch, HDDS-239.005.patch
>
>
> With the addition of pipeline recovery in Ozone, pipeline failures need to be 
> handled in both the Ozone client and SCM. This jira adds a pipeline 
> StateManager to manage pipeline state transitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550133#comment-16550133
 ] 

genericqa commented on HDFS-13743:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
19s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13743 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932348/HDFS-13743.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 39696fc893da 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6873df |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24617/testReport/ |
| Max. process+thread count | 1413 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24617/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> 

[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550119#comment-16550119
 ] 

genericqa commented on HDFS-13448:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931168/HDFS-13448.14.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux ec79c31d734f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 

[jira] [Commented] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550109#comment-16550109
 ] 

genericqa commented on HDDS-271:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdds_container-service generated 2 new + 2 
unchanged - 0 fixed = 4 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-271 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932345/HDDS-271.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fc88b59a2f6d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6873df |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/561/artifact/out/diff-javadoc-javadoc-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/561/testReport/ |
| Max. process+thread count | 311 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/561/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550097#comment-16550097
 ] 

Íñigo Goiri commented on HDFS-13743:


 [^HDFS-13743.4.patch] LGTM.
+1 from my side but let's wait for Yetus and [~linyiqun].

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550081#comment-16550081
 ] 

Takanobu Asanuma commented on HDFS-13743:
-

Thanks for the review, [~elgoiri]. Uploaded the 4th patch fixing it.

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13743:

Attachment: HDFS-13743.4.patch

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch, HDFS-13743.4.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and no other 
> default name service can be found, clients can't submit requests to the router 
> because of a {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-269) Refactor IdentifiableEventPayload to use a long ID

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550078#comment-16550078
 ] 

genericqa commented on HDDS-269:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932337/HDDS-269.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4217d3f52969 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6873df |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/560/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/framework 

[jira] [Commented] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550072#comment-16550072
 ] 

Bharat Viswanadham commented on HDDS-271:
-

In this patch I have just added the new Iterator classes; further jiras will 
integrate them into ContainerData.
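
For illustration only, a minimal sketch of what such a block iterator could look 
like; the class and field names below are hypothetical and not taken from the 
attached patch. A real iterator would read block metadata from the container's 
metadata store instead of fabricating ids.
{code:java}
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Queue;

public class BlockIteratorSketch {

  /** Placeholder for a block metadata record. */
  static final class BlockData {
    final long localId;
    BlockData(long localId) {
      this.localId = localId;
    }
  }

  /** Iterates the blocks of one container, loading them lazily in batches. */
  static final class ContainerBlockIterator implements Iterator<BlockData> {
    private final Queue<BlockData> buffer = new ArrayDeque<>();
    private final long totalBlocks;
    private long nextLocalId = 0;

    ContainerBlockIterator(long totalBlocks) {
      this.totalBlocks = totalBlocks;
    }

    @Override
    public boolean hasNext() {
      if (buffer.isEmpty() && nextLocalId < totalBlocks) {
        loadNextBatch();
      }
      return !buffer.isEmpty();
    }

    @Override
    public BlockData next() {
      if (!hasNext()) {
        throw new NoSuchElementException("No more blocks in this container");
      }
      return buffer.poll();
    }

    // A real implementation would scan the container's metadata store here;
    // the sketch fabricates sequential block ids to stay self-contained.
    private void loadNextBatch() {
      for (int i = 0; i < 100 && nextLocalId < totalBlocks; i++) {
        buffer.add(new BlockData(nextLocalId++));
      }
    }
  }

  public static void main(String[] args) {
    ContainerBlockIterator it = new ContainerBlockIterator(250);
    long scanned = 0;
    while (it.hasNext()) {
      it.next();
      scanned++;
    }
    System.out.println("Scanned " + scanned + " blocks"); // Scanned 250 blocks
  }
}
{code}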

> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-271.00.patch
>
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-271:

Fix Version/s: 0.2.1

> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-271.00.patch
>
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-271:

Status: Patch Available  (was: In Progress)

> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-271.00.patch
>
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-271:

Attachment: HDDS-271.00.patch

> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-271.00.patch
>
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-271 started by Bharat Viswanadham.
---
> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-271:
---

 Summary: Create a block iterator to iterate blocks in a container
 Key: HDDS-271
 URL: https://issues.apache.org/jira/browse/HDDS-271
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham


Create a block iterator to scan all blocks in a container.

This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-271:
---

Assignee: Bharat Viswanadham

> Create a block iterator to iterate blocks in a container
> 
>
> Key: HDDS-271
> URL: https://issues.apache.org/jira/browse/HDDS-271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Create a block iterator to scan all blocks in a container.
> This one will be useful during implementation of container scanner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-07-19 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550021#comment-16550021
 ] 

Daniel Templeton commented on HDFS-13448:
-

OK, the build server is back up.  A bunch of tests failed, but they don't look 
related.  I just kicked off another run.  When it's done, I'll compare the results 
to see if there are any persistent failures.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.10.patch, HDFS-13448.11.patch, 
> HDFS-13448.12.patch, HDFS-13448.13.patch, HDFS-13448.14.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
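
For reference, a minimal sketch of how a client already opts out of a local first 
replica with {{CreateFlag.NO_LOCAL_WRITE}}; the proposed flag would be passed the 
same way. The path, buffer size, replication and block size below are placeholder 
values, not taken from the patches.
{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NoLocalWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    try (FSDataOutputStream out = fs.create(
        new Path("/tmp/flume-event.log"),          // placeholder path
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE),
        4096,                 // buffer size
        (short) 3,            // replication factor
        128L * 1024 * 1024,   // block size
        null)) {              // no progress callback
      out.writeBytes("first block replica should avoid the local DataNode\n");
    }
  }
}
{code}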



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550018#comment-16550018
 ] 

genericqa commented on HDDS-266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdds/container-service generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 52s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Found reliance on default encoding in new 
org.apache.hadoop.ozone.container.common.impl.ContainerData(ContainerProtos$ContainerType,
 long, int, int):in new 
org.apache.hadoop.ozone.container.common.impl.ContainerData(ContainerProtos$ContainerType,
 

[jira] [Updated] (HDDS-269) Refactor IdentifiableEventPayload to use a long ID

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-269:

Status: Patch Available  (was: Open)

> Refactor IdentifiableEventPayload to use a long ID
> --
>
> Key: HDDS-269
> URL: https://issues.apache.org/jira/browse/HDDS-269
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-269.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13622:
--
Attachment: HDFS-13622.04.patch

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist; the 
> correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}
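
For illustration only (this is not the attached patch), one way to compute the 
path the error message should name is to walk up from the requested directory to 
the first ancestor that does not exist:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FirstMissingAncestor {

  /** Returns the shallowest ancestor of dir (or dir itself) that is missing. */
  static Path firstMissingAncestor(FileSystem fs, Path dir) throws IOException {
    Path missing = dir;
    Path parent = dir.getParent();
    while (parent != null && !fs.exists(parent)) {
      missing = parent;
      parent = parent.getParent();
    }
    return missing;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/nonexistent/newdir");
    if (!fs.exists(target.getParent())) {
      // Reports `/nonexistent' rather than `/nonexistent/newdir'.
      System.out.println("mkdir: `" + firstMissingAncestor(fs, target)
          + "': No such file or directory");
    }
  }
}
{code}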



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550008#comment-16550008
 ] 

Shweta commented on HDFS-13622:
---

Hi [~xiaochen],

Thanks for your suggestions. I have updated the code as suggested. Please 
review. 

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist; the 
> correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-209) createVolume command throws error when user is not present locally but creates the volume

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-209.
-
Resolution: Duplicate

This is a dup of HDDS-138

> createVolume command throws error when user is not present locally but 
> creates the volume
> -
>
> Key: HDDS-209
> URL: https://issues.apache.org/jira/browse/HDDS-209
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> user "test_user3" does not exist locally. 
> When -createVolume command is ran for the user "test_user3", it throws error 
> on standard output but successfully creates the volume.
> The exit code for the command execution is 0.
>  
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -createVolume /testvolume121 -user test_user3
> 2018-07-02 06:01:37,020 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:01:37,605 WARN security.ShellBasedUnixGroupsMapping: unable to 
> return groups for user test_user3
> PartialGroupNameException The user name 'test_user3' is not found. id: 
> test_user3: no such user
> id: test_user3: no such user
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>  at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>  at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>  at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>  at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>  at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>  at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>  at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>  at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>  at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>  at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>  at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>  at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>  at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1547)
>  at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1535)
>  at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>  at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>  at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>  at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-07-02 06:01:37,611 [main] INFO - Creating Volume: testvolume121, with 
> test_user3 as owner and quota set to 1152921504606846976 bytes.
> {noformat}
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume / -user test_user3
> 2018-07-02 06:02:20,385 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "test_user3"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "testvolume121",
>  "createdOn" : "Thu, 05 Jun +50470 19:07:00 GMT",
>  "createdBy" : "test_user3"
> } ]
> {noformat}
> Expectation :
> --
> The error stack should not be printed on standard output if the volume is 
> successfully created for a non-existing user.

[jira] [Updated] (HDDS-188) TestKSMMetrcis should not use the deprecated WhiteBox class

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-188:

Labels: newbie  (was: )

> TestKSMMetrcis should not use the deprecated WhiteBox class
> ---
>
> Key: HDDS-188
> URL: https://issues.apache.org/jira/browse/HDDS-188
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> TestKSMMetrcis (also needs to be renamed) should stop using 
> {{org.apache.hadoop.test.Whitebox}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-174) Shell error messages are often cryptic

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-174:

Labels: newbie  (was: )

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Critical
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}
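
As a sketch of one possible direction (the error codes other than INTERNAL_ERROR 
and all the wording below are hypothetical, not an actual Ozone API), the shell 
could map internal result codes to messages that tell the user what to check:
{code:java}
import java.util.HashMap;
import java.util.Map;

public class FriendlyShellErrors {
  private static final Map<String, String> HINTS = new HashMap<>();
  static {
    HINTS.put("INTERNAL_ERROR",
        "The server hit an unexpected error; check the Ozone Manager log for details.");
    HINTS.put("VOLUME_NOT_FOUND",
        "The volume in the key path does not exist; create it with -createVolume first.");
  }

  /** Builds a message that names the operation, the code, and a hint. */
  static String friendly(String operation, String code) {
    String hint = HINTS.getOrDefault(code, "Unrecognized error code: " + code);
    return operation + " failed (" + code + "): " + hint;
  }

  public static void main(String[] args) {
    System.out.println(friendly("Create key", "INTERNAL_ERROR"));
  }
}
{code}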



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-165:

Labels: newbie test  (was: test)

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
>
> We have to add unit-test for {{OzoneHddsDatanodeService}} class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13622:
--
Attachment: (was: HDFS-13622.04.patch)

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist; the 
> correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-164) Add unit test for HddsDatanodeService

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-164:

Labels: newbie test  (was: test)

> Add unit test for HddsDatanodeService
> -
>
> Key: HDDS-164
> URL: https://issues.apache.org/jira/browse/HDDS-164
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
>
> We have to add unit-test for {{HddsDatanodeService}} class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-138) createVolume bug with non-existent user

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-138:

Labels: newbie usability  (was: usability)

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie, usability
>
> When createVolume is invoked for a non-existent user, it fails with 
> {{PartialGroupNameException}}.
> {code:java}
> hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
> 2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to 
> return groups for user nosuchuser
> PartialGroupNameException The user name 'nosuchuser' is not found. id: 
> ‘nosuchuser’: no such user
> id: ‘nosuchuser’: no such user
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>   at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
>   at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>   at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>   at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>   at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-05-31 20:40:17 INFO  RpcClient:210 - Creating Volume: vol4, with 
> nosuchuser as owner and quota set to 1152921504606846976 bytes.
> {code}
> However the volume appears to be created:
> {code:json}
> ozone oz -listVolume o3:/// -user nosuchuser -root
> [ {
>   "owner" : {
> "name" : "nosuchuser"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "vol4",
>   "createdOn" : "Thu, 31 May 2018 20:40:17 GMT",
>   "createdBy" : "nosuchuser"
> } ]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-61) Fix Ozone related doc links in hadoop-project/src/site/site.xml

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-61?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-61:
---
Labels: newbie  (was: )

> Fix Ozone related doc links in hadoop-project/src/site/site.xml
> ---
>
> Key: HDDS-61
> URL: https://issues.apache.org/jira/browse/HDDS-61
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: newbie
>
> Because the Hdds profile is off by default, the links in the generated site 
> will be invalid unless the maven profile -Phdds is specified. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13622) dfs mkdir should not report the directory which to be created

2018-07-19 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13622:
--
Attachment: HDFS-13622.04.patch

> dfs mkdir should not report the directory which to be created
> -
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "nonexistent" doesn't exist; the 
> correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-139) Output of createVolume can be improved

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-139:
---

Assignee: Junping Du  (was: Anu Engineer)

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Junping Du
>Priority: Major
>  Labels: newbie, usability
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the 
> quota is not specified. This number can either be specified in a friendly 
> format or omitted when the user did not use the \{{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}
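
A minimal sketch of the friendly-format option (the helper name is hypothetical 
and not part of the Ozone client):
{code:java}
public class FriendlyQuota {
  private static final String[] UNITS = {"B", "KB", "MB", "GB", "TB", "PB", "EB"};

  /** Renders a byte count in the largest unit that keeps the value >= 1. */
  static String humanReadable(long bytes) {
    double value = bytes;
    int unit = 0;
    while (value >= 1024 && unit < UNITS.length - 1) {
      value /= 1024;
      unit++;
    }
    // Drop the decimals when the value is a whole number of units.
    return (value == Math.rint(value))
        ? String.format("%.0f %s", value, UNITS[unit])
        : String.format("%.1f %s", value, UNITS[unit]);
  }

  public static void main(String[] args) {
    // 1152921504606846976 bytes is exactly 1 EB, so this prints "... 1 EB".
    System.out.println("Creating Volume: vol2, with hadoop as owner and quota set to "
        + humanReadable(1152921504606846976L));
  }
}
{code}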



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-139) Output of createVolume can be improved

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-139:

Labels: newbie usability  (was: usability)

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Anu Engineer
>Priority: Major
>  Labels: newbie, usability
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the 
> quota is not specified. This number can either be specified in a friendly 
> format or omitted when the user did not use the \{{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-270) Move generic container utils to ContianerUitls

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-270:
---

 Summary: Move generic container utils to ContianerUitls
 Key: HDDS-270
 URL: https://issues.apache.org/jira/browse/HDDS-270
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Some container util functions, such as getContainerFile(), are common to all 
ContainerTypes. These functions should be moved to ContainerUtils.

Also move some functions to KeyValueContainer as applicable.
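
For illustration, a sketch of the kind of helper that would live in 
ContainerUtils; the exact signature and file naming are hypothetical and not 
copied from the patch.
{code:java}
import java.io.File;

public final class ContainerUtilsSketch {
  private ContainerUtilsSketch() { }

  /**
   * Returns the .container descriptor file for the given container,
   * independent of the container type.
   */
  static File getContainerFile(File metadataDir, long containerId) {
    return new File(metadataDir, containerId + ".container");
  }

  public static void main(String[] args) {
    File f = getContainerFile(new File("/data/hdds/metadata"), 42L);
    System.out.println(f.getPath()); // /data/hdds/metadata/42.container
  }
}
{code}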

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-269) Refactor IdentifiableEventPayload to use a long ID

2018-07-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-269:

Attachment: HDDS-269.00.patch

> Refactor IdentifiableEventPayload to use a long ID
> --
>
> Key: HDDS-269
> URL: https://issues.apache.org/jira/browse/HDDS-269
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-269.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-216:

Status: Patch Available  (was: Open)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
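
A minimal sketch of the usual randomized-port trick such tests can rely on: bind 
to port 0 and let the OS hand back a free ephemeral port, instead of hard-coding 
9876 in the test configuration.
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class RandomPortExample {

  /** Asks the OS for a currently free ephemeral port. */
  static int pickFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket()) {
      socket.setReuseAddress(true);
      socket.bind(new InetSocketAddress(0));
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    int port = pickFreePort();
    // A test would now point the SCM HTTP address at 0.0.0.0:<port>, or simply
    // configure the address with port 0 and read the bound port back.
    System.out.println("Using randomized port " + port);
  }
}
{code}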



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-117) Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-117:
---

Assignee: Junping Du

> Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.
> 
>
> Key: HDDS-117
> URL: https://issues.apache.org/jira/browse/HDDS-117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Junping Du
>Priority: Major
>  Labels: newbie
>
> It will be very helpful to have a wrapper for set/get Standalone, Ratis and 
> Rest Ports in DatanodeDetails.
> Search and Replace usage of DatanodeDetails#newPort directly in current code. 
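
For illustration only, a sketch of what such a wrapper could look like; the enum 
and method names are hypothetical and not taken from DatanodeDetails.
{code:java}
import java.util.EnumMap;
import java.util.Map;

public class DatanodePortsSketch {
  enum PortName { STANDALONE, RATIS, REST }

  private final Map<PortName, Integer> ports = new EnumMap<>(PortName.class);

  void setPort(PortName name, int value) {
    ports.put(name, value);
  }

  int getPort(PortName name) {
    Integer value = ports.get(name);
    if (value == null) {
      throw new IllegalStateException(name + " port has not been set");
    }
    return value;
  }

  public static void main(String[] args) {
    DatanodePortsSketch details = new DatanodePortsSketch();
    details.setPort(PortName.RATIS, 9858); // arbitrary example value
    System.out.println("Ratis port: " + details.getPort(PortName.RATIS));
  }
}
{code}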



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-131) Replace pipeline info from container info with a pipeline id

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-131.
-
Resolution: Implemented

This has been implemented with HDDS-16 and HDDS-175. 

> Replace pipeline info from container info with a pipeline id
> 
>
> Key: HDDS-131
> URL: https://issues.apache.org/jira/browse/HDDS-131
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> Currently, in the containerInfo object, the complete pipeline object is 
> stored. The idea here is to decouple the pipeline info from container info 
> and replace it with a pipeline Id.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-117) Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-117:

Description: 
It will be very helpful to have a wrapper for set/get Standalone, Ratis and 
Rest Ports in DatanodeDetails.

Search and Replace usage of DatanodeDetails#newPort directly in current code. 

  was:It will be very helpful to have a wrapper for set/get Standalone, Ratis 
and Rest Ports in DatanodeDetails.


> Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.
> 
>
> Key: HDDS-117
> URL: https://issues.apache.org/jira/browse/HDDS-117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Priority: Major
>  Labels: newbie
>
> It will be very helpful to have a wrapper for set/get Standalone, Ratis and 
> Rest Ports in DatanodeDetails.
> Search and Replace usage of DatanodeDetails#newPort directly in current code. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-117) Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-117:

Labels: newbie  (was: )

> Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.
> 
>
> Key: HDDS-117
> URL: https://issues.apache.org/jira/browse/HDDS-117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Priority: Major
>  Labels: newbie
>
> It will be very helpful to have a wrapper for set/get Standalone, Ratis and 
> Rest Ports in DatanodeDetails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-266:

Fix Version/s: 0.2.1

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.
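
A minimal sketch of the idea, assuming a simple text layout for the .container 
file; the actual field layout and checksum algorithm in the patch may differ.
{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContainerFileChecksum {

  /** Hex-encoded SHA-256 of the given content. */
  static String sha256Hex(String data) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(data.getBytes(StandardCharsets.UTF_8));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    // Write: append a checksum line computed over the rest of the content.
    String body = "containerId: 7\ncontainerType: KeyValueContainer\n";
    String stored = body + "checksum: " + sha256Hex(body) + "\n";

    // Read: recompute over everything before the checksum line and compare.
    int idx = stored.lastIndexOf("checksum: ");
    String readBody = stored.substring(0, idx);
    String readSum = stored.substring(idx + "checksum: ".length()).trim();
    System.out.println("checksum valid: " + readSum.equals(sha256Hex(readBody)));
  }
}
{code}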



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-199) Implement ReplicationManager to replicate Closed Containers

2018-07-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549948#comment-16549948
 ] 

Xiaoyu Yao commented on HDDS-199:
-

Thanks [~elek] for working on this. The patch looks good to me. I just have a 
few comments below:

SCMEvents.java
Line 34-35: NIT: unused imports

SCMContainerPlacementRandom.java
Line 92-93: NIT: blank change.

SCMContainerPlacementCapacity.java
Line 100: can be removed, as super.chooseDatanodes() already removes the 
excludedNodes?

ScmConfigKeys.java
Line 250: Update TestCommonConfigurationFields?

StorageContainerManager.java
Line 222: we need to ensure the LeaseManager instance 
commandWatcherLeaseManager is shut down upon SCM stop, around line 585.

ReplicationCommandWatcher.java
Line 36: NIT: unused imports

ReplicationManager.java
Line 160: please update the title of the JIRA to reflect that we handle 
under-replicated containers only after this, and open a separate Jira.

TestReplicationManager.java
Line 139: should we put it within try{} finally{} to ensure the proper stop of 
the lease manager?

> Implement ReplicationManager to replicate Closed Containers
> ---
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch, HDDS-199.011.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific 
> conditions (the container is full, or the node has failed) the container will 
> be closed and will be replicated in a different way. The replication of Open 
> containers is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers where more replicas are missing), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width) and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the underreplicated/overreplicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.
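
For illustration only, a sketch of the prioritization described above; the class 
and field names are hypothetical and not taken from the patch.
{code:java}
import java.util.Comparator;
import java.util.PriorityQueue;

public class ReplicationQueueSketch {

  /** One under-replicated container event. */
  static final class UnderReplicated {
    final long containerId;
    final int expectedReplicas;
    final int actualReplicas;

    UnderReplicated(long containerId, int expected, int actual) {
      this.containerId = containerId;
      this.expectedReplicas = expected;
      this.actualReplicas = actual;
    }

    int missing() {
      return expectedReplicas - actualReplicas;
    }
  }

  public static void main(String[] args) {
    // Containers missing the most replicas are handled first.
    PriorityQueue<UnderReplicated> queue = new PriorityQueue<>(
        Comparator.comparingInt(UnderReplicated::missing).reversed());

    queue.add(new UnderReplicated(101, 3, 2)); // missing 1
    queue.add(new UnderReplicated(102, 3, 1)); // missing 2 -> handled first
    queue.add(new UnderReplicated(103, 3, 2)); // missing 1

    while (!queue.isEmpty()) {
      UnderReplicated c = queue.poll();
      // The real manager would pick destination datanodes and send a copy
      // command; a command watcher requeues the event if no confirmation
      // arrives before the lease expires.
      System.out.println("Replicate container " + c.containerId
          + " (missing " + c.missing() + " replica(s))");
    }
  }
}
{code}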



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-269) Refactor IdentifiableEventPayload to use a long ID

2018-07-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-269:
---

 Summary: Refactor IdentifiableEventPayload to use a long ID
 Key: HDDS-269
 URL: https://issues.apache.org/jira/browse/HDDS-269
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-268) Add SCM close container watcher

2018-07-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-268:
---

 Summary: Add SCM close container watcher
 Key: HDDS-268
 URL: https://issues.apache.org/jira/browse/HDDS-268
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549892#comment-16549892
 ] 

genericqa commented on HDFS-13746:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932273/HDFS-13746.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6e3ccaa80b96 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5836e0a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24615/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24615/testReport/ |
| Max. process+thread count | 2878 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24615/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-256) Adding CommandStatusReport Handler

2018-07-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549883#comment-16549883
 ] 

Xiaoyu Yao commented on HDDS-256:
-

Thanks [~ajayydv] for working on this. Patch v2 looks good to me; I just have a 
few minor comments:

CommandStatusReportHandler.java

Line 106/115: the comments need to be updated.

TestCommandStatusReportHandler.java

Missing ASF license header.

Line 37: storagePath is never used and can be removed.

>  Adding CommandStatusReport Handler
> ---
>
> Key: HDDS-256
> URL: https://issues.apache.org/jira/browse/HDDS-256
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-256.00.patch, HDDS-256.01.patch
>
>
> CommandStatusReportPublisher publishes the status of SCM commands via heartbeats. 
> This is the handler for those command status reports, responsible for sending 
> the command status to the corresponding watchers.
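
A minimal sketch of what such a handler could look like (the class and method names below are illustrative assumptions, not the attached patch): it receives each command status carried by a heartbeat report and forwards it to the watcher registered for that command type.

{code:java}
// Illustrative sketch only -- not the HDDS-256 patch. Shows the general shape
// of a handler that forwards command statuses to per-command-type watchers.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CommandStatusReportHandlerSketch {

  /** Hypothetical watcher interface; real watchers would retry or complete commands. */
  interface CommandWatcher {
    void onStatus(long commandId, String status);
  }

  private final Map<String, CommandWatcher> watchers = new ConcurrentHashMap<>();

  void register(String commandType, CommandWatcher watcher) {
    watchers.put(commandType, watcher);
  }

  /** Called once per command status carried by a heartbeat report. */
  void onMessage(String commandType, long commandId, String status) {
    CommandWatcher watcher = watchers.get(commandType);
    if (watcher != null) {
      watcher.onStatus(commandId, status);
    }
  }
}
{code}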



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-19 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549870#comment-16549870
 ] 

Hanisha Koneru commented on HDDS-249:
-

Thanks [~bharatviswa] for working on this.
Patch v04 LGTM.

I have just one minor comment. In TestEndPoint, line 206, when checking the 
output, can we verify that the "missing scm directory" error is for the 
expected scmId?

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch, HDDS-249.04.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on a datanode, fail that volume.
>  # Validate the SCM ID in the version response from SCM.
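
A minimal sketch of the volume check described above, with assumed names (this is not the attached patch): a volume is failed when more than one SCM directory is found, and an existing SCM directory must match the SCM ID returned by the version request.

{code:java}
// Sketch only: fail a volume that has multiple SCM directories and validate
// the SCM ID from the version response against the directory on disk.
import java.io.File;

class ScmVolumeCheckSketch {

  /** Returns true if the volume can be used for the given SCM ID. */
  static boolean isVolumeUsable(File volumeRoot, String scmIdFromVersionResponse) {
    File[] scmDirs = volumeRoot.listFiles(File::isDirectory);
    if (scmDirs == null || scmDirs.length > 1) {
      return false;                       // multiple SCM dirs: fail the volume
    }
    if (scmDirs.length == 0) {
      return true;                        // fresh volume, will be formatted
    }
    return scmDirs[0].getName().equals(scmIdFromVersionResponse);  // SCM ID check
  }
}
{code}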



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Attachment: HDDS-266.001.patch

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-266.001.patch
>
>
> Currently, each container's metadata consists of two files: a .container file 
> and a .checksum file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Status: Patch Available  (was: Open)

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-266.001.patch
>
>
> Currently, each container's metadata consists of two files: a .container file 
> and a .checksum file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-257:

Status: Patch Available  (was: Open)

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch, HDDS-257.002.patch
>
>
> When HddsDispatcher is shut down, it should call VolumeSet#shutdown to 
> shut down the volumes.
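
A minimal sketch of the intended hook-up (the class names come from the issue title; the method bodies are assumptions):

{code:java}
// Sketch only: HddsDispatcher#shutdown delegating to VolumeSet#shutdown so the
// volumes are released when the dispatcher stops.
class VolumeSet {
  void shutdown() {
    // close and release each volume here
  }
}

class HddsDispatcher {
  private final VolumeSet volumeSet;

  HddsDispatcher(VolumeSet volumeSet) {
    this.volumeSet = volumeSet;
  }

  void shutdown() {
    // ... stop request handlers, flush any in-flight state ...
    volumeSet.shutdown();   // the call this JIRA adds
  }
}
{code}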



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-257:

Status: Open  (was: Patch Available)

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch, HDDS-257.002.patch
>
>
> When HddsDispatcher is shut down, it should call VolumeSet#shutdown to 
> shut down the volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549788#comment-16549788
 ] 

genericqa commented on HDDS-216:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 56s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestReplicateContainerHandler
 |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.container.common.TestBlockDeletingService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932276/HDDS-216.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dcb9e24b1e3f 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (HDFS-13076) [SPS]: Cleanup work for HDFS-10285

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549781#comment-16549781
 ] 

genericqa commented on HDFS-13076:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m 
12s{color} | {color:red} root in HDFS-10285 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs-project in HDFS-10285 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m 
10s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 
12s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 12s{color} | 
{color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 12s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 
unchanged - 1 fixed = 1 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithHA |
|   | hadoop.hdfs.tools.TestStoragePolicySatisfyAdminCommands |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| 

[jira] [Updated] (HDDS-199) Implement ReplicationManager to replicate Closed Containers

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-199:

Status: Patch Available  (was: Open)

> Implement ReplicationManager to replicate Closed Containers
> ---
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch, HDDS-199.011.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific 
> conditions (the container is full, or a node has failed) the container will be 
> closed and replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the Closed containers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated).
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers with the most missing replicas), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width), and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the under-replicated/over-replicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.
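
A hedged sketch of the queueing idea described above (names and structure are illustrative, not the attached patches): under-replication events are kept in a priority queue ordered by how many replicas are missing, the head of the queue is turned into a copy command for a chosen destination datanode, and a watcher would re-queue the request if the command is not confirmed in time.

{code:java}
// Illustrative sketch of the replication loop only; not the HDDS-199 patch.
// Containers that are missing more replicas are served first.
import java.util.Comparator;
import java.util.PriorityQueue;

class ReplicationManagerSketch {

  static final class ReplicationRequest {
    final String containerId;
    final int missingReplicas;

    ReplicationRequest(String containerId, int missingReplicas) {
      this.containerId = containerId;
      this.missingReplicas = missingReplicas;
    }
  }

  private final PriorityQueue<ReplicationRequest> queue = new PriorityQueue<>(
      Comparator.comparingInt((ReplicationRequest r) -> r.missingReplicas).reversed());

  void onUnderReplicated(String containerId, int missingReplicas) {
    queue.add(new ReplicationRequest(containerId, missingReplicas));
  }

  /** One iteration: pick the most under-replicated container and issue a copy. */
  void processNext() {
    ReplicationRequest req = queue.poll();
    if (req == null) {
      return;
    }
    String destination = pickDatanode(req.containerId);   // simple placement first
    sendCopyCommand(req.containerId, destination);
    // A CopyCommandWatcher-style component would re-queue the event if the
    // copy is not confirmed within a timeout.
  }

  private String pickDatanode(String containerId) { return "datanode-1"; }  // placeholder
  private void sendCopyCommand(String containerId, String datanode) { }     // placeholder
}
{code}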



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-199) Implement ReplicationManager to replicate Closed Containers

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-199:

Status: Open  (was: Patch Available)

> Implement ReplicationManager to replicate Closed Containers
> ---
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch, HDDS-199.011.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific 
> conditions (the container is full, or a node has failed) the container will be 
> closed and replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the Closed containers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated).
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers with the most missing replicas), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width), and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the under-replicated/over-replicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549770#comment-16549770
 ] 

genericqa commented on HDDS-264:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-264 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932277/HDDS-264.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 306455c94c23 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5836e0a |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/558/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/558/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on 

[jira] [Updated] (HDDS-199) Implement ReplicationManager to replicate Closed Containers

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-199:

Summary: Implement ReplicationManager to replicate Closed Containers  (was: 
Implement ReplicationManager to replicate ClosedContainers)

> Implement ReplicationManager to replicate Closed Containers
> ---
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch, HDDS-199.011.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific 
> conditions (the container is full, or a node has failed) the container will be 
> closed and replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the Closed containers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated).
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers with the most missing replicas), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width), and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the under-replicated/over-replicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-75) Ozone: Support CopyContainer

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-75:
---
Labels:   (was: OzonePostMerge)

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> can read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the implementation 
> could prepare for the copy by pre-creating a compressed tar file from the 
> container data. As a first step we can provide a simple implementation which 
> creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term, the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination. 
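
A minimal sketch of the two-method source interface described above (the interface name is an assumption for illustration, not the attached patch):

{code:java}
// Sketch only: the source side of CopyContainer. prepare() is invoked right
// after the container close event; copyData() streams the (possibly
// pre-created) tarball to the destination.
import java.io.IOException;
import java.io.OutputStream;

interface ContainerReplicationSource {

  /** Called right after the closing event; may pre-create a compressed tar file. */
  void prepare(String containerName) throws IOException;

  /** Writes the raw container data (e.g. a tarball) to the given destination. */
  void copyData(String containerName, OutputStream destination) throws IOException;
}
{code}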



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-216:
-
Status: Open  (was: Patch Available)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
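
One common way to get a randomized free port in a test is to ask the OS for an ephemeral port; below is a sketch of that general technique (not necessarily what MiniOzoneCluster ends up doing).

{code:java}
// Sketch: bind to port 0 so the OS picks a free ephemeral port, avoiding
// collisions on fixed ports such as 9876.
import java.io.IOException;
import java.net.ServerSocket;

final class FreePortSketch {
  static int getRandomFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
}
{code}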



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13532) RBF: Adding security

2018-07-19 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549714#comment-16549714
 ] 

CR Hota commented on HDFS-13532:


[~elgoiri] [~xiaochen] The assumption is that renewals will also go through the 
router; since the router issues the DTs, clients bypassing the router and talking 
directly to the namenodes for renewal/cancellation won't work anyway. As far as 
renewals are concerned, from the client's perspective it will be one single call, 
which will either succeed or fail based on how the renewals went through for all 
downstream namenodes. These details will be covered in the design document for 
router delegation tokens in 
[HDFS-13358|https://issues.apache.org/jira/browse/HDFS-13358].

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, 
> Security_for_Router-based Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-07-19 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549706#comment-16549706
 ] 

Wangda Tan commented on HDFS-13596:
---

Given there has been no movement on this issue for 2 months and it is not a 
regression in 3.1.x, I have moved it to 3.1.2.

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
> 
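
An illustrative sketch (assumed field names, not actual NameNode code) of why keying the parser only on the layout version recorded in the log goes wrong: a field written by the newer software is skipped because the old layout version says it does not exist, and every following field is then read from the wrong offset.

{code:java}
// Sketch only: conditional parsing based on the stored layout version. When a
// 3.x writer emits the extra erasure-coding field but the log header still
// carries the 2.x layout version (rolling upgrade not yet finalized), the
// reader skips that field and all subsequent reads are mis-aligned.
import java.io.DataInputStream;
import java.io.IOException;

class EditOpReaderSketch {
  // Hypothetical threshold; HDFS layout versions grow more negative over time.
  static final int EC_SUPPORT_LAYOUT_VERSION = -64;

  static void readOp(DataInputStream in, int logLayoutVersion) throws IOException {
    long inodeId = in.readLong();
    if (logLayoutVersion <= EC_SUPPORT_LAYOUT_VERSION) {
      byte ecPolicyId = in.readByte();   // only read when the layout says it exists
    }
    long mtime = in.readLong();          // wrong bytes if the EC field was skipped
    // ... later fields cascade into the wrong bits, as in the stack traces above
  }
}
{code}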

[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-07-19 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HDFS-13596:
--
Target Version/s: 3.2.0, 3.0.4, 3.1.2  (was: 3.2.0, 3.1.1, 3.0.4)

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> 

[jira] [Updated] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-264:
-
Status: Patch Available  (was: Open)

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-216:
-
Status: Patch Available  (was: Open)

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549676#comment-16549676
 ] 

Sandeep Nemuri commented on HDDS-264:
-

Fixed the typo. Please review. 

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-264:

Attachment: HDDS-264.001.patch

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-264.001.patch
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-216:

Attachment: HDDS-216.002.patch

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
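
The description above is essentially a how-to for test isolation, so a minimal sketch of the underlying trick follows. It assumes nothing about MiniOzoneCluster's actual builder API: binding to port 0 lets the OS hand out a free ephemeral port, so concurrently running tests cannot collide on a fixed port such as 9876. The helper name below is illustrative, not the patch's implementation.

{code}
import java.io.IOException;
import java.net.ServerSocket;

/** Illustrative only: obtain a free ephemeral port for a test server. */
public final class TestPortUtil {
  private TestPortUtil() { }

  /** Ask the OS for an unused port by binding to port 0, then release it. */
  public static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    // A test could then bind its HTTP server to "0.0.0.0:" + getFreePort()
    // instead of the fixed 0.0.0.0:9876 that caused the failure above.
    System.out.println("Picked free port: " + getFreePort());
  }
}
{code}

There is a small race between releasing the probed port and the server binding it, which is why passing port 0 straight through to the server, where supported, is even safer.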



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-07-19 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549673#comment-16549673
 ] 

Sandeep Nemuri commented on HDDS-216:
-

Thanks for the review [~bharatviswa] and [~nandakumar131].

Attaching v2 patch addressing the comments.

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.2.1
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-267) Handle consistency issues during container update/close

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-267:
---

 Summary: Handle consistency issues during container update/close
 Key: HDDS-267
 URL: https://issues.apache.org/jira/browse/HDDS-267
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 
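
A common way to keep the on-disk file and the in-memory view consistent is to write the new contents to a temporary file and atomically rename it over the old one while holding the container lock, updating the in-memory state only after the rename succeeds. The sketch below illustrates that pattern under those assumptions; the file layout and method names are not the actual KeyValueContainer code.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

/** Illustrative pattern: update a .container file without exposing partial writes. */
public final class ContainerFileUpdater {
  private ContainerFileUpdater() { }

  public static synchronized void rewriteContainerFile(Path containerFile, String newContents)
      throws IOException {
    // Write the complete new state to a sibling temp file first.
    Path tmp = containerFile.resolveSibling(containerFile.getFileName() + ".tmp");
    Files.write(tmp, newContents.getBytes(StandardCharsets.UTF_8));
    // On POSIX this atomically replaces the old file, so readers never see a
    // half-written state.
    Files.move(tmp, containerFile, StandardCopyOption.ATOMIC_MOVE);
    // Only after the rename succeeds should the in-memory ContainerData be updated.
  }

  public static void main(String[] args) throws IOException {
    rewriteContainerFile(Paths.get("example.container"), "state: CLOSED\n");
  }
}
{code}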



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-266:
---

 Summary: Integrate checksum into .container file
 Key: HDDS-266
 URL: https://issues.apache.org/jira/browse/HDDS-266
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Currently, each container's metadata consists of 2 files - a .container file and a .checksum file.
In this Jira, we propose to integrate the checksum into the .container file 
itself. This will help with synchronization during container updates.
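
The idea lends itself to a short sketch, hedged heavily: the real .container format and field names may differ, so the metadata body and the "checksum:" field below are assumptions for illustration. The point is simply that a digest computed over the metadata can live in the same file and be re-verified on read.

{code}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative only: embed a checksum of the metadata into the same document. */
public final class EmbeddedChecksum {
  private EmbeddedChecksum() { }

  static String sha256Hex(String data) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(data.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    // Hypothetical metadata body; the actual .container fields may differ.
    String metadata = "containerId: 1\nstate: OPEN\n";
    String fileContents = metadata + "checksum: " + sha256Hex(metadata) + "\n";
    System.out.print(fileContents);
    // On read, recompute sha256Hex over the metadata portion and compare.
  }
}
{code}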



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
  Attachment: HDFS-13746.002.patch
Target Version/s: 3.0.0
  Status: Patch Available  (was: In Progress)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  
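
A minimal sketch of the retry-loop idea stated in the quoted solution, not the actual patch: re-check the condition up to MAX_RETRIES (100) times, sleeping 50 ms between attempts, and only report failure once all attempts are exhausted.

{code}
/** Illustrative retry helper for a condition that may need time to become true. */
public final class RetryUntilTrue {
  private static final int MAX_RETRIES = 100;
  private static final long RETRY_INTERVAL_MS = 50;

  private RetryUntilTrue() { }

  /** Returns true once the condition holds, or false after MAX_RETRIES attempts. */
  public static boolean waitFor(java.util.function.BooleanSupplier condition)
      throws InterruptedException {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(RETRY_INTERVAL_MS);
    }
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    final long deadline = System.currentTimeMillis() + 200;
    // Example condition: becomes true after ~200 ms, well within 100 * 50 ms.
    boolean ok = waitFor(() -> System.currentTimeMillis() >= deadline);
    System.out.println("Condition met: " + ok);
  }
}
{code}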



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13746 started by Siyao Meng.
-
> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-19 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549648#comment-16549648
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks [~ljain]. I have created HDDS-265 to track this.

I will go ahead and commit patch v03 if there are no further comments.

Test failures are unrelated to this patch.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-265:
---

 Summary: Move numPendingDeletionBlocks and deleteTransactionId 
from ContainerData to KeyValueContainerData
 Key: HDDS-265
 URL: https://issues.apache.org/jira/browse/HDDS-265
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


"numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
KeyValueContainers. As such they should be moved to KeyValueContainerData from 
ContainerData.

ContainerReport should also be refactored to accommodate this change. 

Please refer to [~ljain]'s comment in HDDS-250.
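
A hedged sketch of the proposed shape, not the actual classes: the KeyValue-specific counters live on the subclass, while the base class keeps only what every container type shares. Class and method names here are simplified stand-ins.

{code}
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative only: the base type keeps fields common to all container types. */
abstract class ContainerDataSketch {
  private final long containerId;

  protected ContainerDataSketch(long containerId) {
    this.containerId = containerId;
  }

  public long getContainerId() {
    return containerId;
  }
}

/** The KeyValue-specific fields move down to the subclass, per this proposal. */
class KeyValueContainerDataSketch extends ContainerDataSketch {
  private final AtomicLong numPendingDeletionBlocks = new AtomicLong(0);
  private volatile long deleteTransactionId = 0;

  KeyValueContainerDataSketch(long containerId) {
    super(containerId);
  }

  public void incrPendingDeletionBlocks(long delta) {
    numPendingDeletionBlocks.addAndGet(delta);
  }

  public long getNumPendingDeletionBlocks() {
    return numPendingDeletionBlocks.get();
  }

  /** Keep the highest delete transaction id seen so far. */
  public void updateDeleteTransactionId(long transactionId) {
    deleteTransactionId = Math.max(deleteTransactionId, transactionId);
  }

  public long getDeleteTransactionId() {
    return deleteTransactionId;
  }
}
{code}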



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Fix Version/s: 3.0.0

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Fix Version/s: (was: 3.0.0)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13746:
---
Fix Version/s: (was: 3.0.0)

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-07-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13746:
--
Fix Version/s: 3.0.0

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail up to 100 (MAX_RETRIES) times before 
> declaring failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549612#comment-16549612
 ] 

genericqa commented on HDDS-239:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdds_server-scm generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 33s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.TestContainerMapping |
|   | hadoop.hdds.scm.container.closer.TestContainerCloser 

[jira] [Commented] (HDFS-13076) [SPS]: Cleanup work for HDFS-10285

2018-07-19 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549591#comment-16549591
 ] 

Rakesh R commented on HDFS-13076:
-

Thanks [~umamaheswararao] for pointing this out. I've rebased the branch code 
onto trunk and triggered another build.

> [SPS]: Cleanup work for HDFS-10285
> --
>
> Key: HDFS-13076
> URL: https://issues.apache.org/jira/browse/HDFS-13076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, HDFS-13076-HDFS-10285-00.patch, 
> HDFS-13076-HDFS-10285-01.patch
>
>
> This Jira is to run aggregated HDFS-10285 branch patch against trunk and 
> check for any jenkins issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13749) Implement a new client protocol method to get NameNode state

2018-07-19 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13749:
---

 Summary: Implement a new client protocol method to get NameNode 
state
 Key: HDFS-13749
 URL: https://issues.apache.org/jira/browse/HDFS-13749
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


Currently {{HAServiceProtocol#getServiceStatus}} requires superuser privilege. 
Therefore, as a temporary solution, in HDFS-12976 we discover the NameNode state by 
calling {{reportBadBlocks}}. Here, we'll implement this properly by adding a 
new method to the client protocol to get the NameNode state.
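
To make the intent concrete, here is a purely hypothetical sketch of the kind of client-visible call this could add; the method name, return type, and where it lives are assumptions for illustration only, not the signature this jira actually introduces.

{code}
/** Hypothetical illustration; not the actual method added by this jira. */
public interface NameNodeStateProvider {

  /** Simplified HA states a client might care about. */
  enum HAState { ACTIVE, STANDBY, OBSERVER }

  /**
   * Returns the HA state of the NameNode. Unlike
   * HAServiceProtocol#getServiceStatus, this would be callable by any client.
   */
  HAState getNameNodeState();
}
{code}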



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13743) RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver

2018-07-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549531#comment-16549531
 ] 

Íñigo Goiri commented on HDFS-13743:


[^HDFS-13743.3.patch] pretty much does it; the warning is the way to go.
The unit test is good 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24613/testReport/org.apache.hadoop.hdfs.server.federation.resolver/TestInitializeMountTableResolver/]
 and it runs fast (~0.36 seconds).

A minor nit, instead of:
{code}
if (nsIds.iterator().hasNext()) {
{code}
We should just do:
{code}
if (!nsIds.isEmpty()) {
{code}

> RBF: Router throws NullPointerException due to the invalid initialization of 
> MountTableResolver
> ---
>
> Key: HDFS-13743
> URL: https://issues.apache.org/jira/browse/HDFS-13743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13743.1.patch, HDFS-13743.2.patch, 
> HDFS-13743.3.patch
>
>
> When {{dfs.federation.router.default.nameserviceId}} isn't set and any other 
> default name service isn't found, clients can't submit requests to the router 
> because of {{NullPointerException}}.
>  # client side
> {noformat}
> $ hadoop fs -ls hdfs://router:/
> ls: java.lang.NullPointerException
> {noformat}
>  # Router log
> {noformat}
> java.lang.NullPointerException
> at java.util.TreeMap.getEntry(TreeMap.java:347)
> at java.util.TreeMap.containsKey(TreeMap.java:232)
> at java.util.TreeSet.contains(TreeSet.java:234)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {noformat}
> The cause of this error is that the initialization of {{MountTableResolver}} 
> doesn't work properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-256) Adding CommandStatusReport Handler

2018-07-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-256:
---
Fix Version/s: 0.2.1

>  Adding CommandStatusReport Handler
> ---
>
> Key: HDDS-256
> URL: https://issues.apache.org/jira/browse/HDDS-256
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-256.00.patch, HDDS-256.01.patch
>
>
> CommandStatusReportPublisher publishes the status of SCM commands via heartbeats. 
> This is the handler for those command status reports, responsible for sending the 
> command status to the corresponding watchers. 
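
Sketched below under stated assumptions (the types and names are illustrative stand-ins, not the patch's classes): the handler consumes each command status carried by a heartbeat and notifies whichever watcher registered for that command id.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

/** Illustrative only: route per-command status updates to registered watchers. */
public final class CommandStatusReportHandlerSketch {

  /** Minimal stand-in for one reported command status. */
  public static final class CommandStatus {
    final long commandId;
    final String status;

    public CommandStatus(long commandId, String status) {
      this.commandId = commandId;
      this.status = status;
    }
  }

  private final Map<Long, Consumer<CommandStatus>> watchers = new ConcurrentHashMap<>();

  /** A watcher registers interest in a particular command id. */
  public void watch(long commandId, Consumer<CommandStatus> watcher) {
    watchers.put(commandId, watcher);
  }

  /** Called for every command status report carried by a datanode heartbeat. */
  public void onCommandStatusReport(List<CommandStatus> report) {
    for (CommandStatus status : report) {
      Consumer<CommandStatus> watcher = watchers.get(status.commandId);
      if (watcher != null) {
        watcher.accept(status);
      }
    }
  }

  public static void main(String[] args) {
    CommandStatusReportHandlerSketch handler = new CommandStatusReportHandlerSketch();
    handler.watch(42L, s -> System.out.println("Command " + s.commandId + " -> " + s.status));
    handler.onCommandStatusReport(Arrays.asList(new CommandStatus(42L, "EXECUTED")));
  }
}
{code}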



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-256) Adding CommandStatusReport Handler

2018-07-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-256:
---
Affects Version/s: 0.2.1

>  Adding CommandStatusReport Handler
> ---
>
> Key: HDDS-256
> URL: https://issues.apache.org/jira/browse/HDDS-256
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-256.00.patch, HDDS-256.01.patch
>
>
> CommandStatusReportPublisher publishes the status of SCM commands via heartbeats. 
> This is the handler for those command status reports, responsible for sending the 
> command status to the corresponding watchers. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-264:
---

Assignee: Sandeep Nemuri

> 'oz' subcommand reference is not present in 'ozone' command help
> 
>
> Key: HDDS-264
> URL: https://issues.apache.org/jira/browse/HDDS-264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Sandeep Nemuri
>Priority: Minor
> Fix For: 0.2.1
>
>
> 'oz' subcommand is not present in ozone help.
>  
> ozone help:
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> OPTIONS is none or any of:
> --buildpaths attempt to add class files from build tree
> --config dir Hadoop config directory
> --daemon (start|status|stop) operate on a daemon
> --debug turn on shell script debug mode
> --help usage information
> --hostnames list[,of,host,names] hosts to use in worker mode
> --hosts filename list of hosts to use in worker mode
> --loglevel level set the log4j level for this command
> --workers turn on worker mode
> SUBCOMMAND is one of:
>  Admin Commands:
> jmxget get JMX exported values from NameNode or DataNode.
> Client Commands:
> classpath prints the class path needed to get the hadoop jar and the
>  required libraries
> envvars display computed Hadoop environment variables
> freon runs an ozone data generator
> genconf generate minimally required ozone configs and output to
>  ozone-site.xml in specified path
> genesis runs a collection of ozone benchmarks to help with tuning.
> getozoneconf get ozone config values from configuration
> noz ozone debug tool, convert ozone metadata into relational data
> o3 command line interface for ozone
> scmcli run the CLI of the Storage Container Manager
> version print the version
> Daemon Commands:
> datanode run a HDDS datanode
> om Ozone Manager
> scm run the Storage Container Manager service
> SUBCOMMAND may print help when invoked w/o parameters or with -h.
> {noformat}
>  
> 'oz' subcommand example :
> 
>  
> {noformat}
> hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
> 2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-01597",
>  "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "vol-0-19478",
>  "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
>  "createdBy" : "hadoop"
> }, {
>  "owner" : {
>  "name" : "hadoop"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  }
>  
> {noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13076) [SPS]: Cleanup work for HDFS-10285

2018-07-19 Thread Uma Maheswara Rao G (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549443#comment-16549443
 ] 

Uma Maheswara Rao G commented on HDFS-13076:


I think we should rebase the code against the latest trunk to solve this build issue. 
It seems this was resolved in trunk with HADOOP-15610.

> [SPS]: Cleanup work for HDFS-10285
> --
>
> Key: HDFS-13076
> URL: https://issues.apache.org/jira/browse/HDFS-13076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, HDFS-13076-HDFS-10285-00.patch, 
> HDFS-13076-HDFS-10285-01.patch
>
>
> This Jira is to run aggregated HDFS-10285 branch patch against trunk and 
> check for any jenkins issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-239:
---
Attachment: HDDS-239.004.patch

> Add Pipeline StateManager to track and transition pipeline states
> -
>
> Key: HDDS-239
> URL: https://issues.apache.org/jira/browse/HDDS-239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-239.001.patch, HDDS-239.002.patch, 
> HDDS-239.003.patch, HDDS-239.004.patch
>
>
> With addition of pipeline recovery in Ozone, pipeline failures need to be 
> handled both in Ozone client as well as SCM. This jira adds a pipeline state 
> Manager to manage pipeline state transitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-239) Add Pipeline StateManager to track and transition pipeline states

2018-07-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549403#comment-16549403
 ] 

Mukul Kumar Singh commented on HDDS-239:


Thanks for the review, [~xyao]. Patch v4 addresses the review comments. 

PipelineManager.java

Line 112: NIT: an extra "pipeline"  in comments
bq. done
 
Line 127: should we use the lease manager to handle the CLOSING->CLOSED state 
transition when the CLOSE event is not received in time? This could be handled as 
a TODO in a separate JIRA.
bq. The pipeline will transition from CLOSING to CLOSED when all the underlying 
containers on the pipeline have been closed. This is being done as part of a 
follow-up jira.


PipelineSelector.java

Line 72-73: should we move the pipelineLeaseManager/stateMachine to the 
PipelineManager class so that RatisManagerImpl and StandaloneManagerImpl may 
have different state machines? This way, the state-transition logic around 
lines 261-266 can be consolidated into PipelineManager#initializePipeline().
bq. Interesting point. However, keeping the state machine in PipelineSelector 
ensures that the external behavior of the pipelines remains the same for both 
standalone and Ratis pipelines.

Also, createPipeline/closePipeline are not hooked up with state changes 
because we currently rely only on getReplicationPipeline() to create a pipeline.
bq. This is handled as part of the follow-up jira.

Line 399: I think we need to handle the create timeout by releasing the nodes 
when the DN gets closed. Is it possible to include it in this patch? We can 
leave the close pipeline, container, etc. for later.
bq. The next patch is ready right now. I just wanted to introduce a pipeline 
state manager with this patch and ensure that the current pipeline code keeps 
working. I will post the follow-up patch soon.
 
ScmConfigKeys.java:

Line 244: please update the unit test TestOzoneConfigurationFields that 
verifies these new configuration keys.
bq. Updated ozone-default.xml.
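
For readers following the discussion, here is a compact sketch of what a pipeline state manager can look like, using assumed state and event names (the actual patch defines its own lifecycle types): the allowed transitions are declared once and every event is validated against that table.

{code}
import java.util.EnumMap;
import java.util.Map;

/** Illustrative only: validate pipeline state transitions against a fixed table. */
public final class PipelineStateMachineSketch {

  enum State { ALLOCATED, CREATING, OPEN, CLOSING, CLOSED }
  enum Event { CREATE, CREATED, FINALIZE, CLOSE }

  private final Map<State, Map<Event, State>> transitions = new EnumMap<>(State.class);
  private State current = State.ALLOCATED;

  public PipelineStateMachineSketch() {
    addTransition(State.ALLOCATED, Event.CREATE, State.CREATING);
    addTransition(State.CREATING, Event.CREATED, State.OPEN);
    addTransition(State.OPEN, Event.FINALIZE, State.CLOSING);
    // CLOSING -> CLOSED fires once all containers on the pipeline are closed.
    addTransition(State.CLOSING, Event.CLOSE, State.CLOSED);
  }

  private void addTransition(State from, Event event, State to) {
    transitions.computeIfAbsent(from, s -> new EnumMap<Event, State>(Event.class)).put(event, to);
  }

  /** Applies an event, failing loudly on an invalid transition. */
  public synchronized State fire(Event event) {
    Map<Event, State> allowed = transitions.get(current);
    State next = (allowed == null) ? null : allowed.get(event);
    if (next == null) {
      throw new IllegalStateException("Invalid event " + event + " in state " + current);
    }
    current = next;
    return current;
  }

  public static void main(String[] args) {
    PipelineStateMachineSketch sm = new PipelineStateMachineSketch();
    sm.fire(Event.CREATE);
    sm.fire(Event.CREATED);
    System.out.println("Pipeline is now: " + sm.fire(Event.FINALIZE)); // CLOSING
  }
}
{code}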

> Add Pipeline StateManager to track and transition pipeline states
> -
>
> Key: HDDS-239
> URL: https://issues.apache.org/jira/browse/HDDS-239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-239.001.patch, HDDS-239.002.patch, 
> HDDS-239.003.patch
>
>
> With addition of pipeline recovery in Ozone, pipeline failures need to be 
> handled both in Ozone client as well as SCM. This jira adds a pipeline state 
> Manager to manage pipeline state transitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-264:
---

 Summary: 'oz' subcommand reference is not present in 'ozone' 
command help
 Key: HDDS-264
 URL: https://issues.apache.org/jira/browse/HDDS-264
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


'oz' subcommand is not present in ozone help.

 

ozone help:



 
{noformat}
hadoop@8ceb8dfccb36:~/bin$ ./ozone
Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
OPTIONS is none or any of:
--buildpaths attempt to add class files from build tree
--config dir Hadoop config directory
--daemon (start|status|stop) operate on a daemon
--debug turn on shell script debug mode
--help usage information
--hostnames list[,of,host,names] hosts to use in worker mode
--hosts filename list of hosts to use in worker mode
--loglevel level set the log4j level for this command
--workers turn on worker mode
SUBCOMMAND is one of:

 Admin Commands:
jmxget get JMX exported values from NameNode or DataNode.
Client Commands:
classpath prints the class path needed to get the hadoop jar and the
 required libraries
envvars display computed Hadoop environment variables
freon runs an ozone data generator
genconf generate minimally required ozone configs and output to
 ozone-site.xml in specified path
genesis runs a collection of ozone benchmarks to help with tuning.
getozoneconf get ozone config values from configuration
noz ozone debug tool, convert ozone metadata into relational data
o3 command line interface for ozone
scmcli run the CLI of the Storage Container Manager
version print the version
Daemon Commands:
datanode run a HDDS datanode
om Ozone Manager
scm run the Storage Container Manager service
SUBCOMMAND may print help when invoked w/o parameters or with -h.
{noformat}
 

'oz' subcommand example :



 
{noformat}
hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
[ {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "vol-0-01597",
 "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
 "createdBy" : "hadoop"
}, {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "vol-0-19478",
 "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
 "createdBy" : "hadoop"
}, {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 }
 
{noformat}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


