[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520953#comment-16520953
 ] 

genericqa commented on HDFS-13695:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 27s{color} 
| {color:red} root generated 1 new + 1561 unchanged - 2 fixed = 1562 total (was 
1563) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 
40s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}238m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928831/HDFS-13695.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a2a5b1dd8739 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1cdce86 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24486/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 

[jira] [Commented] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520937#comment-16520937
 ] 

Bharat Viswanadham commented on HDDS-183:
-

Attached the patch; it depends on HDDS-173.

Added DatanodeContainer, similar to OzoneContainer. (It will be renamed to 
OzoneContainer once the old OzoneContainer is removed.)
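
To make the "build container map from .container files during startup" part 
concrete, here is a minimal, hypothetical sketch; ContainerMapBuilder and 
parseContainerId are placeholder names, not the actual patch:

{code:java}
import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContainerMapBuilder {
  private final Map<Long, File> containerMap = new ConcurrentHashMap<>();

  /** Scan a volume's metadata directory and register every .container file. */
  public void buildContainerMap(File volumeMetadataDir) {
    File[] files = volumeMetadataDir.listFiles(
        (dir, name) -> name.endsWith(".container"));
    if (files == null) {
      return; // directory missing or unreadable
    }
    for (File containerFile : files) {
      containerMap.put(parseContainerId(containerFile), containerFile);
    }
  }

  /** Placeholder: derive the container ID from a name like "12.container". */
  private long parseContainerId(File containerFile) {
    String name = containerFile.getName();
    return Long.parseLong(name.substring(0, name.indexOf('.')));
  }
}
{code}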

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch, 
> HDDS-183-HDDS-48.02.patch
>
>
> This Jira adds the following:
> 1. Use the new VolumeSet.
> 2. Build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Attachment: HDDS-183-HDDS-48.02.patch

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch, 
> HDDS-183-HDDS-48.02.patch
>
>
> This Jira adds the following:
> 1. Use the new VolumeSet.
> 2. Build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Description: 
This Jira adds the following:

1. Use the new VolumeSet.

2. Build the container map from .container files during startup.

  was:
This class is used to handle keyValueContainer operations.

This Jira is to build container map from .container files during startup.


> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This Jira adds the following:
> 1. Use the new VolumeSet.
> 2. Build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520934#comment-16520934
 ] 

Bharat Viswanadham commented on HDDS-183:
-

Cancelling the patch: we will not handle building the container map in 
KeyValueContainerManager; it should be handled from OzoneContainer.

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This Jira is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Status: Open  (was: Patch Available)

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This Jira is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520924#comment-16520924
 ] 

Yiqun Lin commented on HDFS-13609:
--

Thanks for the explanation, [~xkrogen]. +1 for the fix.

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, 
> HDFS-13609-HDFS-12943.003.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520900#comment-16520900
 ] 

Arpit Agarwal commented on HDDS-189:


Thanks [~elek], fixed that line.

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch, HDDS-189.02.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-189:
---
Attachment: HDDS-189.02.patch

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch, HDDS-189.02.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520897#comment-16520897
 ] 

Elek, Marton commented on HDDS-189:
---

Looks good to me. I like the transition period; last time I did it without one 
and had a hard time with the older branches.

One minor note: the end of the README.md is also outdated (broken by another, 
older change):

{code}
cd dev-support/compose/ozone
{code}

Should be

{code}
cd hadoop-dist/target/compose/ozone
{code}

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-174) Shell error messages are often cryptic

2018-06-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520895#comment-16520895
 ] 

Arpit Agarwal commented on HDDS-174:


I made a trivial fix to get the real error message in the protobuf translator. 
Now it tells me the real error message, but the detailed cause/exception is 
still not printed.
{code}
$ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
{code}

For the detailed error, we need to go to the OzoneManager logs:
{code}
 2018-06-22 23:07:55 ERROR KeyManagerImpl:312 - Key open failed for volume:vol1 
bucket:bucket1 key:key1
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
Pipeline type=RATIS/replication=THREE couldn't be found for the new container. 
Do you have enoug
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:292)
at 
org.apache.hadoop.hdds.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:231)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:162)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:264)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6271)
{code}
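
For reviewers, a minimal, hypothetical sketch of the kind of change described 
above: catch the exception in the server-side translator and copy the 
underlying message into the response instead of returning only the error code. 
The types here (KeyResponse, KeyTranslatorSketch) are placeholders, not the 
actual Ozone protobuf classes:

{code:java}
import java.io.IOException;

/** Hypothetical stand-ins for the generated protobuf response fields. */
class KeyResponse {
  String status;   // e.g. "OK" or "KEY_ALLOCATION_ERROR"
  String message;  // human-readable cause, previously dropped
}

class KeyTranslatorSketch {
  KeyResponse createKey(String volume, String bucket, String key) {
    KeyResponse resp = new KeyResponse();
    try {
      allocateKey(volume, bucket, key);  // stands in for the call that may fail
      resp.status = "OK";
    } catch (IOException e) {
      resp.status = "KEY_ALLOCATION_ERROR";
      // The "trivial fix": also carry the real error text back to the client
      // instead of returning only the error code.
      resp.message = (e.getMessage() != null) ? e.getMessage() : e.toString();
    }
    return resp;
  }

  private void allocateKey(String volume, String bucket, String key)
      throws IOException {
    throw new IOException("Pipeline type=RATIS/replication=THREE couldn't be found");
  }
}
{code}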

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Critical
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-174) Shell error messages are often cryptic

2018-06-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520895#comment-16520895
 ] 

Arpit Agarwal edited comment on HDDS-174 at 6/22/18 11:32 PM:
--

I made a trivial fix to get the real error code in the protobuf translator. Now 
it tells me the real error code, but the detailed cause/exception is still 
not printed.
{code:java}
$ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
{code}
For the detailed error, we need to go to the OzoneManager logs:
{code:java}
 2018-06-22 23:07:55 ERROR KeyManagerImpl:312 - Key open failed for volume:vol1 
bucket:bucket1 key:key1
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
Pipeline type=RATIS/replication=THREE couldn't be found for the new container. 
Do you have enoug
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:292)
at 
org.apache.hadoop.hdds.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:231)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:162)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:264)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6271)
{code}


was (Author: arpitagarwal):
I made a trivial fix to get the real error message in the protobuf translator. 
Now it tells me the real error message, but the detailed cause/exception is 
still not printed.
{code}
$ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
{code}

For the detailed error, we need to go to the OzoneManager logs:
{code}
 2018-06-22 23:07:55 ERROR KeyManagerImpl:312 - Key open failed for volume:vol1 
bucket:bucket1 key:key1
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
Pipeline type=RATIS/replication=THREE couldn't be found for the new container. 
Do you have enoug
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:292)
at 
org.apache.hadoop.hdds.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:231)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:162)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:264)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6271)
{code}

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Critical
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-190) Improve shell error message for unrecognized option

2018-06-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-190:
--

 Summary: Improve shell error message for unrecognized option
 Key: HDDS-190
 URL: https://issues.apache.org/jira/browse/HDDS-190
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
 Fix For: 0.2.1


The error message with an unrecognized option is unfriendly. E.g.
{code}
$ ozone oz -badOption
Unrecognized option: -badOptionERROR: null
{code}
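
For whoever picks this up, here is a generic Apache Commons CLI sketch of the 
friendlier behavior this issue asks for (not the Ozone shell's actual code, and 
assuming commons-cli 1.3+ on the classpath): print the parse error and the 
usage text on separate lines, then exit with a non-zero status.

{code:java}
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ShellUsageSketch {
  public static void main(String[] args) {
    Options options = new Options();
    options.addOption("putKey", true, "upload a key");
    options.addOption("file", true, "local file holding the key contents");
    try {
      new DefaultParser().parse(options, args);
    } catch (ParseException e) {
      // Report the problem and the usage on separate lines, then fail cleanly
      // instead of printing "Unrecognized option: -badOptionERROR: null".
      System.err.println(e.getMessage());
      new HelpFormatter().printHelp("ozone oz", options);
      System.exit(1);
    }
  }
}
{code}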



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-190) Improve shell error message for unrecognized option

2018-06-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-190:
--
Labels: newbie  (was: )

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520859#comment-16520859
 ] 

Arpit Agarwal commented on HDDS-167:


v03 patch fixes javac and findbugs issues.

Still looking at the acceptance tests.

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Attachment: HDDS-167.03.patch

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520851#comment-16520851
 ] 

Konstantin Shvachko commented on HDFS-13609:


Erik, the last patch looks good. +1

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, 
> HDFS-13609-HDFS-12943.003.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520850#comment-16520850
 ] 

genericqa commented on HDFS-13609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
 9s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
2s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
27s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928807/HDFS-13609-HDFS-12943.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 81c7fa97f970 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 292ccdc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24485/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24485/testReport/ |
| Max. process+thread count | 3085 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2018-06-22 Thread Uma Maheswara Rao G (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520837#comment-16520837
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Thank you all. If there are no objections, I will bump the discuss thread on 
Monday.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-10285-consolidated-merge-patch-04.patch, 
> HDFS-10285-consolidated-merge-patch-05.patch, 
> HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. A 
> policy can be set on a directory or file to specify where its physical blocks 
> should be stored. When the user sets the storage policy before writing data, 
> the blocks are placed according to the policy's preferences.
> If the user sets the storage policy after the file has been written and 
> completed, the blocks will already have been written with the default storage 
> policy (DISK). The user then has to run the 'Mover' tool explicitly, passing 
> all such file names as a list. In some distributed scenarios (e.g. HBase) it 
> is difficult to collect all the files and run the tool, because different 
> nodes write files independently and the files can live under different paths.
> Another scenario: when the user renames a file from a directory with one 
> effective storage policy (inherited from its parent) into a directory with a 
> different policy, the inherited policy is not carried over from the source; 
> the file falls under the destination parent's storage policy. The rename is 
> only a metadata change in the Namenode, so the physical blocks remain placed 
> according to the source policy.
> Tracking all such files across distributed nodes (e.g. region servers) and 
> running the Mover tool is difficult for admins. The proposal here is to 
> provide an API in the Namenode itself to trigger storage policy satisfaction; 
> a daemon thread inside the Namenode would track such calls and send movement 
> commands to the DataNodes.
> A detailed design document will be posted soon.
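
For readers new to storage policies, a small illustrative sketch using the 
standard HDFS client API (the SPS API proposed in this JIRA is not shown, and 
the path is hypothetical). Setting a policy is a metadata-only change, so 
blocks written before the call keep their old placement until the Mover, or the 
proposed satisfier, physically moves them:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/archive/logs");  // hypothetical path

    // Metadata-only change: files written under /archive/logs from now on
    // follow the COLD policy, but blocks of files written earlier stay on
    // their current storage (typically DISK) until they are physically moved.
    fs.setStoragePolicy(dir, "COLD");
  }
}
{code}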



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v1.patch
Status: Patch Available  (was: Open)

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: (was: HDFS-13695.v1.patch)

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13695 started by Ian Pickering.

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v1.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13695 stopped by Ian Pickering.

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13695:
--

Assignee: Ian Pickering

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13688) Introduce msync API call

2018-06-22 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520803#comment-16520803
 ] 

Chen Liang edited comment on HDFS-13688 at 6/22/18 9:51 PM:


Posting a WIP patch for early review. This patch depends on HDFS-12976 and 
needs to be applied on top of the HDFS-12976 v002 patch.

Some notes on the patch for reviewers; comments are welcome:
 # Introduced a per-DFSClient AlignmentContext instance, which is passed to the 
proxy provider. Existing code ensures that all proxies created from this 
provider share this alignment context instance.
 # When the server sets the last seen id in the RPC response, it now uses 
lastAppliedOrWrittenID instead of the lastWritten id.
 # Currently a local spin loop with a 1ms interval and at most 1000 iterations 
waits for the observer to catch up (see the sketch after this list). This is 
based on the assumption that, with fast path tailing, the wait should not be 
very long, so the local spin might be sufficient and actually more efficient. 
This could be replaced by some other mechanism.
 # Leverage deferred responses and a dedicated new thread pool of 10 threads to 
handle all msync calls, so that handler threads are not handling (and 
potentially blocking on) msync. The 10 is hard-coded and can be made 
configurable if preferred.
 # Currently this is a call exposed through DFSClient and DistributedFileSystem 
that still needs to be called explicitly. Eventually every single call to the 
Observer should somehow be piggybacked with msync.
 # For a client that already has a state id set in its AlignmentContext, the 
msync call goes directly to the observer node to sync on that state id. If 
there is no state id set in the AlignmentContext (e.g. a freshly started 
client), the client first needs to get the current state id from the active NN 
by making a "setup" call. Based on offline discussion with Konstantin, we may 
not have to introduce a new "setup" call; any call will do, as long as it goes 
to the active. Currently ClientProtocol has getQuotaUsage, which is annotated 
with activeOnly = true, so the current patch makes a getQuotaUsage call on the 
root directory as the "setup" call.
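
To make item 3 concrete, a minimal sketch of the bounded spin-wait; names such 
as MsyncWaitSketch and lastAppliedTxId are placeholders, not the actual patch:

{code:java}
import java.util.function.LongSupplier;

public class MsyncWaitSketch {
  /**
   * Hypothetical bounded wait: poll every 1 ms, give up after 1000 iterations
   * (roughly one second), as described in item 3 above.
   */
  public static boolean waitForCatchUp(LongSupplier lastAppliedTxId, long targetTxId)
      throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
      if (lastAppliedTxId.getAsLong() >= targetTxId) {
        return true;   // observer caught up; the deferred msync response can be sent
      }
      Thread.sleep(1); // 1 ms interval
    }
    return false;      // still behind; the caller decides how to fail or retry
  }
}
{code}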


was (Author: vagarychen):
Post a WIP patch for early review. This patch depends on HDFS-12976, and needs 
to be applied on top of HDFS-12976 v002 patch.

Some notes on the patch for reviewers, comments are welcome!:
# introduced per dfsclient AlignmentContext instance, which gets passed proxy 
provider. Existing code ensures that all proxies created from this provider 
will have this alignment context instance. 
# when server sets the last seen id in the rpc response, changed from 
lastWritten id to lastAppliedOrWrittenID
# currently, using a local spin loop with 1ms interval, at most 1000 loops to 
wait for observer to catch up.
# leverage deferred response and a dedicated new thread pool of 10 thread to 
handle all msync, such that handler threads will not be handling (and 
potentially blocking) on msync call. 10 is hard coded, can be made configurable 
if more preferred.
# currently, this is a call exposed through DFSClient and 
DistributedFilesystem, still needs to be called explicitly. Will need to make 
it that every single call to Observer is somehow piggybacked with msync.
# for a client that already has a state id set in its alignmentContext, the 
msync call will directly calls into observer node to sync on this state id. But 
if there is no state id set in alignmentContext (e.g. a freshly started 
client). The client needs to first get the current state id from active NN, by 
making a "setup" call. Based on offline discussion with Konstantin, we may not 
have to introduce a new "setup" call. This can be done by making any call, as 
long as it is to active. Currently in ClientProtocol, there is getQuotaUsage 
which is annotated with activeOnly = true. So the current patch makes a 
getQuotaUsage call on root directory as a "setup" call.

> Introduce msync API call
> 
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13688-HDFS-12943.WIP.patch
>
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent reads, we 
> need to introduce an RPC call {{msync}}. Specifically, a client can issue an 
> msync call to the Observer node along with a transactionID. The msync will 
> only return when the Observer's transactionID has caught up to the given ID. 
> This JIRA is to add this API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520804#comment-16520804
 ] 

Arpit Agarwal commented on HDDS-189:


This patch applies to the docker-hadoop-runner branch.

It updates the starter script to look for both ENSURE_KSM_INITIALIZED and 
ENSURE_OM_INITIALIZED. Eventually the ENSURE_KSM_INITIALIZED block will go away.

[~elek], [~anu], can you please take a look?

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-189:
---
Attachment: HDDS-189.01.patch

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-189.01.patch
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-189:
--

 Summary: Update HDDS to start OzoneManager
 Key: HDDS-189
 URL: https://issues.apache.org/jira/browse/HDDS-189
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HDDS-167 is renaming KeySpaceManager to OzoneManager.

So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13688) Introduce msync API call

2018-06-22 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520803#comment-16520803
 ] 

Chen Liang commented on HDFS-13688:
---

Posting a WIP patch for early review. This patch depends on HDFS-12976 and 
needs to be applied on top of the HDFS-12976 v002 patch.

Some notes on the patch for reviewers; comments are welcome:
# Introduced a per-DFSClient AlignmentContext instance, which is passed to the 
proxy provider. Existing code ensures that all proxies created from this 
provider share this alignment context instance.
# When the server sets the last seen id in the RPC response, it now uses 
lastAppliedOrWrittenID instead of the lastWritten id.
# Currently a local spin loop with a 1ms interval and at most 1000 iterations 
waits for the observer to catch up.
# Leverage deferred responses and a dedicated new thread pool of 10 threads to 
handle all msync calls, so that handler threads are not handling (and 
potentially blocking on) msync. The 10 is hard-coded and can be made 
configurable if preferred.
# Currently this is a call exposed through DFSClient and DistributedFileSystem 
that still needs to be called explicitly. Eventually every single call to the 
Observer should somehow be piggybacked with msync.
# For a client that already has a state id set in its AlignmentContext, the 
msync call goes directly to the observer node to sync on that state id. If 
there is no state id set in the AlignmentContext (e.g. a freshly started 
client), the client first needs to get the current state id from the active NN 
by making a "setup" call. Based on offline discussion with Konstantin, we may 
not have to introduce a new "setup" call; any call will do, as long as it goes 
to the active. Currently ClientProtocol has getQuotaUsage, which is annotated 
with activeOnly = true, so the current patch makes a getQuotaUsage call on the 
root directory as the "setup" call.

> Introduce msync API call
> 
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13688-HDFS-12943.WIP.patch
>
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent reads, we 
> need to introduce an RPC call {{msync}}. Specifically, a client can issue an 
> msync call to the Observer node along with a transactionID. The msync will 
> only return when the Observer's transactionID has caught up to the given ID. 
> This JIRA is to add this API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-189) Update HDDS to start OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-189:
---
Fix Version/s: 0.2.1

> Update HDDS to start OzoneManager
> -
>
> Key: HDDS-189
> URL: https://issues.apache.org/jira/browse/HDDS-189
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> HDDS-167 is renaming KeySpaceManager to OzoneManager.
> So let's update Hadoop Runner accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13688) Introduce msync API call

2018-06-22 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13688:
--
Attachment: HDFS-13688-HDFS-12943.WIP.patch

> Introduce msync API call
> 
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13688-HDFS-12943.WIP.patch
>
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent reads, we 
> need to introduce an RPC call {{msync}}. Specifically, a client can issue an 
> msync call to the Observer node along with a transactionID. The msync will 
> only return when the Observer's transactionID has caught up to the given ID. 
> This JIRA is to add this API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Attachment: HDDS-167.02.patch

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-188) TestKSMMetrcis should not use the deprecated WhiteBox class

2018-06-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-188:
--

 Summary: TestKSMMetrcis should not use the deprecated WhiteBox 
class
 Key: HDDS-188
 URL: https://issues.apache.org/jira/browse/HDDS-188
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 0.2.1.


TestKSMMetrcis (also needs to be renamed) should stop using 
{{org.apache.hadoop.test.Whitebox}}.
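
The usual replacement is to expose the state the test needs through a 
{{@VisibleForTesting}} accessor instead of reading private fields reflectively. 
A minimal sketch; the class and counter names are hypothetical, not the actual 
KSM metrics code:

{code:java}
// Before (deprecated, reflective access from the test):
//   long allocates = (long) Whitebox.getInternalState(metrics, "numKeyAllocates");

// After: give the test a direct, supported accessor.
public class KsmMetricsSketch {
  private long numKeyAllocates;

  void incNumKeyAllocates() {
    numKeyAllocates++;
  }

  @com.google.common.annotations.VisibleForTesting
  long getNumKeyAllocates() {
    return numKeyAllocates;
  }
}
{code}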



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Ian Pickering (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520747#comment-16520747
 ] 

Ian Pickering commented on HDFS-13695:
--

Thanks [~giovanni.fumarola] for creating this issue. I'll attach V1 of the 
patch.
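
For context, the typical shape of this migration (the class here is 
illustrative, not a specific file from the patch): replace the commons-logging 
Log/LogFactory pair with slf4j's Logger/LoggerFactory and switch concatenated 
log messages to parameterized ones.

{code:java}
// Before (Apache commons-logging):
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(ExampleClass.class);
//   LOG.info("Processed block " + blockId + " in " + elapsedMs + " ms");

// After (slf4j):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleClass {
  private static final Logger LOG = LoggerFactory.getLogger(ExampleClass.class);

  void report(long blockId, long elapsedMs) {
    // Parameterized messages avoid the concatenation cost when the level is off.
    LOG.info("Processed block {} in {} ms", blockId, elapsedMs);
  }
}
{code}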

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-13695:

Description: Move logging to slf4j in HDFS package

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13695) Move logging to slf4j in HDFS package

2018-06-22 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created HDFS-13695:
---

 Summary: Move logging to slf4j in HDFS package
 Key: HDFS-13695
 URL: https://issues.apache.org/jira/browse/HDFS-13695
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520734#comment-16520734
 ] 

Erik Krogen edited comment on HDFS-13609 at 6/22/18 7:57 PM:
-

Thanks for looking at {{BackupImage}} [~shv]! I had no idea what was happening 
there :) I have removed this extra parameter altogether; now on the QJM 
{{optimizeLatency == inProgressOK}}. This reduced the scope of changes 
significantly.

[~linyiqun], thanks for taking a look! I agree with you on the Precondition 
check; I have incorporated it into v003. For your second comment, it should be {{< 
0}}. A return value of {{highestTxnCount == 0}} is expected behavior if you 
have read all available edits and continue to request more; this is not an 
error situation, while seeing a value {{< 0}} is an error. Let me know if you 
disagree.

Uploaded v003 patch incorporating all of [~shv] and [~linyiqun]'s comments.


was (Author: xkrogen):
Thanks for looking at {{BackupImage}} [~shv]! I had no idea what was happening 
there :) I have removed this extra parameter altogether; now on the QJM 
{{optimizeLatency == inProgressOK}}. This reduced the scope of changes 
significantly.

[~linyiqun], thanks for taking a look! I agree with you on the Precondition 
check; I have incorporated it into v003. For your second comment, it should be {{ 
< 0 }}. A return value of {{highestTxnCount == 0}} is expected behavior if you 
have read all available edits and continue to request more; this is not an 
error situation, while seeing a value {{ < 0 }} is an error. Let me know if you 
disagree.

Uploaded v003 patch incorporating all of [~shv] and [~linyiqun]'s comments.

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, 
> HDFS-13609-HDFS-12943.003.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targetted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520733#comment-16520733
 ] 

genericqa commented on HDDS-183:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
17s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
47s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
11s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:[line 123] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-183 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928805/HDDS-183-HDDS-48.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a5d2790e359c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / ca192cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520734#comment-16520734
 ] 

Erik Krogen commented on HDFS-13609:


Thanks for looking at {{BackupImage}} [~shv]! I had no idea what was happening 
there :) I have removed this extra parameter altogether; now on the QJM 
{{optimizeLatency == inProgressOK}}. This reduced the scope of changes 
significantly.

[~linyiqun], thanks for taking a look! I agree with you on the Precondition 
check; I have incorporated it into v003. For your second comment, it should be {{ 
< 0 }}. A return value of {{highestTxnCount == 0}} is expected behavior if you 
have read all available edits and continue to request more; this is not an 
error situation, while seeing a value {{ < 0 }} is an error. Let me know if you 
disagree.

Uploaded v003 patch incorporating all of [~shv] and [~linyiqun]'s comments.
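
For readers following along, a check of the kind being discussed might look roughly like this (a hedged sketch with an illustrative configuration key, not the actual v003 code):

{code:java}
// Hedged sketch of a Precondition check on the configured maximum transactions
// per RPC; the configuration key and default are illustrative, not from the patch.
import org.apache.hadoop.conf.Configuration;
import com.google.common.base.Preconditions;

class MaxTxnsPerRpcCheck {
  static int readMaxTxnsPerRpc(Configuration conf) {
    int maxTxnsPerRpc = conf.getInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 5000);
    Preconditions.checkArgument(maxTxnsPerRpc > 0,
        "Must request a positive number of transactions per RPC, got %s",
        maxTxnsPerRpc);
    return maxTxnsPerRpc;
  }
}
{code}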

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, 
> HDFS-13609-HDFS-12943.003.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targetted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13609:
---
Attachment: HDFS-13609-HDFS-12943.003.patch

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, 
> HDFS-13609-HDFS-12943.003.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targetted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.

2018-06-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520712#comment-16520712
 ] 

Erik Krogen commented on HDFS-12977:


[~vagarychen] I think this is not necessary. By the time the RPC response is 
returned to the client, the transaction must have been written to the edit log, 
so it will be included in {{getLastWrittenTransactionId()}}. We want to return 
as low an ID as possible, because that means less wait time on the Observer 
to catch up to the given ID. I think there may be some confusion - IIUC this 
{{getLastWrittenTransactionId()}} will be fetched on the _active_, not the 
standby, and used as the ID we need to wait for to be caught up.
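
To illustrate the flow being described, here is a hedged concept sketch (hypothetical names, not the actual patch): the active stamps each response with its last written transaction ID, the client tracks the highest ID it has seen, and an Observer serves a read only once it has applied edits up to that ID.

{code:java}
// Hedged concept sketch of the stateId flow; all names are hypothetical.
import java.util.function.LongSupplier;

class StateIdFlowSketch {

  /** Active NameNode: stamp every RPC response with the last written txn id. */
  static long stateIdForResponse(long lastWrittenTxId) {
    // The transaction is already in the edit log by the time the response is sent.
    return lastWrittenTxId;
  }

  /** Client: remember the highest state id observed so far. */
  static long updateLastSeenStateId(long lastSeenStateId, long responseStateId) {
    return Math.max(lastSeenStateId, responseStateId);
  }

  /** Observer: block a read until it has applied edits up to the client's state id. */
  static void waitUntilCaughtUp(long clientStateId, LongSupplier appliedTxId)
      throws InterruptedException {
    while (appliedTxId.getAsLong() < clientStateId) {
      Thread.sleep(10); // a real implementation would use wait/notify, not polling
    }
  }
}
{code}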

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch, 
> HDFS_12977.trunk.007.patch, HDFS_12977.trunk.008.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520706#comment-16520706
 ] 

Bharat Viswanadham edited comment on HDDS-183 at 6/22/18 7:11 PM:
--

Attached patch v01.

Fixed findbug issues.

Added missing logic to verify the checksum of the .container file.


was (Author: bharatviswa):
Fixed findbug issues.

Added missing logic to verify the checksum of the .container file.

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This JIRA is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520706#comment-16520706
 ] 

Bharat Viswanadham commented on HDDS-183:
-

Fixed findbug issues.

Added missing logic to verify the checksum of the .container file.
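
As a rough illustration of the two fixes mentioned (a hedged sketch, not the actual patch), checksum verification of a .container file and a null-safe directory listing of the kind findbugs is flagging could look like this:

{code:java}
// Hedged sketch only; method names, digest algorithm and layout are illustrative.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContainerChecksumSketch {

  /** Returns true if the SHA-256 of the .container file matches the expected hex digest. */
  static boolean verifyContainerFile(File containerFile, String expectedHexDigest)
      throws IOException {
    try (InputStream in = new FileInputStream(containerFile)) {
      MessageDigest digest = MessageDigest.getInstance("SHA-256");
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) > 0) {
        digest.update(buf, 0, n);
      }
      StringBuilder hex = new StringBuilder();
      for (byte b : digest.digest()) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString().equalsIgnoreCase(expectedHexDigest);
    } catch (NoSuchAlgorithmException e) {
      throw new IOException(e);
    }
  }

  /** Null-safe listing of a volume directory; File#listFiles() can return null. */
  static File[] listContainerFiles(File volumeDir) {
    File[] files = volumeDir.listFiles();
    return files != null ? files : new File[0];
  }
}
{code}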

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This JIRA is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Attachment: HDDS-183-HDDS-48.01.patch

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This JIRA is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Create over replicated queue

2018-06-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Description: Create over replicated queue to replicate over replicated 
containers in Ozone.  (was: Create under replicated queue to replicate under 
replicated containers in Ozone.)

> Create over replicated queue
> 
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
>
> Create over replicated queue to replicate over replicated containers in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-187) Create over replicated queue

2018-06-22 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-187:
---

 Summary: Create over replicated queue
 Key: HDDS-187
 URL: https://issues.apache.org/jira/browse/HDDS-187
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: 0.2.1


Create under replicated queue to replicate under replicated containers in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-186) Create under replicated queue

2018-06-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-186:

Description: Create under replicated queue to replicate under replicated 
containers in Ozone.  (was: Refactor ContainerInfo to remove Pipeline object 
from it. We can add below 4 fields to ContainerInfo to recreate pipeline if 
required:
# pipelineId
# replication type
# expected replication count
# DataNode where its replica exist)

> Create under replicated queue
> -
>
> Key: HDDS-186
> URL: https://issues.apache.org/jira/browse/HDDS-186
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
>
> Create under replicated queue to replicate under replicated containers in 
> Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-186) Create under replicated queue

2018-06-22 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-186:
---

 Summary: Create under replicated queue
 Key: HDDS-186
 URL: https://issues.apache.org/jira/browse/HDDS-186
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: 0.2.1


Refactor ContainerInfo to remove Pipeline object from it. We can add below 4 
fields to ContainerInfo to recreate pipeline if required:
# pipelineId
# replication type
# expected replication count
# DataNode where its replica exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable

2018-06-22 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520503#comment-16520503
 ] 

Kitti Nanasi edited comment on HDFS-13690 at 6/22/18 3:36 PM:
--

The new output looks like this now:
{code:java}
root@ad1edbfc9866:/# hdfs crypto -createZone -keyName mykey -path /zone
Could not create encryption zone: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed to connect 
to: http://localhost:9600/kms/v1/key/mykey/_metadata
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:486)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:894)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:394)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:391)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getMetadata(LoadBalancingKMSClientProvider.java:391)
at 
org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:125)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7131)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2055)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1449)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:144)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:482)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at 

[jira] [Commented] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable

2018-06-22 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520503#comment-16520503
 ] 

Kitti Nanasi commented on HDFS-13690:
-

The new output looks like this now: 

{code}
root@ad1edbfc9866:/# hdfs crypto -createZone -keyName mykey -path /zone
Could not create encryption zone: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed to connect 
to: http://localhost:9600/kms/v1/key/mykey/_metadata
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:486)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:894)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:394)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:391)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getMetadata(LoadBalancingKMSClientProvider.java:391)
at 
org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:125)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7131)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2055)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1449)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:144)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:482)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at 

[jira] [Updated] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable

2018-06-22 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13690:

Attachment: HDFS-13690.001.patch

> Improve error message when creating encryption zone while KMS is unreachable
> 
>
> Key: HDFS-13690
> URL: https://issues.apache.org/jira/browse/HDFS-13690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, hdfs, kms
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-13690.001.patch
>
>
> In failure testing, we stopped the KMS and then tried to run some 
> encryption-related commands.
> {{hdfs crypto -createZone}} will complain with a short "RemoteException: 
> Connection refused." This message could be improved to explain that we cannot 
> connect to the KMSClientProvider.
> For example, {{hadoop key list}} while KMS is down will error:
> {code}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: 
> Connection refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13694) Making md5 computing being in parallel with image loading

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520311#comment-16520311
 ] 

genericqa commented on HDFS-13694:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Should 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$DigestThread
 be a _static_ inner class?  At FSImageFormatProtobuf.java:inner class?  At 
FSImageFormatProtobuf.java:[lines 183-210] |
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13694 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928744/HDFS-13694-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a50fba5f9b8e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDDS-170) Fix TestBlockDeletingService#testBlockDeletionTimeout

2018-06-22 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-170:
-
Fix Version/s: 0.2.1

> Fix TestBlockDeletingService#testBlockDeletionTimeout
> -
>
> Key: HDDS-170
> URL: https://issues.apache.org/jira/browse/HDDS-170
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-170.001.patch
>
>
> TestBlockDeletingService#testBlockDeletionTimeout times out while waiting for 
> the expected error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-178) DeleteBlocks should not be handled by open containers

2018-06-22 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-178:
-
Fix Version/s: 0.2.1

> DeleteBlocks should not be handled by open containers
> -
>
> Key: HDDS-178
> URL: https://issues.apache.org/jira/browse/HDDS-178
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-178.001.patch, HDDS-178.002.patch, 
> HDDS-178.003.patch, HDDS-178.004.patch
>
>
> In the case of open containers, the deleteBlocks command just adds an entry to 
> the log but does not delete the blocks. These blocks are deleted only when the 
> container is closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13694) Making md5 computing being in parallel with image loading

2018-06-22 Thread zhouyingchao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HDFS-13694:

Attachment: HDFS-13694-001.patch
Status: Patch Available  (was: Open)

Tested the patch against an fsimage from a 70PB 2.4 cluster (200 million files 
and 300 million blocks); the image loading time was reduced from 1210 seconds to 
1105 seconds.
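
For readers, a minimal sketch of the idea (not the attached patch): the MD5 digest is computed on its own thread while the main thread loads the image, and the loader joins the digest thread before verifying the checksum.

{code:java}
// Hedged sketch of computing the fsimage MD5 in parallel with image loading.
// Names and structure are illustrative, not taken from HDFS-13694-001.patch.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ParallelMd5Sketch {

  /** Computes the MD5 of a file on its own thread. */
  static class DigestThread extends Thread {
    private final File file;
    private volatile byte[] digest;
    private volatile IOException error;

    DigestThread(File file) {
      this.file = file;
      setName("md5-digest-" + file.getName());
      setDaemon(true);
    }

    @Override
    public void run() {
      try (InputStream in = new FileInputStream(file)) {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[1024 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
          md5.update(buf, 0, n);
        }
        digest = md5.digest();
      } catch (IOException e) {
        error = e;
      } catch (NoSuchAlgorithmException e) {
        error = new IOException(e);
      }
    }
  }

  static void loadImage(File imageFile) throws IOException, InterruptedException {
    DigestThread digester = new DigestThread(imageFile);
    digester.start();        // MD5 computation runs concurrently...
    doLoadImage(imageFile);  // ...while the image itself is loaded here.
    digester.join();         // wait for the digest before verifying it
    if (digester.error != null) {
      throw digester.error;
    }
    // compare digester.digest with the stored checksum here
  }

  private static void doLoadImage(File imageFile) {
    // placeholder for the actual image-loading work
  }
}
{code}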

> Making md5 computing being in parallel with image loading
> -
>
> Key: HDFS-13694
> URL: https://issues.apache.org/jira/browse/HDFS-13694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhouyingchao
>Priority: Major
> Attachments: HDFS-13694-001.patch
>
>
> During namenode image loading, the md5 is computed first and then the image is 
> loaded. These two steps can actually run in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13694) Making md5 computing being in parallel with image loading

2018-06-22 Thread zhouyingchao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HDFS-13694:

Attachment: (was: HDFS-13694-001.patch)

> Making md5 computing being in parallel with image loading
> -
>
> Key: HDFS-13694
> URL: https://issues.apache.org/jira/browse/HDFS-13694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhouyingchao
>Priority: Major
>
> During namenode image loading, the md5 is computed first and then the image is 
> loaded. These two steps can actually run in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13694) Making md5 computing being in parallel with image loading

2018-06-22 Thread zhouyingchao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HDFS-13694:

Attachment: HDFS-13694-001.patch

> Making md5 computing being in parallel with image loading
> -
>
> Key: HDFS-13694
> URL: https://issues.apache.org/jira/browse/HDFS-13694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhouyingchao
>Priority: Major
> Attachments: HDFS-13694-001.patch
>
>
> During namenode image loading, the md5 is computed first and then the image is 
> loaded. These two steps can actually run in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-22 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520170#comment-16520170
 ] 

Takanobu Asanuma commented on HDDS-184:
---

I've confirmed the following on my local Mac. Kindly help to review it.
{noformat}
$ cd hadoop-ozone/acceptance-test
$ mvn integration-test -Phdds,ozone-acceptance-test,dist -DskipTests
...
==
Acceptance
==
Acceptance.Basic
==
Acceptance.Basic.Basic :: Smoketest ozone cluster startup
==
Test rest interface   | PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Basic.Basic :: Smoketest ozone cluster startup | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port  | PASS |
--
RestClient with http port | PASS |
--
RestClient without host name  | PASS |
--
RpcClient with port   | PASS |
--
RpcClient without host| PASS |
--
RpcClient without scheme  | PASS |
--
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage| PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==
Acceptance.Basic  | PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
Acceptance| PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
...
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
{noformat}

> Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
> 
>
> Key: HDDS-184
> URL: https://issues.apache.org/jira/browse/HDDS-184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-184.1.patch
>
>
> This is a separate task, split out from HADOOP-15495 for simplicity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13654) The default secret signature for httpfs is "hadoop httpfs secret", This should be a random string for better security.

2018-06-22 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-13654:
---

Assignee: Takanobu Asanuma

> The default secret signature for httpfs is "hadoop httpfs secret", This 
> should be a random string for better security. 
> ---
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Minor
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13654) The default secret signature for httpfs is "hadoop httpfs secret", This should be a random string for better security.

2018-06-22 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520143#comment-16520143
 ] 

Takanobu Asanuma commented on HDFS-13654:
-

Thanks for creating the issue, [~pbhardwaj]. I'd like to work on this.

I think it is important and will raise the priority.
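
As one possible direction (a hedged sketch only, not a committed approach), the secret file could be populated with a randomly generated value instead of the fixed default:

{code:java}
// Hedged sketch: generating a random httpfs signature secret.
// The file name and secret length are illustrative.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;
import java.util.Base64;

public class RandomHttpfsSecret {
  public static void main(String[] args) throws IOException {
    byte[] secret = new byte[32];
    new SecureRandom().nextBytes(secret);
    String encoded = Base64.getEncoder().encodeToString(secret);
    Path secretFile = Paths.get("httpfs-signature.secret");
    Files.write(secretFile, encoded.getBytes(StandardCharsets.UTF_8));
    System.out.println("Wrote random httpfs signature secret to " + secretFile);
  }
}
{code}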

> The default secret signature for httpfs is "hadoop httpfs secret", This 
> should be a random string for better security. 
> ---
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Pulkit Bhardwaj
>Priority: Minor
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520138#comment-16520138
 ] 

genericqa commented on HDDS-175:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 46s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} 

[jira] [Commented] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520133#comment-16520133
 ] 

genericqa commented on HDDS-183:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
42s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
51s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
19s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/container-service generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:[line 132] |
|  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 
KeyValueContainerManager.java:[line 137] |
|  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerManager$KeyValueContainerReader.readVolume(File)
 due to return value of called method  Dereferenced at 

[jira] [Created] (HDFS-13694) Making md5 computing being in parallel with image loading

2018-06-22 Thread zhouyingchao (JIRA)
zhouyingchao created HDFS-13694:
---

 Summary: Making md5 computing being in parallel with image loading
 Key: HDFS-13694
 URL: https://issues.apache.org/jira/browse/HDFS-13694
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: zhouyingchao


During namenode image loading, the NameNode first computes the md5 of the image 
file and then loads the image. These two steps could actually run in parallel.
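
A minimal sketch of the idea, assuming a hypothetical standalone class; the real 
change would live in the FSImage loading path, and the names below are placeholders:
{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMd5Sketch {
  public static void main(String[] args) throws Exception {
    File image = new File(args[0]);
    ExecutorService pool = Executors.newSingleThreadExecutor();
    // Start the md5 computation in the background instead of running it
    // to completion before the image load begins.
    Future<byte[]> md5 = pool.submit(() -> {
      MessageDigest digest = MessageDigest.getInstance("MD5");
      try (InputStream in = new FileInputStream(image)) {
        byte[] buf = new byte[64 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
          digest.update(buf, 0, n);
        }
      }
      return digest.digest();
    });
    loadImage(image);            // placeholder for the actual image loading
    byte[] expected = md5.get(); // join before verifying the checksum
    pool.shutdown();
    System.out.println("md5 is " + expected.length + " bytes");
  }

  private static void loadImage(File image) {
    // placeholder: real loading is done by the FSImage loaders
  }
}
{code}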



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-22 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520115#comment-16520115
 ] 

Yiqun Lin commented on HDFS-13609:
--

Hi [~xkrogen], I have recently been learning about *Consistent Reads from Standby Node*. 
I just reviewed this; two comments:
 * Looks like we should do a Precondition check when getting the {{maxTxnsPerRpc}} 
value from configuration. If an invalid max-txns value is configured (0 or less), no 
edit data will be returned.

 * 
{code:java}
  private void selectRpcInputStreams(Collection<EditLogInputStream> streams,
  long fromTxnId, boolean onlyDurableTxns) throws IOException {
...
 
int highestTxnCount = responseCounts.get(responseCounts.size() - 1);
if (LOG.isDebugEnabled() || highestTxnCount < 0) {
  ...
  msg.append(">");
  if (highestTxnCount < 0) {
throw new IOException("Did not get any valid JournaledEdits " +
"responses: " + msg);
  } else {
LOG.debug(msg.toString());
  }
}
...
}
{code}
Is {{highestTxnCount < 0}} accurate here? It seems {{highestTxnCount <= 0}} would be 
right. The txnCount returned by {{JournaledEditsCache#retrieveEdits}} can be 0 
(for example, when requestedStartTxn > highestTxnId).
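
For the first point, a minimal sketch of such a Precondition check; the config key 
name and default below are hypothetical, not the actual ones in the patch:
{code:java}
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;

class MaxTxnsPerRpcCheck {
  static int readMaxTxnsPerRpc(Configuration conf) {
    // Hypothetical key and default, for illustration only.
    int maxTxnsPerRpc = conf.getInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 5000);
    // Fail fast on 0 or negative values instead of silently returning no edits.
    Preconditions.checkArgument(maxTxnsPerRpc > 0,
        "Must configure a positive value, but got %s", maxTxnsPerRpc);
    return maxTxnsPerRpc;
  }
}
{code}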

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targetted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Fix Version/s: 0.2.1

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-183-HDDS-48.00.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This Jira is to build the container map from .container files during startup.
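
A minimal sketch of the startup scan described above; the class, map type, and 
file-name convention here are assumptions for illustration, not the actual patch:
{code:java}
import java.io.File;
import java.util.HashMap;
import java.util.Map;

class ContainerMapSketch {
  private final Map<Long, File> containerMap = new HashMap<>();

  void buildFromVolume(File volumeDir) {
    // Find every .container file under the volume directory.
    File[] files = volumeDir.listFiles((dir, name) -> name.endsWith(".container"));
    if (files == null) {
      return; // directory missing or unreadable
    }
    for (File containerFile : files) {
      // Assumption: the file name encodes the container id, e.g. "12.container".
      String base = containerFile.getName().replace(".container", "");
      try {
        containerMap.put(Long.parseLong(base), containerFile);
      } catch (NumberFormatException e) {
        // Skip files whose names do not parse as a container id.
      }
    }
  }
}
{code}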



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520107#comment-16520107
 ] 

Bharat Viswanadham commented on HDFS-13692:
---

Thank You [~linyiqun] for committing changes.

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. In 
> {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
> StorageInfo under each DN. If there are 1k nodes in the cluster and each node 
> has 10 data dirs configured, it will print 10k lines every compaction interval 
> (10 mins). That makes the log very large; we could switch the log level from INFO to 
> DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}
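
A minimal sketch of the proposed log-level change using slf4j parameterized logging; 
the class, method, and variable names are placeholders, not the actual BlockManager code:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FillRatioLogging {
  private static final Logger LOG = LoggerFactory.getLogger(FillRatioLogging.class);

  void logFillRatio(String storageId, double fillRatio) {
    // DEBUG instead of INFO so the per-storage lines no longer flood the log
    // on large clusters; the parameterized form costs nothing when DEBUG is off.
    LOG.debug("StorageInfo TreeSet fill ratio {} : {}", storageId, fillRatio);
  }
}
{code}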



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Status: Patch Available  (was: Open)

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-183-HDDS-48.00.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This Jira is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Attachment: HDDS-183-HDDS-48.00.patch

> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-183-HDDS-48.00.patch
>
>
> This class is used to handle KeyValueContainer operations.
> This Jira is to build the container map from .container files during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-185) TestCloseContainerByPipeline#testCloseContainerViaRatis fail intermittently

2018-06-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-185:
-
Description: 
The CloseCommand is picked up by the datanode from the heartbeat response it receives 
from SCM. It may happen that one of the follower nodes has not yet received the 
heartbeat response from SCM, whereas the leader has already been issued the 
closeContainer command encoded in its own heartbeat response from SCM. In 
such a case, the leader will close the container, followed by the followers.

The follower on which the container is closed via Ratis has not yet received 
any CloseContainer command from SCM directly, and hence the 
closeCommandHandler has not been invoked on it yet. Hence the assertion in the 
code below fails on one of the followers:
{code:java}
if (!containerData.isOpen()) {
  // make sure the closeContainerHandler on the Datanode is invoked
  Assert.assertTrue(
  datanodeService.getDatanodeStateMachine().getCommandDispatcher()
  .getCloseContainerHandler().getInvocationCount() > 0);
  return true;
}{code}
 

  was:
The CloseCommand is picked up by the datanode from the heartbeat response it receives 
from SCM. It may happen that one of the follower nodes has not yet received the 
heartbeat response from SCM, whereas the leader has already been issued the 
closeContainer command encoded in its own heartbeat response from SCM. In 
such a case, the leader will close the container, followed by the followers.

The follower on which the container is closed via Ratis has not yet received 
any CloseContainer command from SCM directly, and hence the 
closeCommandHandler will never be invoked on it. Hence the assertion in the 
code below fails on one of the followers:
{code:java}
if (!containerData.isOpen()) {
  // make sure the closeContainerHandler on the Datanode is invoked
  Assert.assertTrue(
  datanodeService.getDatanodeStateMachine().getCommandDispatcher()
  .getCloseContainerHandler().getInvocationCount() > 0);
  return true;
}{code}
 


> TestCloseContainerByPipeline#testCloseContainerViaRatis fail intermittently
> ---
>
> Key: HDDS-185
> URL: https://issues.apache.org/jira/browse/HDDS-185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> The CloseCommand is picked up by the datanode from the heartbeat response it receives 
> from SCM. It may happen that one of the follower nodes has not yet received the 
> heartbeat response from SCM, whereas the leader has already been issued the 
> closeContainer command encoded in its own heartbeat response from SCM. In 
> such a case, the leader will close the container, followed by the followers.
> The follower on which the container is closed via Ratis has not yet received 
> any CloseContainer command from SCM directly, and hence the 
> closeCommandHandler has not been invoked on it yet. Hence the assertion in the 
> code below fails on one of the followers:
> {code:java}
> if (!containerData.isOpen()) {
>   // make sure the closeContainerHandler on the Datanode is invoked
>   Assert.assertTrue(
>   datanodeService.getDatanodeStateMachine().getCommandDispatcher()
>   .getCloseContainerHandler().getInvocationCount() > 0);
>   return true;
> }{code}
>  
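> 
> One possible way to make the test tolerate the race (a sketch only, not 
> necessarily the eventual fix) is to wait for the handler invocation instead of 
> asserting immediately, reusing {{datanodeService}} from the snippet above:
> {code:java}
> // Sketch: poll (every 100 ms, up to 10 s) until the CloseContainer command
> // has reached the follower; waitFor is org.apache.hadoop.test.GenericTestUtils.
> GenericTestUtils.waitFor(() ->
>     datanodeService.getDatanodeStateMachine().getCommandDispatcher()
>         .getCloseContainerHandler().getInvocationCount() > 0,
>     100, 10000);
> {code}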



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-185) TestCloseContainerByPipeline#testCloseContainerViaRatis fail intermittently

2018-06-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-185:
-
Description: 
The CloseCommand is picked up by the datanode from the heartbeat response it receives 
from SCM. It may happen that one of the follower nodes has not yet received the 
heartbeat response from SCM, whereas the leader has already been issued the 
closeContainer command encoded in its own heartbeat response from SCM. In 
such a case, the leader will close the container, followed by the followers.

The follower on which the container is closed via Ratis has not yet received 
any CloseContainer command from SCM directly, and hence the 
closeCommandHandler will never be invoked on it. Hence the assertion in the 
code below fails on one of the followers:
{code:java}
if (!containerData.isOpen()) {
  // make sure the closeContainerHandler on the Datanode is invoked
  Assert.assertTrue(
  datanodeService.getDatanodeStateMachine().getCommandDispatcher()
  .getCloseContainerHandler().getInvocationCount() > 0);
  return true;
}{code}
 

> TestCloseContainerByPipeline#testCloseContainerViaRatis fail intermittently
> ---
>
> Key: HDDS-185
> URL: https://issues.apache.org/jira/browse/HDDS-185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> The CloseCommand is picked up by the datanode from the heartbeat response it receives 
> from SCM. It may happen that one of the follower nodes has not yet received the 
> heartbeat response from SCM, whereas the leader has already been issued the 
> closeContainer command encoded in its own heartbeat response from SCM. In 
> such a case, the leader will close the container, followed by the followers.
> The follower on which the container is closed via Ratis has not yet received 
> any CloseContainer command from SCM directly, and hence the 
> closeCommandHandler will never be invoked on it. Hence the assertion in the 
> code below fails on one of the followers:
> {code:java}
> if (!containerData.isOpen()) {
>   // make sure the closeContainerHandler on the Datanode is invoked
>   Assert.assertTrue(
>   datanodeService.getDatanodeStateMachine().getCommandDispatcher()
>   .getCloseContainerHandler().getInvocationCount() > 0);
>   return true;
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-185) TestCloseContainerByPipeline#testCloseContainerViaRatis fail intermittently

2018-06-22 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-185:


 Summary: TestCloseContainerByPipeline#testCloseContainerViaRatis 
fail intermittently
 Key: HDDS-185
 URL: https://issues.apache.org/jira/browse/HDDS-185
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org