[jira] [Commented] (HDFS-11828) [READ] Refactor FsDatasetImpl to use the BlockAlias from ClientProtocol for PROVIDED blocks.

2017-08-22 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137942#comment-16137942
 ] 

Virajith Jalaparti commented on HDFS-11828:
---

 [~ehiggs], similar to HDFS-11639, can we make this a sub-task of HDFS-12090?

> [READ] Refactor FsDatasetImpl to use the BlockAlias from ClientProtocol for 
> PROVIDED blocks.
> 
>
> Key: HDFS-11828
> URL: https://issues.apache.org/jira/browse/HDFS-11828
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>
> From HDFS-11639:
> {quote}[~virajith]
> Looking over this patch, one thing that occurred to me is whether it makes 
> sense to unify FileRegionProvider with BlockProvider; they have very similar 
> functionality.
> I like the use of BlockProvider#resolve(). If we unify FileRegionProvider 
> with BlockProvider, then resolve can return null if the block map is 
> accessible from the Datanodes also. If it is accessible only from the 
> Namenode, then a non-null value can be propagated to the Datanode.
> One of the motivations for adding the BlockAlias to the client protocol was 
> to have the blocks map only on the Namenode. In this scenario, the ReplicaMap 
> in FsDatasetImpl will not have any replicas a priori. Thus, one way to ensure 
> that the FsDatasetImpl interface continues to function as it does today is to 
> create a FinalizedProvidedReplica in FsDatasetImpl#getBlockInputStream when 
> the BlockAlias is not null.
> {quote}
> {quote}[~ehiggs]
> With the pending refactoring of the FsDatasetImpl which won't have replicas a 
> priori, I wonder if it makes sense for the Datanode to have a 
> FileRegionProvider or BlockProvider at all. They are given the appropriate 
> block ID and block alias in the readBlock or writeBlock message. Maybe I'm 
> overlooking what's still being provided.{quote}
> {quote}[~virajith]
> I was trying to reconcile the existing design (FsDatasetImpl knows about 
> provided blocks a priori) with the new design, where FsDatasetImpl will not 
> know about these blocks beforehand but constructs them on the fly using the 
> BlockAlias from readBlock or writeBlock. Using BlockProvider#resolve() allows 
> us to have both designs exist in parallel. I was wondering if we should still 
> retain the earlier design given the latter.
> {quote}
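
A minimal sketch of the lazy construction discussed above; the signatures of 
{{BlockAlias}}, {{FinalizedProvidedReplica}} and the surrounding fields are 
assumptions for illustration, not the feature branch's actual API:

{code}
// Hedged sketch: resolve a PROVIDED replica on the fly instead of
// pre-populating the ReplicaMap with provided blocks.
public InputStream getBlockInputStream(ExtendedBlock b, BlockAlias alias)
    throws IOException {
  ReplicaInfo replica = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
  if (replica == null && alias != null) {
    // The alias carried in readBlock/writeBlock says where the remote data
    // lives, so the Datanode needs no a-priori block map.
    replica = new FinalizedProvidedReplica(b.getBlockId(), alias.getUri(),
        alias.getOffset(), alias.getLength(), providedVolume);
  }
  if (replica == null) {
    throw new ReplicaNotFoundException("No replica or alias for " + b);
  }
  return replica.getDataInputStream(0); // stream backed by the remote store
}
{code}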






[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137938#comment-16137938
 ] 

Hadoop QA commented on HDFS-12283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 15m 
42s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 42s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883255/HDFS-12283-HDFS-7240.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux f737b30f044a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 

[jira] [Commented] (HDFS-11639) [READ] Encode the BlockAlias in the client protocol

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137939#comment-16137939
 ] 

Hadoop QA commented on HDFS-11639:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-11639 does not apply to HDFS-9806. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11639 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869427/HDFS-11639-HDFS-9806.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20816/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Encode the BlockAlias in the client protocol
> ---
>
> Key: HDFS-11639
> URL: https://issues.apache.org/jira/browse/HDFS-11639
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-11639-HDFS-9806.001.patch, 
> HDFS-11639-HDFS-9806.002.patch, HDFS-11639-HDFS-9806.003.patch, 
> HDFS-11639-HDFS-9806.004.patch, HDFS-11639-HDFS-9806.005.patch
>
>
> As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which 
> encodes information about where the data comes from. i.e. URI, offset, 
> length, and nonce value. This data should be encoded in the protocol 
> ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is 
> available using the PROVIDED storage type.






[jira] [Commented] (HDFS-11639) [READ] Encode the BlockAlias in the client protocol

2017-08-22 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137937#comment-16137937
 ] 

Virajith Jalaparti commented on HDFS-11639:
---

Hi [~ehiggs], as this change is required only for writes, we can move it to 
HDFS-12090. Are you OK with that?

> [READ] Encode the BlockAlias in the client protocol
> ---
>
> Key: HDFS-11639
> URL: https://issues.apache.org/jira/browse/HDFS-11639
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-11639-HDFS-9806.001.patch, 
> HDFS-11639-HDFS-9806.002.patch, HDFS-11639-HDFS-9806.003.patch, 
> HDFS-11639-HDFS-9806.004.patch, HDFS-11639-HDFS-9806.005.patch
>
>
> As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which 
> encodes information about where the data comes from. i.e. URI, offset, 
> length, and nonce value. This data should be encoded in the protocol 
> ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is 
> available using the PROVIDED storage type.






[jira] [Comment Edited] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137912#comment-16137912
 ] 

Weiwei Yang edited comment on HDFS-12282 at 8/23/17 5:25 AM:
-

Ah [~anu], I think you are right: this worker thread processes HBs and handles 
the states of nodes, so if it gets slowed down, there will be some lag in node 
status updates. 

bq. Why do we need this in the path of anything heart beat related?

In earlier discussions, we agreed to use HB to send block deletion TXs. Do you 
want to re-discuss that? Maybe move this part away from HB processing and 
create a new RPC call for it? I am totally fine with that too; your point about 
keeping HB lightweight and free of disk I/O makes sense to me. For the existing 
implementation logic, please refer to [^Block delete via HB between 
SCM and DN.png].

Thanks


was (Author: cheersyang):
Ah [~anu], I think you are right: this worker thread processes HBs and handles 
the states of nodes, so if it gets slowed down, there will be some lag in node 
status updates. 

bq. Why do we need this in the path of anything heart beat related?

In earlier discussions, we agreed to use HB to send block deletion TXs. Do you 
want to re-discuss that? Maybe move this part away from HB processing and 
create a new RPC call for it? I am totally fine with that too; your point about 
keeping HB lightweight and free of disk I/O makes sense to me. For the existing 
implementation logic, please refer to [^Block delete via HB between 
SCM and DN.png].

Thanks

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282-HDFS-7240.003.patch, HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions (steps 2-3 are sketched after this description), including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when processing the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in the DB
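
A rough sketch of the datanode side of steps 2-3, under assumed message shapes 
(the real protos on the HDFS-7240 branch differ):

{code}
// Illustrative only: process deletion transactions piggybacked on a
// heartbeat response, then ack them on the next heartbeat.
private final Queue<Long> pendingAcks = new ConcurrentLinkedQueue<>();

void handleHeartbeatResponse(SCMHeartbeatResponse response) {
  for (DeletedBlocksTransaction tx : response.getDeletedBlocksTransactions()) {
    // Step 2: mark the container blocks as deleting, persisting the state.
    markBlocksDeleting(tx.getContainerName(), tx.getBlockIds());
    // Step 3: ack on the next heartbeat; SCM then purges the transaction.
    pendingAcks.add(tx.getTxId());
  }
}
{code}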






[jira] [Updated] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12327:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Committed to the feature branch, thanks [~linyiqun] for your contribution.

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch, HDFS-12327-HDFS-7240.003.patch
>
>
> The background service should support a timeout setting in case a task hangs 
> due to unpredictable scenarios.
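
The usual shape of such a timeout, sketched with plain java.util.concurrent 
({{serviceTimeoutMs}} and the task variable are illustrative; the 
BackgroundService in the patch differs in detail):

{code}
// Bound each background task with a timeout so a hung task cannot stall the
// whole service. Pure JDK sketch, not the committed patch.
ExecutorService exec = Executors.newSingleThreadExecutor();
Future<?> f = exec.submit(task);
try {
  f.get(serviceTimeoutMs, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
  f.cancel(true);  // interrupt the hung task and let the service move on
} catch (InterruptedException | ExecutionException e) {
  throw new RuntimeException(e);  // surface unexpected task failures
}
{code}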






[jira] [Comment Edited] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137912#comment-16137912
 ] 

Weiwei Yang edited comment on HDFS-12282 at 8/23/17 5:24 AM:
-

Ah [~anu], I think you are right: this worker thread processes HBs and handles 
the states of nodes, so if it gets slowed down, there will be some lag in node 
status updates. 

bq. Why do we need this in the path of anything heart beat related?

In earlier discussions, we agreed to use HB to send block deletion TXs. Do you 
want to re-discuss that? Maybe move this part away from HB processing and 
create a new RPC call for it? I am totally fine with that too; your point about 
keeping HB lightweight and free of disk I/O makes sense to me. For the existing 
implementation logic, please refer to [^Block delete via HB between 
SCM and DN.png].

Thanks


was (Author: cheersyang):
Ah [~anu], I think you are right: this worker thread processes HBs and handles 
the states of nodes, so if it gets slowed down, there will be some lag in node 
status updates. 

bq. Why do we need this in the path of anything heart beat related?

In earlier discussions, we agreed to use HB to send block deletion TXs. Do you 
want to re-discuss that? Maybe move this part away from HB processing and 
create a new RPC call for it? I am totally fine with that too; your point about 
keeping HB lightweight and free of disk I/O makes sense to me.

Thanks

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282-HDFS-7240.003.patch, HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when processing the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in the DB






[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137912#comment-16137912
 ] 

Weiwei Yang commented on HDFS-12282:


Ah [~anu], I think you are right: this worker thread processes HBs and handles 
the states of nodes, so if it gets slowed down, there will be some lag in node 
status updates. 

bq. Why do we need this in the path of anything heart beat related?

In earlier discussions, we agreed to use HB to send block deletion TXs. Do you 
want to re-discuss that? Maybe move this part away from HB processing and 
create a new RPC call for it? I am totally fine with that too; your point about 
keeping HB lightweight and free of disk I/O makes sense to me.

Thanks

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282-HDFS-7240.003.patch, HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when processing the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in the DB






[jira] [Commented] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137899#comment-16137899
 ] 

Weiwei Yang commented on HDFS-12327:


+1 to v3 patch, I will commit this shortly.

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch, HDFS-12327-HDFS-7240.003.patch
>
>
> The background service should support a timeout setting in case a task hangs 
> due to unpredictable scenarios.






[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137891#comment-16137891
 ] 

Anu Engineer commented on HDFS-12282:
-

bq. SCMNodeManager#handleHeartbeat is already a worker thread to process HB 
queue, running in certain interval at background, even there is heavy I/O, it 
won't affect HB performance. 

I am afraid that might not be true. Imagine this scenario: the disk I/O is 
incredibly slow. If this thread does not finish execution, then SCM's view of 
the cluster will be stale; machines which are *really* dead will not be 
marked as dead, for example.

Now, these dead nodes would be assumed to be alive, and clients will be told to 
do I/O to these nodes. Hence overall error rates in the cluster would shoot up.

Let us take a step back; I wanted to understand something more high-level. Why 
do we need this in the path of anything heartbeat related? Maybe I am missing 
something here.
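
One way to keep the heartbeat path free of disk I/O, in the spirit of the 
concern above, is to have the HB handler only enqueue the deletion work for a 
separate worker (an illustrative sketch; all names are hypothetical, not SCM's 
actual code):

{code}
// Sketch: the HB handler touches only in-memory state; a dedicated worker
// does the disk-touching part, so dead-node detection never waits on I/O.
private final BlockingQueue<DeleteBlocksCommand> deletionQueue =
    new LinkedBlockingQueue<>();

void handleHeartbeat(DatanodeID dn, List<DeleteBlocksCommand> cmds) {
  updateLastHeartbeat(dn);      // cheap, in-memory bookkeeping only
  deletionQueue.addAll(cmds);   // defer anything that hits RocksDB/disk
}

void deletionWorkerLoop() throws InterruptedException {
  while (running) {
    DeleteBlocksCommand cmd = deletionQueue.take();
    persistDeletionState(cmd);  // slow I/O happens off the HB path
  }
}
{code}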




> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282-HDFS-7240.003.patch, HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when processing the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in the DB






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.16.patch

Thanks a lot Wei-Chiu!
Patch 16 fixes checkstyle issues and edits a few log messages. The failed 
tests did not reproduce locally.

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.16.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, 
> Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136469#comment-16136469
 ] 

Xiao Chen edited comment on HDFS-10899 at 8/23/17 4:26 AM:
---

bq. ReencryptionUpdater#throttle(): updater would keep contending for namenode 
lock
{{batchService.take();}} is a blocking call, so the updater just hangs there 
if there is nothing to do, leaving the NN lock untouched.
1.0 means no throttling, so it would be tough on locking - that's because this 
is intended to be run in a maintenance window, the same reason renames are 
disabled during this time.
The throttler also considers how many tasks are pending, to prevent tasks 
piling up on the NN heap.

bq. ... ZoneSubmissionTracker#tasks If there will always be just one 
ReencryptionHandler, then this is okay.
Good analysis. Yes, one handler.

bq. the edit log is written only when all tasks are successful.
Edit: I think I misread your comment.
Yes, in case of failures, we make a best effort by telling the admin 'there 
are failures, please examine and rerun'.


was (Author: xiaochen):
bq. ReencryptionUpdater#throttle(): updater would keep contending for namenode 
lock
{{batchService.take();}} is a blocking call, so the updater just hangs there 
if there is nothing to do, leaving the NN lock untouched.
1.0 means no throttling, so it would be tough on locking - that's because this 
is intended to be run in a maintenance window, the same reason renames are 
disabled during this time.
The throttler also considers how many tasks are pending, to prevent tasks 
piling up on the NN heap.

bq. ... ZoneSubmissionTracker#tasks If there will always be just one 
ReencryptionHandler, then this is okay.
Good analysis. Yes, one handler.

bq. the edit log is written only when all tasks are successful.
That {{updateReencryptionProgress}} call is to update the zone node with the 
progress. The actual file xattrs (i.e. the new EDEKs) are logged during the 
processing of each batch, via {{FSDirEncryptionZoneOp.setFileEncryptionInfo}}.
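
For context, the blocking pattern described above looks roughly like this; 
aside from {{batchService.take()}}, every name in the sketch is illustrative 
rather than the patch's actual code:

{code}
// Hedged sketch: take() blocks until a re-encryption batch completes, so an
// idle updater holds no NameNode lock at all.
void updaterLoop() throws InterruptedException, ExecutionException {
  while (running) {
    Future<ReencryptionTask> done = batchService.take(); // blocks when idle
    throttle();                 // a factor of 1.0 disables throttling
    namesystem.writeLock();
    try {
      updateReencryptionProgress(done.get()); // zone xattr / progress update
    } finally {
      namesystem.writeUnlock();
    }
  }
}
{code}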

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-08-22 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137866#comment-16137866
 ] 

John Zhuge commented on HDFS-12339:
---

Thanks [~saileshpatel] for the great report with so much detail!

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-13:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p : the following entry is still there:
> " 13 3 tcp 2049 nfs"






[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-22 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283-HDFS-7240.008.patch

fix the checkstyle issue.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch, 
> HDFS-12283-HDFS-7240.004.patch, HDFS-12283-HDFS-7240.005.patch, 
> HDFS-12283-HDFS-7240.006.patch, HDFS-12283-HDFS-7240.007.patch, 
> HDFS-12283-HDFS-7240.008.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how each is processed. We 
> can use RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a 
> datanode. It represents the "state" of the transaction and is in the range 
> \[-1, 5\]: -1 means the transaction eventually failed after some retries, 
> and 5 is the maximum number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.
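
A minimal shape such an interface could take, sketched against the schema 
above (method names and the {{DeletedBlocksTransaction}} type are 
illustrative, not the committed API):

{code}
// Illustrative interface for the SCM deleted-block log.
public interface DeletedBlockLog extends Closeable {
  /** Append one transaction: (TxID, containerName, blockList, count = 0). */
  void addTransaction(String containerName, List<String> blocks)
      throws IOException;

  /** Fetch up to num transactions whose ProcessedCount is in [0, 5). */
  List<DeletedBlocksTransaction> getTransactions(int num) throws IOException;

  /** Bump ProcessedCount, setting it to -1 once retries exceed the max. */
  void incrementCount(List<Long> txIDs) throws IOException;

  /** Remove transactions that datanodes have acknowledged as deleted. */
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}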






[jira] [Updated] (HDFS-12328) Ozone: Purge metadata of deleted blocks after max retry times

2017-08-22 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12328:
--
Description: 
In HDFS-12283, we set the value of count to -1 if blocks cannot be deleted 
after the maximum number of retries. We need to provide APIs for admins to 
purge this "-1" metadata manually. Implement these commands:
List the txids:
{code}
hdfs scm -txid list -count <count> -retry <retry>
{code}
Delete a txid:
{code}
hdfs scm -txid delete -id <txid>
{code}


  was: In HDFS-12283, we set the value of count to -1 if blocks cannot be 
deleted after the maximum number of retries. We need to provide APIs for 
admins to purge this "-1" metadata manually.


> Ozone: Purge metadata of deleted blocks after max retry times
> -
>
> Key: HDFS-12328
> URL: https://issues.apache.org/jira/browse/HDFS-12328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> In HDFS-12283, we set the value of count to -1 if blocks cannot be deleted 
> after the maximum number of retries. We need to provide APIs for admins to 
> purge this "-1" metadata manually. Implement these commands:
> List the txids:
> {code}
> hdfs scm -txid list -count <count> -retry <retry>
> {code}
> Delete a txid:
> {code}
> hdfs scm -txid delete -id <txid>
> {code}






[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-08-22 Thread Sailesh Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137817#comment-16137817
 ] 

Sailesh Patel commented on HDFS-12339:
--

2017-08-22 18:40:02,817 TRACE org.apache.hadoop.oncrpc.RpcCall: 
Xid:-2139249408, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
procedure:2, credential:(AuthFlavor:AUTH_NONE), verifier:(AuthFlavor:AUTH_NONE)
2017-08-22 18:40:02,818 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
Unregistration failure with localhost:2049, portmap entry: 
(PortmapMapping-13:3:6:2049)
2017-08-22 18:40:02,820 WARN org.apache.hadoop.util.ShutdownHookManager: 
ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
Unregistration failure
java.lang.RuntimeException: Unregistration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at org.apache.hadoop.nfs.nfs3.Nfs3Base$NfsShutdownHook.run(Nfs3Base.java:80)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Caused by: java.net.SocketException: Socket is closed
at java.net.DatagramSocket.send(DatagramSocket.java:641)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 3 more
2017-08-22 18:40:02,898 TRACE org.apache.hadoop.oncrpc.RpcCall: Xid:-690614368, 
messageType:RPC_CALL, rpcVersion:2, program:10, version:2, procedure:2, 
credential:(AuthFlavor:AUTH_NONE), verifier:(AuthFlavor:AUTH_NONE)
2017-08-22 18:40:02,899 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
Unregistration failure with localhost:4242, portmap entry: 
(PortmapMapping-15:1:17:4242)
2017-08-22 18:40:02,899 WARN org.apache.hadoop.util.ShutdownHookManager: 
ShutdownHook 'Unregister' failed, java.lang.RuntimeException: Unregistration 
failure
java.lang.RuntimeException: Unregistration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at org.apache.hadoop.mount.MountdBase$Unregister.run(MountdBase.java:100)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Caused by: java.net.SocketException: Socket is closed
at java.net.DatagramSocket.send(DatagramSocket.java:641)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 3 more
2017-08-22 18:40:02,900 INFO org.apache.hadoop.nfs.nfs3.Nfs3Base: SHUTDOWN_MSG:
 

Further testing:

1. service rpcbind stop
2. start rpcbind in debug mode in foreground: rpcbind -d
3. start NFS Gateway 
4. rpcbind will show the registration calls made, similar to:

PMAP_SET request for (15, 1) : Checking caller's adress (port = 40)
PMAP_SET request for (15, 2) : Checking caller's adress (port = 40)
PMAP_SET request for (15, 3) : Checking caller's adress (port = 40)
PMAP_SET request for (15, 1) : Checking caller's adress (port = 40)
PMAP_SET request for (15, 2) : Checking caller's adress (port = 40)
PMAP_SET request for (15, 3) : Checking caller's adress (port = 40)
PMAP_SET request for (13, 3) : Checking caller's adress (port = 40)

rpcinfo -p shows:
15 1 udp 4242 mountd
15 2 udp 4242 mountd
15 3 udp 4242 mountd
15 1 tcp 4242 mountd
15 2 tcp 4242 mountd
15 3 tcp 4242 mountd
13 3 tcp 2049 nfs

5. Stop NFS Gateway 

Notice the errors in the NFS Gateway role log, and that no unregistration 
calls arrived at rpcbind.


> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-13:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p : the following entry is still there:
> " 13 3 tcp 2049 nfs"





[jira] [Created] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-08-22 Thread Sailesh Patel (JIRA)
Sailesh Patel created HDFS-12339:


 Summary: NFS Gateway on Shutdown Gives Unregistration Failure. 
Does Not Unregister with rpcbind Portmapper
 Key: HDFS-12339
 URL: https://issues.apache.org/jira/browse/HDFS-12339
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Sailesh Patel




When stopping NFS Gateway the following error is thrown in the NFS gateway role 
logs.

2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
Unregistration failure with localhost:2049, portmap entry: 
(PortmapMapping-13:3:6:2049)

2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
Unregistration failure
java.lang.RuntimeException: Unregistration failure
..
Caused by: java.net.SocketException: Socket is closed
at java.net.DatagramSocket.send(DatagramSocket.java:641)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)

Checking rpcinfo -p : the following entry is still there:
" 13 3 tcp 2049 nfs"







[jira] [Comment Edited] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137598#comment-16137598
 ] 

Uma Maheswara Rao G edited comment on HDFS-12225 at 8/23/17 1:21 AM:
-

Hi [~surendrasingh], thanks for updating the patch. 

The latest patch almost looks good to me. I have a few more minor comments, 
though.

# .
{code}
Integer pendingWork = pendingWorkForDirectory.get(rootId) - 1;
+pendingWorkForDirectory.put(rootId, pendingWork);
+if (pendingWork != null && pendingWork <= 0) {
{code}
I think pendingWork can never be null, so no null check is needed there. 
Probably you need a null check on pendingWorkForDirectory.get(rootId) instead?
# .
{code}
pendingSPSTaskScanner = storageMovementNeeded.getPendingTaskScanner();
+pendingSPSTaskScanner.start();
{code}
Instead of getting the thread outside of the class, keep start/stop inside the 
class itself and expose methods for that. Could the BlockStorageMovementNeeded 
class have init and close methods?
# .
{code}
 BlockStorageMovementInfosBatch blkStorageMovementInfosBatch = nodeinfo
 .getBlocksToMoveStorages();
{code}
 Could you please move this variable down, just before the code piece below?
{code}
if (blkStorageMovementInfosBatch != null) {
+  cmds.add(new BlockStorageMovementCommand(
+  DatanodeProtocol.DNA_BLOCK_STORAGE_MOVEMENT,
+  blkStorageMovementInfosBatch.getTrackID(), blockPoolId,
+  blkStorageMovementInfosBatch.getBlockMovingInfo()));
+}
{code}
# . It seems we have two kinds of classes for tracking this info: 1. ItemInfo 
for tracking attempted items, and 2. SatisfyTrackInfo for tracking items that 
need storage movement. Should we unify the class naming? How about something 
like SatisfyTrackInfo --> ItemInfo and ItemInfo --> AttemptedItemInfo (extends 
ItemInfo?). Does this make sense to you?
# . Could you please provide clear documentation in the 
BlockStorageMovementNeeded class describing what it does and is responsible 
for? It now does more than the doc says.


was (Author: umamaheswararao):
Hi [~surendrasingh], thanks for updating the patch. 

The latest patch almost looks good to me. I have a few more minor comments, 
though.

# .
{code}
Integer pendingWork = pendingWorkForDirectory.get(rootId) - 1;
+pendingWorkForDirectory.put(rootId, pendingWork);
+if (pendingWork != null && pendingWork <= 0) {
{code}
I think pendingWork can never be null, so no null check is needed there. 
Probably you need a null check on pendingWorkForDirectory.get(rootId) instead?
# .
{code}
pendingSPSTaskScanner = storageMovementNeeded.getPendingTaskScanner();
+pendingSPSTaskScanner.start();
{code}
Instead of getting the thread outside of the class, keep start/stop inside the 
class itself and expose methods for that. Could the BlockStorageMovementNeeded 
class have init and close methods?
# .
{code}
 BlockStorageMovementInfosBatch blkStorageMovementInfosBatch = nodeinfo
 .getBlocksToMoveStorages();
{code}
 Could you please move this variable down, just before the code piece below?
{code}
if (blkStorageMovementInfosBatch != null) {
+  cmds.add(new BlockStorageMovementCommand(
+  DatanodeProtocol.DNA_BLOCK_STORAGE_MOVEMENT,
+  blkStorageMovementInfosBatch.getTrackID(), blockPoolId,
+  blkStorageMovementInfosBatch.getBlockMovingInfo()));
+}
{code}
# . It seems we have two kinds of classes for tracking this info: 1. ItemInfo 
for tracking attempted items, and 2. SatisfyTrackInfo for tracking items that 
need storage movement. Should we unify the class naming? How about something 
like SatisfyTrackInfo --> ItemInfo and ItemInfo --> AttemptedItemInfo (extends 
ItemInfo?). Does this make sense to you?

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch, HDFS-12225-HDFS-10285-03.patch, 
> HDFS-12225-HDFS-10285-04.patch, HDFS-12225-HDFS-10285-05.patch, 
> HDFS-12225-HDFS-10285-06.patch, HDFS-12225-HDFS-10285-07.patch, 
> HDFS-12225-HDFS-10285-08.patch
>
>
> We have discussed optimizing the number of extended attributes and agreed to 
> file a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For the context, the comment is copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote : I've tried that before. There is an issue here if we only 
> mark the directory. 

[jira] [Commented] (HDFS-12294) Let distcp to bypass external attribute provider when calling getFileStatus etc at source cluster

2017-08-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137676#comment-16137676
 ] 

Chris Douglas commented on HDFS-12294:
--

Replied on HDFS-12295, to keep the discussion in one place.

> Let distcp to bypass external attribute provider when calling getFileStatus 
> etc at source cluster
> -
>
> Key: HDFS-12294
> URL: https://issues.apache.org/jira/browse/HDFS-12294
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> This is an alternative solution for HDFS-12202, which proposed introducing a 
> new set of APIs with an additional boolean parameter, bypassExtAttrProvider, 
> to let the NN bypass the external attribute provider in getFileStatus. The 
> goal is to prevent distcp from copying attributes from one cluster's 
> external attribute provider and saving them to another cluster's fsimage.
> The solution here is, instead of adding a parameter, to encode it in the 
> path itself: when calling getFileStatus (and some other calls), the NN will 
> parse the path and figure out whether the external attribute provider needs 
> to be bypassed. The suggested encoding is to add a prefix to the path before 
> calling getFileStatus, e.g. /a/b/c becomes /.reserved/bypassExtAttr/a/b/c. 
> The NN will parse the path at the very beginning.
> Thanks much to [~andrew.wang] for this suggestion. The scope of change is 
> smaller and we don't have to change the FileSystem APIs.
>  
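
A sketch of the prefix handling this implies (a hypothetical helper, not the 
patch itself):

{code}
// Illustrative only: recognize the reserved prefix up front so downstream
// resolution can skip the external attribute provider.
static final String BYPASS_PREFIX = "/.reserved/bypassExtAttr";

static boolean shouldBypassExtAttrProvider(String path) {
  return path.startsWith(BYPASS_PREFIX + "/");
}

static String stripBypassPrefix(String path) {
  // e.g. /.reserved/bypassExtAttr/a/b/c -> /a/b/c
  return shouldBypassExtAttrProvider(path)
      ? path.substring(BYPASS_PREFIX.length())
      : path;
}
{code}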






[jira] [Commented] (HDFS-12335) Federation Metrics

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137675#comment-16137675
 ] 

Hadoop QA commented on HDFS-12335:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 14m 
33s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883205/HDFS-12335-HDFS-10467-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 70d93d6d11a4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / fc2c254 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20812/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20812/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20812/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20812/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation 

[jira] [Commented] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137674#comment-16137674
 ] 

Chris Douglas commented on HDFS-12295:
--

If the subtree under {{/.reserved/bypassExtAttr}} is read-only, that should 
address many of the issues that [~daryn] raised. As long as it's only the split 
generation that's using this API, that limits the cases that break when this 
feature is used.

The requirements for this feature - any user can perform backup-style copies 
using distcp - may be too broad. Your objective is to avoid cluttering the 
destination namesystem with xattrs from the external attribute provider at the 
source. Relying on _all_ users to set this flag correctly is unlikely to 
achieve this. What you want is the opposite: copying data between these 
clusters, by default, should take the path that reads the raw xattrs.

The less-invasive solutions attempt to relax the requirement that all users run 
distcp directly. While the user-facing solution satisfies all the requirements, 
it relies on cooperative users. Would it be feasible to add a layer of 
indirection in the deployments that need this functionality? If so, then we can 
make inter-cluster copies available to all users, without changing the 
internals of HDFS.

[Repeating|https://issues.apache.org/jira/browse/HDFS-12202?focusedCommentId=16120861&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16120861]
 from HDFS-12202, the {{distcp}} command can be swapped out in 3.x. In 
deployments with this requirement, users can contact a service to schedule an 
inter-cluster transfer. That backup user could not only be a special-case in 
the NameNode plugin, it could also help users avoid copying data from 
encryption zones into unprotected clusters (HDFS-6509).

If that's not feasible, can this use case be supported by extending 
MAPREDUCE-6007? If the src/dst are under {{/.reserved/raw}}, then omitting the 
external attribute provider is reasonable behavior.

> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let the NameNode support the path prefix /.reserved/bypassExtAttr, so a 
> client can add this prefix to a path before calling getFileStatus, e.g. 
> /a/b/c becomes /.reserved/bypassExtAttr/a/b/c. The NN will parse the path at 
> the very beginning and bypass the external attribute provider if the prefix 
> is there.






[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137598#comment-16137598
 ] 

Uma Maheswara Rao G commented on HDFS-12225:


Hi [~surendrasingh], thanks for updating the patch. 

The latest patch almost looks good to me. I have a few more minor comments, 
though.

# .
{code}
Integer pendingWork = pendingWorkForDirectory.get(rootId) - 1;
+pendingWorkForDirectory.put(rootId, pendingWork);
+if (pendingWork != null && pendingWork <= 0) {
{code}
I think pendingWork can never be null, so no null check is needed there. 
Probably you need a null check on pendingWorkForDirectory.get(rootId) instead?
# .
{code}
pendingSPSTaskScanner = storageMovementNeeded.getPendingTaskScanner();
+pendingSPSTaskScanner.start();
{code}
Instead of getting the thread outside of the class, keep start/stop inside the 
class itself and expose methods for that. Could the BlockStorageMovementNeeded 
class have init and close methods?
# .
{code}
 BlockStorageMovementInfosBatch blkStorageMovementInfosBatch = nodeinfo
 .getBlocksToMoveStorages();
{code}
 Could you please move this variable down, just before the code piece below?
{code}
if (blkStorageMovementInfosBatch != null) {
+  cmds.add(new BlockStorageMovementCommand(
+  DatanodeProtocol.DNA_BLOCK_STORAGE_MOVEMENT,
+  blkStorageMovementInfosBatch.getTrackID(), blockPoolId,
+  blkStorageMovementInfosBatch.getBlockMovingInfo()));
+}
{code}
# . It seems we have two kinds of classes for tracking this info: 1. ItemInfo 
for tracking attempted items, and 2. SatisfyTrackInfo for tracking items that 
need storage movement. Should we unify the class naming? How about something 
like SatisfyTrackInfo --> ItemInfo and ItemInfo --> AttemptedItemInfo (extends 
ItemInfo?, sketched below). Does this make sense to you?
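
The renaming proposed in point 4, sketched with illustrative fields:

{code}
// Sketch: a base ItemInfo for items that need storage movement, and
// AttemptedItemInfo specializing it for items whose movement was attempted.
class ItemInfo {
  private final long trackId;  // inode being tracked for movement
  ItemInfo(long trackId) {
    this.trackId = trackId;
  }
  long getTrackId() {
    return trackId;
  }
}

class AttemptedItemInfo extends ItemInfo {
  private final long lastAttemptedTime;  // for retry bookkeeping
  AttemptedItemInfo(long trackId, long lastAttemptedTime) {
    super(trackId);
    this.lastAttemptedTime = lastAttemptedTime;
  }
}
{code}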

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch, HDFS-12225-HDFS-10285-03.patch, 
> HDFS-12225-HDFS-10285-04.patch, HDFS-12225-HDFS-10285-05.patch, 
> HDFS-12225-HDFS-10285-06.patch, HDFS-12225-HDFS-10285-07.patch, 
> HDFS-12225-HDFS-10285-08.patch
>
>
> We have discussed optimizing the number of extended attributes and agreed to 
> file a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For the context, the comment is copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote: I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from the FsImage, the InodeMap isn't 
> built up, so we don't know the sub-inodes of a given inode; in the end, we 
> cannot add these inodes to the movement queue in FSDirectory#addToInodeMap. 
> Any thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. OK, for simplicity we can 
> add it for all inodes now. To handle this 100%, we may need intermittent 
> processing: first add them to some intermittent list while loading the 
> fsImage; once it is fully loaded and active services are starting, process 
> that list and do the required work. But that might add some additional 
> complexity. Let's do it for all file inodes now and revisit later if it 
> really creates issues. How about you raise a JIRA for it and think about 
> optimizing it separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in an HDFS-10285 merge-time review comment: HDFS-10899 
> also stores the cursor of the iterator in the EZ root xattr to track progress 
> and handle restarts. I wonder if we can do something similar here to avoid 
> having an xattr per file being moved.
> {quote}
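
To make the last quote concrete, a toy sketch of the cursor idea (the xattr
name and encoding below are invented purely for illustration; in practice the
NameNode would manage this internally):
{code}
// Toy sketch: persist one traversal cursor in an xattr on the directory being
// satisfied, instead of an xattr per file being moved.
byte[] cursor =
    ByteBuffer.allocate(Long.BYTES).putLong(lastProcessedInodeId).array();
fs.setXAttr(rootDir, "user.sps.cursor", cursor); // hypothetical xattr name
// On restart, read the cursor back and resume the traversal from there.
long resumeFrom =
    ByteBuffer.wrap(fs.getXAttr(rootDir, "user.sps.cursor")).getLong();
{code}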



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11888:
--
Attachment: HDFS-11888-HDFS-7240.004.patch

Patch 003 contains some unexpected YARN changes from an IntelliJ refactor/rename 
of the ContainerInfo class. I'm uploading patch 004 that fixes it.

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blocks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch, HDFS-11888-HDFS-7240.003.patch, 
> HDFS-11888-HDFS-7240.004.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned with the creationFlag to the client. 
> The other containers provisioned in the same batch will not have this flag. 
> Clients can't assume these containers, which have not been created on SCM 
> datanodes, are usable. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once the 
> container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 
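
For readers skimming the thread, a minimal sketch of the lifecycle this
description implies (the enum values and method names are illustrative, not
the actual patch):
{code}
// Illustrative container lifecycle: a provisioned container keeps its
// "needs creation" flag until a datanode container report confirms it.
enum ContainerState { ALLOCATED, CREATING, OPEN }

class ProvisionedContainer {
  private final String name;
  private ContainerState state = ContainerState.ALLOCATED;

  ProvisionedContainer(String name) { this.name = name; }

  // Persisted flag a client checks before first use of the container.
  boolean createContainerNeeded() { return state != ContainerState.OPEN; }

  // Eventually driven by the datanode container report handler on SCM.
  void onContainerReport() { state = ContainerState.OPEN; }
}
{code}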



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137583#comment-16137583
 ] 

Xiaoyu Yao edited comment on HDFS-11888 at 8/22/17 11:15 PM:
-

Patch 003 contains some unexpected YARN changes from an IntelliJ refactor/rename 
of the BlockContainerInfo class. I'm uploading patch 004 that fixes it.


was (Author: xyao):
Patch 003 contains some unexpected YARN changes from an IntelliJ refactor/rename 
of the ContainerInfo class. I'm uploading patch 004 that fixes it.

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blocks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch, HDFS-11888-HDFS-7240.003.patch, 
> HDFS-11888-HDFS-7240.004.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned with the creationFlag to the client. 
> The other containers provisioned in the same batch will not have this flag. 
> Clients can't assume these containers, which have not been created on SCM 
> datanodes, are usable. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once the 
> container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137468#comment-16137468
 ] 

Wei-Chiu Chuang edited comment on HDFS-10899 at 8/22/17 11:09 PM:
--

Thanks for the rev015 patch!

Looks like all the concerns found in the reviews are addressed.

Given that 
# this feature does not affect existing functionality if not used,
# there is sufficient proof that it works in an integrated scale test,
# and all deficiencies are considered and addressed,

I would like to vote my +1 for the latest, rev 015 patch (pending Jenkins and 
checkstyle), and will proceed to commit the patch after 24 hours if there's no 
objection. If there are minor deficiencies found afterwards, I'd like to 
suggest deferring them to a new jira.


was (Author: jojochuang):
Thanks for the rev015 patch!

Looks like all the concerns found in the reviews are addressed.

Given that 
# this feature does not affect existing functionality if not used,
# there is sufficient proof that it works in an integrated scale test,
# and all deficiencies are considered and addressed,

I would like to vote my +1 for the latest, rev 015 patch (pending Jenkins and 
checkstyle), and will proceed to commit the patch after 24 hours if there's no 
objection. If there are minor deficiencies found afterwards, I'd like to suggest 
deferring them to a new jira.

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12335) Federation Metrics

2017-08-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12335:
---
Attachment: HDFS-12335-HDFS-10467-005.patch

Fixing unit tests by adding back the unused metric.

> Federation Metrics
> --
>
> Key: HDFS-12335
> URL: https://issues.apache.org/jira/browse/HDFS-12335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12335-HDFS-10467-000.patch, 
> HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, 
> HDFS-12335-HDFS-10467-003.patch, HDFS-12335-HDFS-10467-004.patch, 
> HDFS-12335-HDFS-10467-005.patch
>
>
> Add metrics for the Router and the State Store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137535#comment-16137535
 ] 

Hadoop QA commented on HDFS-11888:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m 
21s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  9s{color} | {color:orange} root: The patch generated 13 new + 474 unchanged 
- 1 fixed = 487 total (was 475) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.web.client.TestKeys |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883171/HDFS-11888-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 7e38e711d7c4 

[jira] [Commented] (HDFS-12335) Federation Metrics

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137490#comment-16137490
 ] 

Hadoop QA commented on HDFS-12335:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 
14s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.federation.metrics.TestFederationMetrics |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.federation.router.TestRouter |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883181/HDFS-12335-HDFS-10467-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 40573664c5ac 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / fc2c254 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20811/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20811/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20811/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-12334) [branch-2] Add storage type demand into DFSNetworkTopology#chooseRandom

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137484#comment-16137484
 ] 

Hadoop QA commented on HDFS-12334:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
| JDK v1.7.0_131 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12334 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883176/HDFS-12334-branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 535e79f0df68 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 

[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137468#comment-16137468
 ] 

Wei-Chiu Chuang edited comment on HDFS-10899 at 8/22/17 9:55 PM:
-

Thanks for the rev015 patch!

Looks like all the concerns found in the reviews are addressed.

Given that 
# this feature does not affect existing functionality if not used,
# there is sufficient proof that it works in an integrated scale test,
# and all deficiencies are considered and addressed,

I would like to vote my +1 for the latest, rev 015 patch (pending Jenkins and 
checkstyle), and will proceed to commit the patch after 24 hours if there's no 
objection. If there are minor deficiencies found afterwards, I'd like to suggest 
deferring them to a new jira.


was (Author: jojochuang):
Thanks for the rev015 patch!

Looks like all the concerns found in the reviews are addressed.

Given that 
# this feature does not affect existing functionality if not used,
# there is sufficient proof that it works in an integrated scale test,
# and all deficiencies are considered and addressed,

I would like to vote my +1 for the latest, rev 015 patch, and will proceed to 
commit the patch after 24 hours if there's no objection. If there are minor 
deficiencies found afterwards, I'd like to suggest deferring them to a new 
jira.

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137468#comment-16137468
 ] 

Wei-Chiu Chuang commented on HDFS-10899:


Thanks for the rev015 patch!

Looks like all the concerns found in the reviews are addressed.

Given that 
# this feature does not affect existing functionality if not used,
# there is sufficient proof that it works in an integrated scale test,
# and all deficiencies are considered and addressed,

I would like to vote my +1 for the latest, rev 015 patch, and will proceed to 
commit the patch after 24 hours if there's no objection. If there are minor 
deficiencies found afterwards, I'd like to suggest deferring them to a new 
jira.

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12338) Ozone: SCM: clean up containers that time out during creation

2017-08-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12338:
-

 Summary: Ozone: SCM: clean up containers that time out during 
creation
 Key: HDFS-12338
 URL: https://issues.apache.org/jira/browse/HDFS-12338
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is a follow-up to HDFS-11888: we need to clean up containers that are 
allocated but never get created on datanodes or confirmed by the creator in a 
timely manner.
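
One possible shape for this cleanup, sketched with invented names and an
assumed timeout value; the real patch may look quite different:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: track when each container entered the creating state
// and reclaim entries that exceed a deadline without confirmation.
class StaleContainerReaper implements Runnable {
  private final Map<String, Long> creatingSince = new ConcurrentHashMap<>();
  private final long timeoutMs = TimeUnit.MINUTES.toMillis(5); // assumed value

  void onAllocated(String container) {
    creatingSince.put(container, System.currentTimeMillis());
  }

  void onCreationConfirmed(String container) {
    creatingSince.remove(container);
  }

  @Override
  public void run() {
    long now = System.currentTimeMillis();
    // Containers past the deadline would be returned to the free pool here.
    creatingSince.entrySet().removeIf(e -> now - e.getValue() > timeoutMs);
  }
}
{code}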



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137393#comment-16137393
 ] 

Hadoop QA commented on HDFS-11888:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m 
47s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project: The patch generated 13 new 
+ 0 unchanged - 1 fixed = 13 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.ozone.scm.TestXceiverClientManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883164/HDFS-11888-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 3ca6520411e9 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 

[jira] [Updated] (HDFS-12335) Federation Metrics

2017-08-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12335:
---
Attachment: HDFS-12335-HDFS-10467-004.patch

Fixing findbugs issues.

> Federation Metrics
> --
>
> Key: HDFS-12335
> URL: https://issues.apache.org/jira/browse/HDFS-12335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12335-HDFS-10467-000.patch, 
> HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, 
> HDFS-12335-HDFS-10467-003.patch, HDFS-12335-HDFS-10467-004.patch
>
>
> Add metrics for the Router and the State Store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-22 Thread Mikhail Erofeev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137312#comment-16137312
 ] 

Mikhail Erofeev commented on HDFS-12292:


Hey [~msingh], thank you. 
1) The logic is as follows: if it is not a DFS, we try to resolve it to another 
file system, which is possible for ViewFs/HarFs/FilterFs. If it is still not a 
DFS, we raise an exception.
2) The first check compares just some Strings, so PathData src, in theory, can 
differ only in a slash or an absent scheme (this is not true now, as the only 
call happens after expandAsGlob(), which normalizes paths for us). So if the 
path check fails, we can still compare the source fs and the resolved one, and 
if they are the same, we can skip the FileStatus resolve, I think. But it is 
just a premature optimization and I don't mind removing it.
3) There is a contract in fs.resolvePath() that the returned path is fully 
qualified. 


> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292-004.patch, HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is federated; 
> otherwise, an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient, prevents relying on fs.defaultFS, and forces users to 
> maintain client-side mappings for management scripts.
> Implementation:
> PathData that is passed to commands should be resolved to its actual 
> FileSystem.
> Result:
> ViewFS will be resolved to the actual HDFS file system
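
For context, a minimal sketch of the resolution described above, using the
standard FileSystem API (this is the idea, not the actual patch; the method
name is invented):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: resolve a viewfs:// path to the underlying HDFS before running an
// admin command; resolvePath() returns a fully qualified path by contract.
static DistributedFileSystem resolveToDfs(Path p, Configuration conf)
    throws IOException {
  FileSystem fs = p.getFileSystem(conf);
  if (!(fs instanceof DistributedFileSystem)) {
    fs = fs.resolvePath(p).getFileSystem(conf); // ViewFs/HarFs/FilterFs links
  }
  if (!(fs instanceof DistributedFileSystem)) {
    throw new IOException("FileSystem " + fs.getUri()
        + " is not an HDFS file system");
  }
  return (DistributedFileSystem) fs;
}
{code}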



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7967) Reduce the performance impact of the balancer

2017-08-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137268#comment-16137268
 ] 

Kihwal Lee commented on HDFS-7967:
--

[~djp], progress is stalled because the replica triplet structure was changed 
fundamentally in trunk, so we cannot make an equivalent change there. 
However, I think this should not block the 2.8.2 release, as this is not a 
regression and HDFS-11384 should also mitigate it.

> Reduce the performance impact of the balancer
> -
>
> Key: HDFS-7967
> URL: https://issues.apache.org/jira/browse/HDFS-7967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7967.branch-2.001.patch, 
> HDFS-7967.branch-2.002.patch, HDFS-7967.branch-2-1.patch, 
> HDFS-7967.branch-2.8.001.patch, HDFS-7967.branch-2.8.002.patch, 
> HDFS-7967.branch-2.8.003.patch, HDFS-7967.branch-2.8-1.patch, 
> HDFS-7967-branch-2.8.patch, HDFS-7967-branch-2.patch
>
>
> The balancer needs to query for blocks to move from overly full DNs.  The 
> block lookup is extremely inefficient.  An iterator of the node's blocks is 
> created from the iterators of its storages' blocks.  A random number is 
> chosen corresponding to how many blocks will be skipped via the iterator.  
> Each skip requires costly scanning of triplets.
> The current design also only considers node imbalances while ignoring 
> imbalances within the nodes' storages.  A more efficient and intelligent 
> design may eliminate the costly skipping of blocks via round-robin selection 
> of blocks from the storages based on remaining capacity.
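
To illustrate the round-robin alternative, a generic sketch (not the actual
balancer code; weighting by remaining capacity is omitted for brevity):
{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

// Sketch: rotate across per-storage block iterators instead of skipping a
// random number of blocks through one concatenated iterator.
class RoundRobinBlockPicker<B> {
  private final Deque<Iterator<B>> storages = new ArrayDeque<>();

  RoundRobinBlockPicker(List<Iterator<B>> perStorageIterators) {
    storages.addAll(perStorageIterators);
  }

  /** Next block, taken from each storage in turn; null when all drained. */
  B next() {
    while (!storages.isEmpty()) {
      Iterator<B> it = storages.pollFirst();
      if (it.hasNext()) {
        B block = it.next();
        storages.addLast(it); // this storage moves to the back of the queue
        return block;
      }
      // a drained storage simply drops out of the rotation
    }
    return null;
  }
}
{code}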



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12334) [branch-2] Add storage type demand into DFSNetworkTopology#chooseRandom

2017-08-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137266#comment-16137266
 ] 

Chen Liang edited comment on HDFS-12334 at 8/22/17 7:38 PM:


To trigger Jenkins with v2 patch.


was (Author: vagarychen):
To trigger Jenkins.

> [branch-2] Add storage type demand into DFSNetworkTopology#chooseRandom
> --
>
> Key: HDFS-12334
> URL: https://issues.apache.org/jira/browse/HDFS-12334
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12334-branch-2.001.patch, 
> HDFS-12334-branch-2.002.patch
>
>
> This JIRA is to backport HDFS-11514 to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12334) [branch-2] Add storage type demand into DFSNetworkTopology#chooseRandom

2017-08-22 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12334:
--
Attachment: HDFS-12334-branch-2.002.patch

To trigger Jenkins.

> [branch-2] Add storage type demand into DFSNetworkTopology#chooseRandom
> --
>
> Key: HDFS-12334
> URL: https://issues.apache.org/jira/browse/HDFS-12334
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12334-branch-2.001.patch, 
> HDFS-12334-branch-2.002.patch
>
>
> This JIRA is to backport HDFS-11514 to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137253#comment-16137253
 ] 

Anu Engineer commented on HDFS-11888:
-

+1, LGTM. I closely reviewed patch v2 and skimmed v3 since there is not much 
change. Pending Jenkins.

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blocks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch, HDFS-11888-HDFS-7240.003.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned with the creationFlag to the client. 
> The other containers provisioned in the same batch will not have this flag. 
> Clients can't assume these containers, which have not been created on SCM 
> datanodes, are usable. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once the 
> container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12335) Federation Metrics

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137233#comment-16137233
 ] 

Hadoop QA commented on HDFS-12335:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 14m 
10s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 403 unchanged - 0 fixed = 407 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883147/HDFS-12335-HDFS-10467-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d48d6b64fb71 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / fc2c254 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20807/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11888:
--
Attachment: HDFS-11888-HDFS-7240.003.patch

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blocks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch, HDFS-11888-HDFS-7240.003.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned with the creationFlag to the client. 
> The other containers provisioned in the same batch will not have this flag. 
> Clients can't assume these containers, which have not been created on SCM 
> datanodes, are usable. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once the 
> container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blocks

2017-08-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137215#comment-16137215
 ] 

Xiaoyu Yao commented on HDFS-11888:
---

Thanks [~vagarychen] for the review. I attached a new patch that addresses 1 
and 3. I will leave 2 as-is for now, in case we add additional info besides 
the state. 

bq. I saw there are two types of objects, pipeline and container. What is the 
difference between these two terms here?
A pipeline can be used by many containers. We should have separate wrappers for 
pipeline and container. In other words, the container API should return a 
container object that contains the name of the pipeline instead of the pipeline 
object. This will be done in some follow-up JIRAs.
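
A tiny sketch of the separation being described (the field names here are
hypothetical, not the eventual API):
{code}
// Hypothetical: the container API returns a container object carrying only
// the *name* of its pipeline; since one pipeline serves many containers,
// callers look the pipeline object up separately.
class ContainerInfo {
  private final String containerName;
  private final String pipelineName; // reference by name, not by object

  ContainerInfo(String containerName, String pipelineName) {
    this.containerName = containerName;
    this.pipelineName = pipelineName;
  }

  String getPipelineName() { return pipelineName; }
}
{code}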

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blocks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch, HDFS-11888-HDFS-7240.003.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned with the creationFlag to the client. 
> The other containers provisioned in the same batch will not have this flag. 
> Clients can't assume these containers, which have not been created on SCM 
> datanodes, are usable. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once the 
> container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12191) Provide option to not capture the accessTime change of a file to snapshot if no other modification has been done

2017-08-22 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137209#comment-16137209
 ] 

Manoj Govindassamy commented on HDFS-12191:
---

Thanks for working on this [~yzhangal]. My comments below.

1. {{DFSConfigKeys}}
{noformat}
DFS_NAMENODE_DONT_CAPTURE_ACCESSTIME_ONLY_CHANGE_IN_SNAPSHOT = 
"dfs.namenode.dont.capture.accesstime.only.change.in.snapshot"
{noformat}
1.1 Is there a better name for the above new config key? We have one other 
snapshot-related config with the prefix "dfs.namenode.snapshot...". Maybe we 
should have all snapshot-related ones under the same prefix for grouping and 
consistency. How about "dfs.namenode.snapshot.skip.accesstime-only-diff"? Your 
thoughts?
1.2 The config and default value lines have a checkstyle issue. Can you please 
take care of this?

2. {{DirectoryWithSnapshotFeature}}
2.1 The new blank line added at line 50 is not needed.
2.2 
{noformat}
} else {
  if (!dirCopy.metadataEquals(sdiff.snapshotINode)) {
dirMetadataChanged = true;
  }
}
{noformat}
Is this different from the existing "else if" block? It doesn't look so, so 
this can be reverted.

3. {{hdfs-default.xml}}
3.1 Once the config key is changed, it needs to be incorporated here.
3.2 "..it will not be captured in next snapshot." sounds ambiguous. The access 
time change history is not preserved, but the file is still part of the 
snapshot. Can we reword this? Also, there is a typo in "lastest".

4. {{class INode}}
{noformat}
  private static boolean dontCaptureAccessTimeOnlyChangeInSnapshot = false;

  public static void setDontCaptureAccessTimeOnlyChangeInSnapshot(boolean s) {
LOG.info("Setting dontCaptureAccessTimeOnlyChangeInSnapshot to " + s);
dontCaptureAccessTimeOnlyChangeInSnapshot = s;
  }
{noformat}
4.1 Are there better ways of doing this than adding a static member to the 
core INode class? SnapshotManager could be the one to read all the 
snapshot-related configuration and take the decision accordingly, instead of 
having the logic at the INode level. A rough sketch follows at the end of this 
comment.

{noformat}
  public static boolean getDontCaptureAccessTimeOnlyChangeInSnapshot() {
return dontCaptureAccessTimeOnlyChangeInSnapshot;
  }
{noformat}
4.2 The above method is never used.

5. {{FSDirAttrOp}}

{noformat}
  static boolean unprotectedSetTimes(
  FSDirectory fsd, INodesInPath iip, long mtime, long atime, boolean force)
  throws QuotaExceededException {

if (mtime != -1) {
  inode = inode.setModificationTime(mtime, latest);
  status = true;
}
...
if (atime != -1 && (status || force
|| atime > inode.getAccessTime() + fsd.getAccessTimePrecision())) {
  inode.setAccessTime(atime, latest);
  status = true;
}
}
{noformat}

>> "if there is other modification made to the file, the latest access
time will be captured together with the modification in next snapshot."

Looks like the accessTime will always be skipped with the config turned on. 
Please correct me if my understanding of the code is wrong.
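
For point 4.1, a rough sketch of the alternative, using the proposed (not
final) config key; names are illustrative only:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch for 4.1: SnapshotManager reads the snapshot-related
// config once, instead of a static flag on the core INode class.
class SnapshotManager {
  private final boolean skipAccessTimeOnlyChange;

  SnapshotManager(Configuration conf) {
    // Key name follows the naming proposed in 1.1; default is off.
    this.skipAccessTimeOnlyChange = conf.getBoolean(
        "dfs.namenode.snapshot.skip.accesstime-only-diff", false);
  }

  boolean shouldSkipAccessTimeOnlyChange() {
    return skipAccessTimeOnlyChange;
  }
}
{code}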





> Provide option to not capture the accessTime change of a file to snapshot if 
> no other modification has been done
> 
>
> Key: HDFS-12191
> URL: https://issues.apache.org/jira/browse/HDFS-12191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-beta1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12191.001.patch
>
>
> Currently, if the accessTime of a file changed before a snapshot is taken, 
> this accessTime is captured in the snapshot, even if no other modification 
> has been made to the file.
> Because of this, more work needs to be done for such a file when calculating 
> a snapshotDiff; e.g., the metadataEquals method will be called even though no 
> modification was made (and thus nothing is recorded to the snapshotDiff). 
> This can slow snapshotDiff down quite a lot when there are many files to be 
> examined.
> This jira is to provide an option to skip capturing accessTime-only changes 
> to snapshots, so that snapshotDiff can be done faster.
> When the accessTime of a file changes, if there is another modification to 
> the file, the access time will still be captured in the snapshot.
> Sometimes we want the accessTime to be captured in a snapshot, so that when 
> restoring from the snapshot we know the accessTime of that snapshot. So this 
> new feature is optional, and is controlled by a config property.
> Worth mentioning: how accurately the accessTime is captured depends on the 
> following config, which has a default value of 1 hour, meaning a new access 
> within an hour of the previous access will not be captured.
> {code}
> public static final String  

[jira] [Updated] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11888:
--
Attachment: HDFS-11888-HDFS-7240.002.patch

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blcoks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned to the client with the creationFlag; 
> the other containers provisioned in the same batch will not have this flag. 
> The client can't assume it may use containers that have not yet been created 
> on the SCM datanodes. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once 
> the container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11888:
--
Attachment: (was: HDFS-12181-HDFS-7240.002.patch)

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blcoks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-11888-HDFS-7240.002.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned to the client with the creationFlag; 
> the other containers provisioned in the same batch will not have this flag. 
> The client can't assume it may use containers that have not yet been created 
> on the SCM datanodes. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once 
> the container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11888:
--
Attachment: HDFS-12181-HDFS-7240.002.patch

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blcoks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch, 
> HDFS-12181-HDFS-7240.002.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned to the client with the creationFlag; 
> the other containers provisioned in the same batch will not have this flag. 
> The client can't assume it may use containers that have not yet been created 
> on the SCM datanodes. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once 
> the container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137172#comment-16137172
 ] 

Xiaoyu Yao edited comment on HDFS-11888 at 8/22/17 6:37 PM:


Thanks [~anu] for the detailed review. I've fixed all the comments except the 
second one, which may not be an error.

bq. BlockContainerInfo#addUsed – Should we rename this? Since it is possible 
to delete blocks? And maybe changeUsed – so you can pass a negative arg via 
size, or just add one more function.

Added subtractUsed() as suggested. We will use it when the delete key/block 
work is integrated. 
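
A minimal sketch of the resulting shape (illustrative only; the real class 
carries more state):

{code}
// Paired mutators instead of a single changeUsed(long delta): call sites
// never pass a negative size, which keeps the intent explicit.
class BlockContainerInfo {
  private long used;

  void addUsed(long size) { used += size; }

  // Added per the review; exercised once delete key/block is integrated.
  void subtractUsed(long size) { used -= size; }

  long getUsed() { return used; }
}
{code}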

bq. BlockManagerImpl#loadAllocatedContainers LOG.warn("Container {} allocated 
by block service can't be found in SCM", containerName); Should we log this as 
an Error? I like the fact that we can continue, but maybe it is a more 
significant issue?
Not fixed: this may not be an error. Added a TODO to be fixed in the next 
patch. This could happen when the allocated container failed to be created in 
a timely manner and got cleaned up by SCM due to timeout. In that case, we 
should actually remove those entries from the block manager's allocated 
container DB. I plan to do this cleanup while working on the next patch, 
which cleans up containers in the CREATING state that time out. 

bq. BlockManagerImpl#loadAllocatedContainers LOG.warn("Failed loading open 
container, continue next..."); This error message seems misleading, as do the 
messages below where it talks about open containers. We are opening all 
Allocated containers now, right?
Fixed.

bq. BlockManagerImpl#updateContainer nit: There is a commented out if 
statement, which I think you can remove.
Fixed.

bq. BlockManagerImpl#updateContainer nit: can we please add a comment here 
noting that we are relying on the lock in the allocateBlock function to make 
sure the containers map remains consistent.
Comments added.

bq. BlockManagerImpl#allocateBlock Line 325: Can we please add a logging 
statement here so that we know why the IOException happened? Or return the ex 
to the user by making SCMException a wrapper around the IOException we got? I 
just want some way to see the original exception.
Fixed.

bq. containerManager.getContainer – Can this function ever return null? More 
of a question. We seem to use it where null is a possible return value while 
assuming the return will be valid.
Good catch. Fixed by adding null check and trace.

bq. BlockManagerImpl#deleteBlock – Do we need to update the ContainerInfo 
usedSize?
This will be handled with the key/block delete integration. The usedSize 
subtraction will be called when the async thread actually finishes reclaiming 
the space on the datanodes.

bq. ContainerInfo#setState Instead of Time.now() can we please use 
this.stateEnterTime = Time.monotonicNow(). I am just worried that in places 
with Daylight Saving you will see the clock wind backwards.
Fixed.
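
For context, a tiny sketch of why the monotonic clock is the safer choice for 
stateEnterTime (org.apache.hadoop.util.Time):

{code}
// Time.now() is wall-clock time and can jump backwards (e.g. NTP
// corrections); Time.monotonicNow() only moves forward, so durations
// computed from it are never negative.
long stateEnterTime = Time.monotonicNow();
// ... the container remains in this state for a while ...
long millisInState = Time.monotonicNow() - stateEnterTime;  // always >= 0
{code}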

bq. ContainerMapping ctor – while it is very clear to me when I read the code 
(probably because I did the code review for the state machine), it might be a 
good idea to write some comments about the state machine, especially on final 
states and transitions. Understanding this state machine is going to be vital 
to understanding container management in SCM. Even over-commenting this would 
not hurt us.
Good idea. Comments added.
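
As an illustration of the kind of comment block being asked for, a sketch 
using only states mentioned in this thread (the exact transition set is an 
assumption, not the committed javadoc):

{code}
//   ALLOCATED --(createContainer issued)--------> CREATING
//   CREATING  --(container report confirms)-----> OPEN
//   CREATING  --(creation times out)------------> DELETING (follow-up JIRA)
//   OPEN      --(client-driven deleteContainer)-> removed from SCM
{code}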

bq. ContainerMapping#allocateContainer Line 231: replace Time.now 
==> .setStateEnterTime(Time.monotonicNow())
Fixed.

bq. ContainerMapping#deleteContainer – Is it possible we need any state 
changes during this call? Does the state move to deleted? Should this call 
fire an event? More of a question than an assertion.
This API is mainly used by client-driven container deletion. As in the 
implementation of ContainerOperationClient#deleteContainer(), the client will 
always delete the container on the DN first and then delete the container on 
SCM with this API, so we don't need the two-phase DELETING process. For 
SCM-driven container deletion due to creation timeout, we will need the 
DELETING state, and I plan to handle it in a separate JIRA. 




was (Author: xyao):
Thanks [~anu] for the detailed review. 

bq. BlockContainerInfo#addUsed – Should we rename this? Since it is possible 
to delete blocks? And maybe changeUsed – so you can pass a negative arg via 
size, or just add one more function.

Added subtractUsed() as suggested. We will use it when the delete key/block 
work is integrated. 

bq. BlockManagerImpl#loadAllocatedContainers LOG.warn("Container {} allocated 
by block service can't be found in SCM", containerName); Should we log this as 
an Error? I like the fact that we can continue, but maybe it is a more 
significant issue?
This may not be an error. Added a TODO to be fixed in the next patch. This 
could happen when the allocated container failed to be created in a timely 
manner and got cleaned up by SCM due to timeout. In that case, we actually 

[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137172#comment-16137172
 ] 

Xiaoyu Yao commented on HDFS-11888:
---

Thanks [~anu] for the detailed review. 

bq. BlockContainerInfo#addUsed – Should we rename this? Since it is possible 
to delete blocks? And maybe changeUsed – so you can pass a negative arg via 
size, or just add one more function.

Added subtractUsed() as suggested. We will use it when the delete key/block 
work is integrated. 

bq. BlockManagerImpl#loadAllocatedContainers LOG.warn("Container {} allocated 
by block service can't be found in SCM", containerName); Should we log this as 
an Error? I like the fact that we can continue, but maybe it is a more 
significant issue?
This may not be an error. Added a TODO to be fixed in the next patch. This 
could happen when the allocated container failed to be created in a timely 
manner and got cleaned up by SCM due to timeout. In that case, we should 
actually remove those entries from the block manager's allocated container 
DB. I plan to do this cleanup while working on the next patch, which cleans 
up containers in the CREATING state that time out. 

bq. BlockManagerImpl#loadAllocatedContainers LOG.warn("Failed loading open 
container, continue next..."); This error message seems misleading, as do the 
messages below where it talks about open containers. We are opening all 
Allocated containers now, right?
Fixed.

bq. BlockManagerImpl#updateContainer nit: There is a commented out if 
statement, which I think you can remove.
Fixed.

bq. BlockManagerImpl#updateContainer nit: can we please add a comment here 
noting that we are relying on the lock in the allocateBlock function to make 
sure the containers map remains consistent.
Comments added.

bq. BlockManagerImpl#allocateBlock Line 325: Can we please add a logging 
statement here so that we know why the IOException happened? Or return the ex 
to the user by making SCMException a wrapper around the IOException we got? I 
just want some way to see the original exception.
Fixed.

bq. containerManager.getContainer – Can this function ever return null? More 
of a question. We seem to use it where null is a possible return value while 
assuming the return will be valid.
Good catch. Fixed by adding null check and trace.

bq. BlockManagerImpl#deleteBlock – Do we need to update the ContainerInfo 
usedSize?
This will be handled with the key/block delete integration. The usedSize 
subtraction will be called when the async thread actually finishes reclaiming 
the space on the datanodes.

bq. ContainerInfo#setState Instead of Time.now() can we please use 
this.stateEnterTime = Time.monotonicNow(). I am just worried that in places 
with Daylight Saving you will see the clock wind backwards.
Fixed.

bq. ContainerMapping ctor – while it is very clear to me when I read the code 
(probably because I did the code review for the state machine), it might be a 
good idea to write some comments about the state machine, especially on final 
states and transitions. Understanding this state machine is going to be vital 
to understanding container management in SCM. Even over-commenting this would 
not hurt us.
Good idea. Comments added.

bq. ContainerMapping#allocateContainer Line 231: replace Time.now 
==> .setStateEnterTime(Time.monotonicNow())
Fixed.

bq. ContainerMapping#deleteContainer – Is it possible we need any state 
changes during this call? Does the state move to deleted? Should this call 
fire an event? More of a question than an assertion.
This API is mainly used by client-driven container deletion. As in the 
implementation of ContainerOperationClient#deleteContainer(), the client will 
always delete the container on the DN first and then delete the container on 
SCM with this API, so we don't need the two-phase DELETING process. For 
SCM-driven container deletion due to creation timeout, we will need the 
DELETING state, and I plan to handle it in a separate JIRA. 



> Ozone: SCM: use container state machine for open containers allocated for 
> key/blcoks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned to the client with the creationFlag; 
> the other containers provisioned in the same batch will not have this flag. 
> The client can't assume it may use containers that have not yet been created 
> on the SCM datanodes. This ticket is opened to 

[jira] [Commented] (HDFS-11888) Ozone: SCM: use container state machine for open containers allocated for key/blcoks

2017-08-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137173#comment-16137173
 ] 

Chen Liang commented on HDFS-11888:
---

Thanks [~xyao] for taking care of this! Looks pretty good to me overall. Also 
got some minor comments.

# {{BlockManagerImpl#allocateBlock:353}} LOG.info to LOG.debug maybe? It seems 
every single block allocation will trigger one log line.
# SCMContainerInfo -> SCMContainerStateInfo?
# I find the "used" field of a container a bit ambiguous, because I think 
there is no guarantee that an allocated block will actually be used, i.e. the 
client may fail somehow after allocation, right? So maybe a better name would 
be "allocated"?

And one clarification question of mine:
I saw there are two types of objects, pipeline and container. What is the 
difference between these two terms here?

> Ozone: SCM: use container state machine for open containers allocated for 
> key/blcoks 
> -
>
> Key: HDFS-11888
> URL: https://issues.apache.org/jira/browse/HDFS-11888
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11888-HDFS-7240.001.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request that can't be satisfied with the current open containers in the pool. 
> However, only one container is returned to the client with the creationFlag; 
> the other containers provisioned in the same batch will not have this flag. 
> The client can't assume it may use containers that have not yet been created 
> on the SCM datanodes. This ticket is opened to fix the issue by persisting 
> the createContainerNeeded flag for the provisioned containers. The flag will 
> eventually be cleared by processing container reports from datanodes once 
> the container report handler is fully implemented on SCM. 
> For now, we will use a default batch size of 1 for 
> ozone.scm.container.provision_batch_size so that the container will be 
> created on demand upon the first block allocation into the container. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12335) Federation Metrics

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137086#comment-16137086
 ] 

Hadoop QA commented on HDFS-12335:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m  
1s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 404 unchanged - 0 fixed = 407 total (was 404) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  1s{color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceInit(Configuration)
  At StateStoreService.java:field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceInit(Configuration)
  At StateStoreService.java:[lines 163-164] |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceStop()  
At StateStoreService.java:field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceStop()  
At StateStoreService.java:[lines 193-195] |
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | 

[jira] [Updated] (HDFS-12335) Federation Metrics

2017-08-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12335:
---
Attachment: HDFS-12335-HDFS-10467-003.patch

Fixed findbugs.

> Federation Metrics
> --
>
> Key: HDFS-12335
> URL: https://issues.apache.org/jira/browse/HDFS-12335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12335-HDFS-10467-000.patch, 
> HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, 
> HDFS-12335-HDFS-10467-003.patch
>
>
> Add metrics for the Router and the State Store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137031#comment-16137031
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project: The patch generated 43 new 
+ 933 unchanged - 2 fixed = 976 total (was 935) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883075/HDFS-10899.15.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 6e0ecc2e1b22 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4ec5acc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-12335) Federation Metrics

2017-08-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12335:
---
Attachment: HDFS-12335-HDFS-10467-002.patch

Fixing unit tests.

> Federation Metrics
> --
>
> Key: HDFS-12335
> URL: https://issues.apache.org/jira/browse/HDFS-12335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12335-HDFS-10467-000.patch, 
> HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch
>
>
> Add metrics for the Router and the State Store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136983#comment-16136983
 ] 

Hadoop QA commented on HDFS-12336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883120/HDFS-12336.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 9eecef036072 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27ab5f7 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20804/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20804/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20804/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> 

[jira] [Commented] (HDFS-12337) Ozone: Concurrent RocksDB open calls fail because of "No locks available"

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136939#comment-16136939
 ] 

Hadoop QA commented on HDFS-12337:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m 
14s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 8 unchanged - 0 fixed = 11 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883117/HDFS-12337-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9f02eadd097 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20803/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20803/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20803/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20803/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-22 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136897#comment-16136897
 ] 

Wellington Chevreuil commented on HDFS-12182:
-

I believe the test failures are not related. 

TestMaintenanceState is failing while trying to read hosts files, which has 
nothing to do with these changes. Maybe a race condition in the build caused 
the file to be deleted while the test was still running? The same test passes 
locally:

{noformat}
Running org.apache.hadoop.hdfs.TestMaintenanceState
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 234.765 sec - 
in org.apache.hadoop.hdfs.TestMaintenanceState

Results :

Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
{noformat}

The same holds for 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain:

{noformat}
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.257 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0
{noformat}




> BlockManager.metaSave does not distinguish between "under replicated" and 
> "missing" blocks
> --
>
> Key: HDFS-12182
> URL: https://issues.apache.org/jira/browse/HDFS-12182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch, 
> HDFS-12182.003.patch, HDFS-12182.004.patch, HDFS-12182-branch-2.001.patch, 
> HDFS-12182-branch-2.002.patch
>
>
> Currently, the *BlockManager.metaSave* method (which is called by the 
> "-metasave" dfs CLI command) reports both "under replicated" and "missing" 
> blocks under the same metric, *Metasave: Blocks waiting for reconstruction:*, 
> as shown in the code snippet below:
> {noformat}
>synchronized (neededReconstruction) {
>   out.println("Metasave: Blocks waiting for reconstruction: "
>   + neededReconstruction.size());
>   for (Block block : neededReconstruction) {
> dumpBlockMeta(block, out);
>   }
> }
> {noformat}
> *neededReconstruction* is an instance of *LowRedundancyBlocks*, which 
> currently wraps 5 priority queues. 4 of these queues store different 
> under-replicated scenarios, while the 5th is dedicated to corrupt/missing 
> blocks. 
> Thus, the metasave report may suggest that some corrupt blocks are merely 
> under replicated. This can be misleading for admins and operators trying to 
> track missing/corrupt block issues, and/or other issues related to 
> *BlockManager* metrics.
> I would like to propose a patch with trivial changes that would report 
> corrupt blocks separately.
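
For illustration, a minimal sketch of the proposed split; the corruptBlocks 
view over the fifth queue is an assumption, not the attached patch:

{code}
// Hypothetical sketch: report the corrupt/missing queue separately from the
// four under-replicated queues instead of one combined count.
synchronized (neededReconstruction) {
  out.println("Metasave: Blocks waiting for reconstruction: "
      + (neededReconstruction.size() - corruptBlocks.size()));
  for (Block block : neededReconstruction) {
    if (!corruptBlocks.contains(block)) {
      dumpBlockMeta(block, out);
    }
  }
  out.println("Metasave: Blocks currently missing: " + corruptBlocks.size());
  for (Block block : corruptBlocks) {
    dumpBlockMeta(block, out);
  }
}
{code}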



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Status: Patch Available  (was: Open)

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-22 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12336:

Status: Patch Available  (was: Open)

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12336.001.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where the deleted 
> EZ, while still referenced by a snapshot, is not a direct child of the 
> snapshottable directory.
> Here is the code snippet proposed in HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It ignores an EZ when it is a direct child of a snapshot, because its parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so 
> it passes this check and then fails later with the *absolute path required* 
> validation error.
> I would like to work on a fix that would also cover this scenario.
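
One hedged sketch of a direction that would cover the deeper case 
(illustrative only, not the eventual patch): instead of inspecting just the 
immediate parent, walk the ancestor chain and keep the zone only if it still 
reaches the root:

{code}
// Hypothetical sketch: an EZ that no longer reaches the root exists only
// inside a snapshot, however deep it sits below the snapshottable dir.
private static boolean reachesRoot(INode inode) {
  for (INode current = inode; current != null; current = current.getParent()) {
    if (current.isRoot()) {
      return true;
    }
  }
  return false;  // some ancestor was deleted; skip resolving the full path
}
{code}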



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-22 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12336:

Attachment: HDFS-12336.001.patch

Proposing an initial patch with changes to also cover the condition where the 
EZ is not a direct child of a snapshottable dir, together with tests that 
emulate this condition.

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12336.001.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where the deleted 
> EZ, while still referenced by a snapshot, is not a direct child of the 
> snapshottable directory.
> Here is the code snippet proposed in HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It ignores an EZ when it is a direct child of a snapshot, because its parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so 
> it passes this check and then fails later with the *absolute path required* 
> validation error.
> I would like to work on a fix that would also cover this scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12337) Ozone: Concurrent RocksDB open calls fail because of "No locks available"

2017-08-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12337:
-
Attachment: HDFS-12337-HDFS-7240.001.patch

> Ozone: Concurrent RocksDB open calls fail because of "No locks available"
> -
>
> Key: HDFS-12337
> URL: https://issues.apache.org/jira/browse/HDFS-12337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12337-HDFS-7240.001.patch
>
>
> HDFS-12216 fixes the issue with the static container cache and re-using the 
> same container port on datanode restart. However, TestKeys still fails after 
> HDFS-12216 is fixed.
> The test now fails because concurrent RocksDB open calls fail: in the 
> current code, the BlockDeleting service and the Dispatcher try to open the 
> db concurrently.
> This jira will also fix the keepPort property for the Ratis container port 
> and set the reuse-address property for XceiverServerRatis correctly.
> {code}
> 2017-08-22 16:51:34,453 [BlockDeletingService#1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(64)) - opening db file 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
> 2017-08-22 16:51:34,460 [nioEventLoopGroup-9-1] INFO  logging.LoggingHandler 
> (Slf4JLogger.java:info(101)) - [id: 0x8822cd3d, /0.0.0.0:57044] RECEIVED: 
> [id: 0x61367e6f, /127.0.0.1:57173 => /127.0.0
> .1:57044]
> 2017-08-22 16:51:34,461 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(64)) - opening db file 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
> 2017-08-22 16:51:34,465 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(67)) - Failed init RocksDB, db path : 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.dbexception
>  org.rocksdb.RocksDBException: lock 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db/LOCK:
>  No locks available
> 2017-08-22 16:51:34,465 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(203))  - 
> The elapsed time of task@70a576ee for deleting blocks: 12ms.
> 2017-08-22 16:51:34,474 [nioEventLoopGroup-10-1] INFO  impl.Dispatcher 
> (ContainerUtils.java:logAndReturnError(129))  - Operation: GetKey : Trace 
> ID: 73f19131-f63b-459a-8f09-9a3db893a296 : Message: 
> 621a3b15-b9fc-4d49-a6e6-29d4c40cc91f : Result: UNABLE_TO_READ_METADATA_DB
> 2017-08-22 16:51:34,475 [Thread-382] INFO  exceptions.OzoneExceptionMapper 
> (OzoneExceptionMapper.java:toResponse(39)) ozone  
> c2a23759-c76f-49ea-b574-f0802a4e5b75/c0df3a48-f75b-4b5e-b1bd-c189ce698056/13b3d486-3d7a-49e4-bc9d-1ef63e674548
>  hdfs 73f19131-f63b-459a-8f09-9a3db893a296 - Returning exception. ex: 
> {"httpCode":500,"shortMessage":"internalServerError","resource":"hdfs","message":"621a3b15-b9fc-4d49-a6e6-29d4c40cc91f","requestID":"73f19131-f63b-459a-8f09-9a3db893a296","hostName":"hw13605.local"}
> {code}
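
A minimal sketch of one way to avoid the race (an assumption, not necessarily 
the attached patch): funnel all opens for a given path through a shared cache 
so the BlockDeletingService and the Dispatcher reuse one handle instead of 
contending on the RocksDB LOCK file:

{code}
import java.util.concurrent.ConcurrentHashMap;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical sketch: at most one open RocksDB handle per db path.
public final class SharedDbCache {
  private static final ConcurrentHashMap<String, RocksDB> CACHE =
      new ConcurrentHashMap<>();

  public static RocksDB get(String path) throws RocksDBException {
    try {
      return CACHE.computeIfAbsent(path, p -> {
        try (Options options = new Options().setCreateIfMissing(true)) {
          return RocksDB.open(options, p);
        } catch (RocksDBException e) {
          throw new IllegalStateException(e);  // unwrapped below
        }
      });
    } catch (IllegalStateException e) {
      throw (RocksDBException) e.getCause();
    }
  }
}
{code}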



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12337) Ozone: Concurrent RocksDB open calls fail because of "No locks available"

2017-08-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12337:
-
Status: Patch Available  (was: Open)

> Ozone: Concurrent RocksDB open calls fail because of "No locks available"
> -
>
> Key: HDFS-12337
> URL: https://issues.apache.org/jira/browse/HDFS-12337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12337-HDFS-7240.001.patch
>
>
> HDFS-12216 fixes the issue with the static container cache and re-using the 
> same container port on datanode restart. However, TestKeys still fails after 
> HDFS-12216 is fixed.
> The test now fails because concurrent RocksDB open calls fail: in the 
> current code, the BlockDeleting service and the Dispatcher try to open the 
> db concurrently.
> This jira will also fix the keepPort property for the Ratis container port 
> and set the reuse-address property for XceiverServerRatis correctly.
> {code}
> 2017-08-22 16:51:34,453 [BlockDeletingService#1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(64)) - opening db file 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
> 2017-08-22 16:51:34,460 [nioEventLoopGroup-9-1] INFO  logging.LoggingHandler 
> (Slf4JLogger.java:info(101)) - [id: 0x8822cd3d, /0.0.0.0:57044] RECEIVED: 
> [id: 0x61367e6f, /127.0.0.1:57173 => /127.0.0
> .1:57044]
> 2017-08-22 16:51:34,461 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(64)) - opening db file 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
> 2017-08-22 16:51:34,465 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
> (RocksDBStore.java:(67)) - Failed init RocksDB, db path : 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.dbexception
>  org.rocksdb.RocksDBException: lock 
> /Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db/LOCK:
>  No locks available
> 2017-08-22 16:51:34,465 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(203))  - 
> The elapsed time of task@70a576ee for deleting blocks: 12ms.
> 2017-08-22 16:51:34,474 [nioEventLoopGroup-10-1] INFO  impl.Dispatcher 
> (ContainerUtils.java:logAndReturnError(129))  - Operation: GetKey : Trace 
> ID: 73f19131-f63b-459a-8f09-9a3db893a296 : Message: 
> 621a3b15-b9fc-4d49-a6e6-29d4c40cc91f : Result: UNABLE_TO_READ_METADATA_DB
> 2017-08-22 16:51:34,475 [Thread-382] INFO  exceptions.OzoneExceptionMapper 
> (OzoneExceptionMapper.java:toResponse(39)) ozone  
> c2a23759-c76f-49ea-b574-f0802a4e5b75/c0df3a48-f75b-4b5e-b1bd-c189ce698056/13b3d486-3d7a-49e4-bc9d-1ef63e674548
>  hdfs 73f19131-f63b-459a-8f09-9a3db893a296 - Returning exception. ex: 
> {"httpCode":500,"shortMessage":"internalServerError","resource":"hdfs","message":"621a3b15-b9fc-4d49-a6e6-29d4c40cc91f","requestID":"73f19131-f63b-459a-8f09-9a3db893a296","hostName":"hw13605.local"}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136758#comment-16136758
 ] 

Hadoop QA commented on HDFS-12225:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10285 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12225 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883099/HDFS-12225-HDFS-10285-08.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c18e20f2bf5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / aff40b2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20802/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20802/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20802/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20802/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Created] (HDFS-12337) Ozone: Concurrent RocksDB open calls fail because of "No locks available"

2017-08-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12337:


 Summary: Ozone: Concurrent RocksDB open calls fail because of "No 
locks available"
 Key: HDFS-12337
 URL: https://issues.apache.org/jira/browse/HDFS-12337
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


HDFS-12216 fixes the issue with the static container cache and re-using the 
same container port on datanode restart. However, TestKeys still fails after 
HDFS-12216 is fixed.

The test now fails because concurrent RocksDB open calls fail: in the current 
code, the BlockDeletingService and the Dispatcher try to open the same db 
concurrently.

This jira will also fix the keepPort property for the Ratis container port and 
set the reuse-address property for XceiverServerRatis correctly.

{code}
2017-08-22 16:51:34,453 [BlockDeletingService#1] INFO  utils.RocksDBStore 
(RocksDBStore.java:(64)) - opening db file 
/Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
2017-08-22 16:51:34,460 [nioEventLoopGroup-9-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0x8822cd3d, /0.0.0.0:57044] RECEIVED: [id: 
0x61367e6f, /127.0.0.1:57173 => /127.0.0
.1:57044]
2017-08-22 16:51:34,461 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
(RocksDBStore.java:(64)) - opening db file 
/Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db
2017-08-22 16:51:34,465 [nioEventLoopGroup-10-1] INFO  utils.RocksDBStore 
(RocksDBStore.java:(67)) - Failed init RocksDB, db path : 
/Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.dbexception
 org.rocksdb.RocksDBException: lock 
/Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn0_data0/containers/621a3b15-b9fc-4d49-a6e6-29d4c40cc91f/metadata/container.db/LOCK:
 No locks available
2017-08-22 16:51:34,465 [BlockDeletingService#1] INFO  
background.BlockDeletingService (BlockDeletingService.java:call(203))  - 
The elapsed time of task@70a576ee for deleting blocks: 12ms.
2017-08-22 16:51:34,474 [nioEventLoopGroup-10-1] INFO  impl.Dispatcher 
(ContainerUtils.java:logAndReturnError(129))  - Operation: GetKey : Trace 
ID: 73f19131-f63b-459a-8f09-9a3db893a296 : Message: 
621a3b15-b9fc-4d49-a6e6-29d4c40cc91f : Result: UNABLE_TO_READ_METADATA_DB
2017-08-22 16:51:34,475 [Thread-382] INFO  exceptions.OzoneExceptionMapper 
(OzoneExceptionMapper.java:toResponse(39)) ozone  
c2a23759-c76f-49ea-b574-f0802a4e5b75/c0df3a48-f75b-4b5e-b1bd-c189ce698056/13b3d486-3d7a-49e4-bc9d-1ef63e674548
 hdfs 73f19131-f63b-459a-8f09-9a3db893a296 - Returning exception. ex: 
{"httpCode":500,"shortMessage":"internalServerError","resource":"hdfs","message":"621a3b15-b9fc-4d49-a6e6-29d4c40cc91f","requestID":"73f19131-f63b-459a-8f09-9a3db893a296","hostName":"hw13605.local"}
{code}
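
For illustration, a minimal sketch of one way to serialize these opens is to 
share a single handle per DB path behind a concurrent map, so the 
BlockDeletingService and the Dispatcher never race on {{RocksDB.open()}}. The 
class and helper names below are assumptions for the sketch, not the actual 
Ozone cache:

{code}
// Hypothetical sketch only: one shared RocksDB handle per DB path.
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class SharedDBHandles {
  private static final ConcurrentHashMap<String, RocksDB> HANDLES =
      new ConcurrentHashMap<>();

  /** Open the DB at dbPath once; concurrent callers get the same handle. */
  public static RocksDB get(String dbPath, Options options) throws IOException {
    try {
      return HANDLES.computeIfAbsent(dbPath, p -> {
        try {
          return RocksDB.open(options, p);
        } catch (RocksDBException e) {
          throw new RuntimeException(e); // unwrapped below
        }
      });
    } catch (RuntimeException e) {
      throw new IOException("Failed to open RocksDB at " + dbPath, e.getCause());
    }
  }
}
{code}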



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-22 Thread Wellington Chevreuil (JIRA)
Wellington Chevreuil created HDFS-12336:
---

 Summary: Listing encryption zones still fails when deleted EZ is 
not a direct child of snapshottable directory
 Key: HDFS-12336
 URL: https://issues.apache.org/jira/browse/HDFS-12336
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha4
Reporter: Wellington Chevreuil
Assignee: Wellington Chevreuil
Priority: Minor


The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
that is still kept in a snapshot is not a direct child of the snapshottable 
directory.

Here is the code snippet proposed in HDFS-11197 that avoids the error 
reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
snapshot:

{noformat}
  INode lastINode = null;
  if (inode.getParent() != null || inode.isRoot()) {
INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
lastINode = iip.getLastINode();
  }
  if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
continue;
  }
{noformat} 

It ignores an EZ that is a direct child of a snapshot, because such an EZ's 
parent inode is null and it is not the root inode. However, if the EZ is not 
directly under the snapshottable directory, its parent is not null, so it 
passes this check and then fails later with an *absolute path required* 
validation error.

I would like to work on a fix that would also cover this scenario.
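
For illustration only, a hedged sketch of one possible direction (not the 
eventual patch) is to decide reachability by walking the whole ancestor chain 
instead of checking only the immediate parent:

{noformat}
  // Hypothetical sketch, not the committed fix: an EZ inode that cannot
  // reach the root through parent links exists only inside a snapshot,
  // no matter how deep under the snapshottable directory it sits.
  INode ancestor = inode;
  while (ancestor != null && !ancestor.isRoot()) {
    ancestor = ancestor.getParent();
  }
  if (ancestor == null) {
    // Skip: resolving this EZ's path would fail with
    // "absolute path required".
    continue;
  }
  INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
  INode lastINode = iip.getLastINode();
  if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
    continue;
  }
{noformat}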



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136647#comment-16136647
 ] 

Hadoop QA commented on HDFS-11968:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 12 new + 14 unchanged - 0 fixed = 26 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11968 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883086/HDFS-11968.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e53bd04faafe 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d5ff57a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20801/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20801/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20801/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20801/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: 

[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-22 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136641#comment-16136641
 ] 

Surendra Singh Lilhore commented on HDFS-12225:
---

Attached the updated patch. Please review.

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch, HDFS-12225-HDFS-10285-03.patch, 
> HDFS-12225-HDFS-10285-04.patch, HDFS-12225-HDFS-10285-05.patch, 
> HDFS-12225-HDFS-10285-06.patch, HDFS-12225-HDFS-10285-07.patch, 
> HDFS-12225-HDFS-10285-08.patch
>
>
> We discussed optimizing the number of extended attributes and agreed to file 
> a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For context, the comments below are copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote: I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from the FsImage, the InodeMap isn't 
> built up, so we don't know the sub-inodes of a given inode; in the end, we 
> cannot add these inodes to the movement queue in FSDirectory#addToInodeMap. 
> Any thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. OK, for simplicity we 
> can add it for all inodes now. To handle this 100%, we may need intermittent 
> processing: first add them to some intermittent list while loading the 
> fsImage, then, once it is fully loaded and active services are starting, 
> process that list and do the required work. But that may add some additional 
> complexity. Let's do it for all file inodes now and revisit later if it 
> really creates issues. How about you raise a JIRA for it and think about 
> optimizing separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in an HDFS-10285 merge-time review comment: HDFS-10899 
> also stores the cursor of the iterator in the EZ root xattr to track 
> progress and handle restarts. I wonder if we can do something similar here 
> to avoid having an xattr per file being moved.
> {quote}
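
As a hedged illustration of the cursor idea in that last comment (the xattr 
key, path, and variables below are made up for the sketch; this is not SPS 
code):

{code}
// Hypothetical sketch: persist one iteration cursor on the directory root
// instead of one tracking xattr per file being moved.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path dirRoot = new Path("/warehouse");   // assumed directory being satisfied
long lastProcessedId = 12345L;           // assumed position of the last scan

// After each batch, record how far the iterator got -- a single xattr.
byte[] cursor = Long.toString(lastProcessedId).getBytes(StandardCharsets.UTF_8);
fs.setXAttr(dirRoot, "user.sps.cursor", cursor);

// On restart, resume from the persisted cursor instead of rescanning.
byte[] saved = fs.getXAttr(dirRoot, "user.sps.cursor");
long resumeFrom = Long.parseLong(new String(saved, StandardCharsets.UTF_8));
{code}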



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-22 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12225:
--
Attachment: HDFS-12225-HDFS-10285-08.patch

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch, HDFS-12225-HDFS-10285-03.patch, 
> HDFS-12225-HDFS-10285-04.patch, HDFS-12225-HDFS-10285-05.patch, 
> HDFS-12225-HDFS-10285-06.patch, HDFS-12225-HDFS-10285-07.patch, 
> HDFS-12225-HDFS-10285-08.patch
>
>
> We discussed optimizing the number of extended attributes and agreed to file 
> a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For context, the comments below are copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote: I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from the FsImage, the InodeMap isn't 
> built up, so we don't know the sub-inodes of a given inode; in the end, we 
> cannot add these inodes to the movement queue in FSDirectory#addToInodeMap. 
> Any thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. OK, for simplicity we 
> can add it for all inodes now. To handle this 100%, we may need intermittent 
> processing: first add them to some intermittent list while loading the 
> fsImage, then, once it is fully loaded and active services are starting, 
> process that list and do the required work. But that may add some additional 
> complexity. Let's do it for all file inodes now and revisit later if it 
> really creates issues. How about you raise a JIRA for it and think about 
> optimizing separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in an HDFS-10285 merge-time review comment: HDFS-10899 
> also stores the cursor of the iterator in the EZ root xattr to track 
> progress and handle restarts. I wonder if we can do something similar here 
> to avoid having an xattr per file being moved.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136628#comment-16136628
 ] 

Hadoop QA commented on HDFS-12283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m  
4s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 39s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883078/HDFS-12283-HDFS-7240.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 7bd2f5657c1d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136620#comment-16136620
 ] 

Hadoop QA commented on HDFS-12282:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 18m 
18s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 430 unchanged - 
2 fixed = 432 total (was 432) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.ozone.scm.node.TestQueryNode |
| Timed out junit tests | org.apache.hadoop.ozone.TestMiniOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12282 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883080/HDFS-12282-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 89f6085a74f9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20799/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20799/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20799/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20799/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-12216) Ozone: TestKeys is failing consistently

2017-08-22 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136610#comment-16136610
 ] 

Mukul Kumar Singh commented on HDFS-12216:
--

I looked into the recent failures after the patch, and it seems that the 
request to get the key is sent even before the XceiverServer on the datanode 
is up.

{code}
HW13605:ozone_review msingh$ cat 
/Users/msingh/code/work/apache/cblock/ozone_review/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/org.apache.hadoop.ozone.web.client.TestKeys-output.txt
 | egrep "51619|Exception getting XceiverClient"
2017-08-22 15:29:53,204 [BP-1861535517-10.200.5.245-1503395989416 heartbeating 
to localhost/127.0.0.1:51613] INFO  server.XceiverServer 
(XceiverServer.java:(75))  - Found a free port for the server : 51619
2017-08-22 15:29:53,966 [nioEventLoopGroup-4-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xd118274a] BIND(0.0.0.0/0.0.0.0:51619)
2017-08-22 15:29:53,966 [nioEventLoopGroup-4-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xd118274a, /0.0.0.0:51619] ACTIVE
2017-08-22 15:29:58,240 [nioEventLoopGroup-4-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xd118274a, /0.0.0.0:51619] RECEIVED: [id: 
0xccf8c3db, /127.0.0.1:51633 => /127.0.0.1:51619]
2017-08-22 15:29:59,553 [nioEventLoopGroup-4-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xd118274a, /0.0.0.0:51619] UNREGISTERED
2017-08-22 15:30:00,042 [Thread-378] INFO  exceptions.OzoneExceptionMapper 
(OzoneExceptionMapper.java:toResponse(39)) ozone  
ea73188d-e1f0-43a3-8d0e-4a6b13ffba95/af614819-4b18-4f49-96d8-8e8117ff7d98/8a1f6102-bf6c-4b0d-a124-0803c9950b2b
 hdfs b3ad0f07-3daa-406b-bf28-438efbd772f6 - Returning exception. ex: 
{"httpCode":500,"shortMessage":"internalServerError","resource":"hdfs","message":"Exception
 getting 
XceiverClient.","requestID":"b3ad0f07-3daa-406b-bf28-438efbd772f6","hostName":"hw13605.local"}
2017-08-22 15:30:00,047 [nioEventLoopGroup-10-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xbb9534b5] BIND(0.0.0.0/0.0.0.0:51619)
2017-08-22 15:30:00,047 [nioEventLoopGroup-10-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xbb9534b5, /0.0.0.0:51619] ACTIVE
2017-08-22 15:30:00,071 [nioEventLoopGroup-10-1] INFO  logging.LoggingHandler 
(Slf4JLogger.java:info(101)) - [id: 0xbb9534b5, /0.0.0.0:51619] UNREGISTERED
{code}
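
One hedged way to close this window in the test (a sketch only, assuming the 
test can learn the server port; not the committed fix) is to wait until the 
XceiverServer port actually accepts connections before sending the request:

{code}
// Hypothetical test-side guard: poll the XceiverServer port until it
// accepts a TCP connection, then proceed with the getKey call.
import java.io.IOException;
import java.net.Socket;
import org.apache.hadoop.test.GenericTestUtils;

final int port = 51619;  // assumed: the free port logged by XceiverServer
GenericTestUtils.waitFor(() -> {
  try (Socket s = new Socket("127.0.0.1", port)) {
    return true;   // server is up and accepting connections
  } catch (IOException e) {
    return false;  // not ready yet; waitFor retries
  }
}, 100, 30000);    // check every 100 ms, give up after 30 s
{code}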

> Ozone: TestKeys is failing consistently
> ---
>
> Key: HDFS-12216
> URL: https://issues.apache.org/jira/browse/HDFS-12216
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12216-HDFS-7240.001.patch, 
> HDFS-12216-HDFS-7240.002.patch, HDFS-12216-HDFS-7240.003.patch, 
> HDFS-12216-HDFS-7240.004.patch, HDFS-12216-HDFS-7240.005.patch, 
> HDFS-12216-HDFS-7240.006.patch
>
>
> TestKeys and TestKeysRatis are failing consistently, as noted in the test 
> logs for HDFS-12183.
> TestKeysRatis is failing because of the following error:
> {code}
> 2017-07-28 23:11:28,783 [StateMachineUpdater-127.0.0.1:55793] ERROR 
> impl.StateMachineUpdater (ExitUtils.java:terminate(80)) - Terminating with 
> exit status 2: StateMachineUpdater-127.0.0.1:55793: the StateMachineUpdater 
> hits Throwable
> org.iq80.leveldb.DBException: Closed
>   at org.fusesource.leveldbjni.internal.JniDB.put(JniDB.java:123)
>   at org.apache.hadoop.utils.LevelDBStore.put(LevelDBStore.java:98)
>   at 
> org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.putKey(KeyManagerImpl.java:90)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handlePutKey(Dispatcher.java:547)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.keyProcessHandler(Dispatcher.java:206)
>   at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:110)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
>   at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> whereas TestKeys is failing because of:
> {code}
> 2017-07-28 23:14:20,889 [Thread-486] INFO  scm.XceiverClientManager 
> (XceiverClientManager.java:getClient(158)) - exception 
> java.util.concurrent.ExecutionException: java.net.ConnectException: 
> Connection 

[jira] [Commented] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136595#comment-16136595
 ] 

Hadoop QA commented on HDFS-12280:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 15m 
29s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12280 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883077/HDFS-12280-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6b4baf01577c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20798/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20798/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20798/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20798/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 

[jira] [Commented] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-22 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136579#comment-16136579
 ] 

Yiqun Lin commented on HDFS-12280:
--

While debugging the {{TestKeys}} failure, I found the error was thrown from 
the following code:
{code}
try {
  db = RocksDB.open(dbOptions, dbLocation.getAbsolutePath());
} catch (RocksDBException e) {
  throw new IOException("Failed init RocksDB, db path : "<===
  + dbFile.getAbsolutePath(), e);
}
{code}
Hope this makes sense to you.

> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 
>
> Key: HDFS-12280
> URL: https://issues.apache.org/jira/browse/HDFS-12280
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Lokesh Jain
> Attachments: HDFS-12280-HDFS-7240.001.patch, 
> HDFS-12280-HDFS-7240.002.patch
>
>
> {{org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer#testCreateOzoneContainer}}
>  fails with the error below:
> {code}
> Running org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 64.507 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> testCreateOzoneContainer(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 64.44 sec  <<< ERROR!
> java.io.IOException: Failed to start MiniOzoneCluster
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:370)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster.waitOzoneReady(MiniOzoneCluster.java:239)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:422)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testCreateOzoneContainer(TestOzoneContainer.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-08-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11968:
-
Status: Patch Available  (was: Open)

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
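
A hedged sketch of the resolution step the description calls for (an 
illustration only, not the attached patch; types come from org.apache.hadoop.fs 
and org.apache.hadoop.hdfs) might look like:

{code}
// Hypothetical sketch: resolve the user path through ViewFS mount links
// first, then require that the resolved path lands on an HDFS namespace.
static DistributedFileSystem getDFS(Configuration conf, Path userPath)
    throws IOException {
  FileSystem fs = FileSystem.get(conf);
  Path resolved = fs.resolvePath(userPath);     // follows ViewFS mounts
  FileSystem target = resolved.getFileSystem(conf);
  if (!(target instanceof DistributedFileSystem)) {
    throw new IllegalArgumentException("Path " + resolved +
        " does not resolve to an HDFS file system");
  }
  return (DistributedFileSystem) target;
}
{code}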



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-08-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11968:
-
Attachment: HDFS-11968.001.patch

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136551#comment-16136551
 ] 

Hadoop QA commented on HDFS-12327:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 18m 
59s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 53s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883068/HDFS-12327-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a12726bb3450 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20797/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12282:
---
Attachment: HDFS-12282-HDFS-7240.003.patch

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282-HDFS-7240.003.patch, HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions (a sketch of the datanode side follows the list), including:
> # SCM sends a block deletion message via HB to the datanode
> # the datanode changes the block state to deleting when processing the HB 
> response
> # the datanode sends deletion ACKs back to SCM
> # SCM handles the ACKs and removes the blocks from its DB
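
A hedged pseudo-flow of the datanode side of steps 2 and 3 (every type and 
method name below is made up for illustration; the real protos and handlers 
may differ):

{code}
// Hypothetical sketch: datanode processing of an SCM heartbeat response.
void onHeartbeatResponse(HeartbeatResponse response) {           // assumed type
  for (DeleteBlocksCommand cmd : response.getDeleteCommands()) { // assumed
    for (String blockId : cmd.getBlocks()) {
      blockStore.markDeleting(blockId); // step 2: flip state to "deleting"
    }
    ackQueue.add(cmd.getTxId());        // step 3: ACK on the next heartbeat
  }
}
{code}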



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-22 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283-HDFS-7240.007.patch

Uploaded the v7 patch.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch, 
> HDFS-12283-HDFS-7240.004.patch, HDFS-12283-HDFS-7240.005.patch, 
> HDFS-12283-HDFS-7240.006.patch, HDFS-12283-HDFS-7240.007.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks under deletion. It maintains info about the under-deletion container 
> blocks notified by KSM, and the state of how each is processed. We can use 
> RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to the 
> datanode; it represents the "state" of the transaction and is in the range 
> \[-1, 5\]: -1 means the transaction eventually failed after some retries, 
> and 5 is the max number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.
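
As a hedged sketch of what such an interface might look like (the method 
names and the DeletedBlocksTransaction record are assumptions for 
illustration, not the committed API):

{code}
// Hypothetical sketch of the DeletedBlockLog interface described above;
// DeletedBlocksTransaction is an assumed record carrying the four columns.
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public interface DeletedBlockLog extends Closeable {
  /** Append one transaction: a new TxID for one container and its blocks. */
  void addTransaction(String containerName, List<String> blocks)
      throws IOException;
  /** Fetch up to count transactions that are still eligible for retry. */
  List<DeletedBlocksTransaction> getTransactions(int count) throws IOException;
  /** Bump ProcessedCount; past the max retries it is set to -1 (failed). */
  void incrementCount(List<Long> txIDs) throws IOException;
  /** Remove transactions once datanodes have ACKed the deletions. */
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}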



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136510#comment-16136510
 ] 

Hadoop QA commented on HDFS-12327:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 15m 
55s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 41s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883067/HDFS-12327-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f7804f55847e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / d0bd0f6 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20796/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs 

[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136494#comment-16136494
 ] 

Hadoop QA commented on HDFS-12282:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m 
43s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 430 unchanged - 
2 fixed = 432 total (was 432) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Null pointer dereference of data in 
org.apache.hadoop.ozone.container.common.helpers.ChunkUtils.getChunkFile(ContainerData,
 ChunkInfo)  Dereferenced at ChunkUtils.java:in 
org.apache.hadoop.ozone.container.common.helpers.ChunkUtils.getChunkFile(ContainerData,
 ChunkInfo)  Dereferenced at ChunkUtils.java:[line 142] |
|  |  Load of known null value in 
org.apache.hadoop.ozone.container.common.helpers.ChunkUtils.getChunkFile(ContainerData,
 ChunkInfo)  At ChunkUtils.java:in 
org.apache.hadoop.ozone.container.common.helpers.ChunkUtils.getChunkFile(ContainerData,
 ChunkInfo)  At ChunkUtils.java:[line 142] |
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12282 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883066/HDFS-12282-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-22 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136492#comment-16136492
 ] 

Mukul Kumar Singh commented on HDFS-12280:
--

[~anu], TestKeys is still failing occasionally in other Jenkins runs as well. I 
am taking a look at the failures. 

> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 
>
> Key: HDFS-12280
> URL: https://issues.apache.org/jira/browse/HDFS-12280
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Lokesh Jain
> Attachments: HDFS-12280-HDFS-7240.001.patch, 
> HDFS-12280-HDFS-7240.002.patch
>
>
> {{org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer#testCreateOzoneContainer}}
>  fails with the below error
> {code}
> Running org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 64.507 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> testCreateOzoneContainer(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 64.44 sec  <<< ERROR!
> java.io.IOException: Failed to start MiniOzoneCluster
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:370)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster.waitOzoneReady(MiniOzoneCluster.java:239)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:422)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testCreateOzoneContainer(TestOzoneContainer.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12280) Ozone: TestOzoneContainer#testCreateOzoneContainer fails

2017-08-22 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-12280:
---
Attachment: HDFS-12280-HDFS-7240.002.patch

I have included the changes suggested by [~msingh]. 
TestKeys.testPutAndGetKeyWithDnRestart passes locally, and its failure here is 
not related to this patch.

> Ozone: TestOzoneContainer#testCreateOzoneContainer fails
> 
>
> Key: HDFS-12280
> URL: https://issues.apache.org/jira/browse/HDFS-12280
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Lokesh Jain
> Attachments: HDFS-12280-HDFS-7240.001.patch, 
> HDFS-12280-HDFS-7240.002.patch
>
>
> {{org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer#testCreateOzoneContainer}}
>  fails with the below error
> {code}
> Running org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 64.507 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer
> testCreateOzoneContainer(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 64.44 sec  <<< ERROR!
> java.io.IOException: Failed to start MiniOzoneCluster
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:370)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster.waitOzoneReady(MiniOzoneCluster.java:239)
>   at 
> org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:422)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testCreateOzoneContainer(TestOzoneContainer.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136455#comment-16136455
 ] 

Xiao Chen edited comment on HDFS-10899 at 8/22/17 8:04 AM:
---

Thanks a lot for the reviews [~jojochuang], good comments!
Replying one by one, and attaching a patch at the end. Comments not mentioned 
here are all addressed.

{quote}
reencryptionHandler#reencryptEncryptionZone()
zoneId is obtained when holding FSDirectory read lock, release the lock, and 
then acquire FSDirectory read lock again.
This assertion is only correct if there will be only one ReencryptionHandler 
running.
{quote}
There is only one {{ReencryptionHandler}}. Added text to the javadoc.
If the zone referred to by inodeid is changed (e.g. deleted/renamed) while the 
lock is not held, {{checkZoneReady}} will throw. A similar test case would be 
{{TestReencryption#testZoneDeleteDuringReencrypt}}.

bq. ReencryptionStatus#updateZoneStatus() should check that zoneNode is an 
encryption zone.
For the 2 callers, {{FSD#addEZ}} is where the zoneId is added, so always true. 
{{FSDirXAttrOp#unprotectedSetXAttrs}} is happening within the EZXattr, so also 
always true. (There's no 'disable encryption' command, so zone node can only be 
deleted/renamed)

bq. Why is currentBatch a TreeMap?
Good question. Initially this was done to keep the elements' ordering, using 
the path as the key. Now that it's changed to be inode-id based, we can just 
use a list. (Sorry, didn't rebase the inodeid patch here on 14...)

bq. Does ZoneReencryptionStatus#getLastProcessedFile return the relative path? 
or file name only? or absolute path?
Absolute path - so we can restore in case of fail over.

bq. It Allocates a 2000-element map, copy it over, and then clear the map. That 
looks suboptimal. Would it be feasible to wrap TreeMap and make a method that 
simply assigns the TreeMap reference to another currentBatch?
Agreed; the problem is that {{currentBatch}} here is passed in from the very 
outside of the call stack.
Made it a member variable of {{ReencryptionHandler}} to address this. It's 
still safe with the single-threaded handler model, but perhaps harder to read. 
Please share your thoughts.

bq. EDEKReencryptCallable ... retry ... if reencryptEdeks() returns numFailures 
> 0, call() should not return a new ReencryptionTask object.
Initially, talking with [~andrew.wang], we wanted to always retry things, so 
the admin can just fix the error and continue (or cancel).
But since KMSCP already has the retry logic added by HADOOP-14521, and to trade 
off for maintainability, we do not 'double retry' here and only let KMSCP's 
retry policy handle failures.
When running -listReencryptionStatus, if numOfFailures > 0, a message is 
printed asking the admin to examine the failures and re-submit.
Implementation-wise, we still depend on the ReencryptionTask object to pass the 
failures to the updater, so we need that object. The updater handles failed 
tasks differently.



was (Author: xiaochen):
Thanks a lot for the reviews [~jojochuang], good comments!
Replying one by one, and attaching a patch at the end. Comments not mentioned 
here are all addressed.

{quote}
reencryptionHandler#reencryptEncryptionZone()
zoneId is obtained when holding FSDirectory read lock, release the lock, and 
then acquire FSDirectory read lock again.
This assertion is only correct if there will be only one ReencryptionHandler 
running.
{quote}
There is only one {{ReencryptionHandler}}. Added text to the javadoc.
If the zone referred to by inodeid is changed (e.g. deleted/renamed) while the 
lock is not held, {{checkZoneReady}} will throw. A similar test case would be 
{{TestReencryption#testZoneDeleteDuringReencrypt}}.

bq. ReencryptionStatus#updateZoneStatus() should check that zoneNode is an 
encryption zone.
For the 2 callers, {{FSD#addEZ}} is where the zoneId is added, so always true. 
{{FSDirXAttrOp#unprotectedSetXAttrs}} is happening within the EZXattr, so also 
always true. (There's no 'disable encryption' command, so zone node can only be 
deleted/renamed)

bq. Why is currentBatch a TreeMap?
Good question. Initially this was done to keep the elements' ordering, using 
the path as the key. Now that it's changed to be inode-id based, we can just 
use a list. (Sorry, didn't rebase the inodeid patch here on 14...)

bq. Does ZoneReencryptionStatus#getLastProcessedFile return the relative path? 
or file name only? or absolute path?
Absolute path - so we can restore in case of fail over.

bq. It Allocates a 2000-element map, copy it over, and then clear the map. That 
looks suboptimal. Would it be feasible to wrap TreeMap and make a method that 
simply assigns the TreeMap reference to another currentBatch?
Agreed; the problem is that {{currentBatch}} here is passed in from the very 
outside of the call stack.
Made it a member variable of {{ReencryptionHandler}} to address this. It's 
still safe with the single-threaded handler model, 

[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-08-22 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136484#comment-16136484
 ] 

Mukul Kumar Singh commented on HDFS-12292:
--

Thanks for the updated patch [~erofeev].

1) DFSAdmin.java:126: should the condition inside the if statement be changed 
to ViewFileSystem? As written, this line contradicts the if statement on line 
130.

2) PathData.java:321 and 325: I feel that line 325 will always be true when 
line 321 is true. Can one of these conditions be avoided?

3) PathData.java:325: should there be a check for the filesystem scheme here?

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292-004.patch, HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is federated, 
> otherwise, the exception is fired:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This creates inconvenience and an inability to rely on fs.defaultFS, and 
> forces users to create client-side mappings for management scripts.
> Implementation:
> The PathData that is passed to commands should be resolved to its actual 
> FileSystem (a sketch follows below)
> Result:
> ViewFS paths will be resolved to the actual HDFS file system
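
A minimal sketch of that resolution using the standard 
{{FileSystem#resolvePath}} API; how the actual patch wires this into PathData 
may differ:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolveViewFsPath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A viewfs:// path; the mount table maps it to a concrete hdfs:// URI.
    Path viewPath = new Path("viewfs://vfs-root/user/uname");
    FileSystem viewFs = viewPath.getFileSystem(conf);

    // resolvePath() follows the ViewFS mount table and returns a fully
    // qualified path on the underlying file system, e.g.
    // hdfs://users-fs/user/uname.
    Path resolved = viewFs.resolvePath(viewPath);

    // Commands like setQuota can then be run against the real HDFS instance.
    FileSystem actualFs = resolved.getFileSystem(conf);
    System.out.println(resolved + " -> " + actualFs.getUri());
  }
}
{code}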



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.15.patch

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136480#comment-16136480
 ] 

Xiao Chen commented on HDFS-10899:
--

Patch 15 uploaded, still based on PATCH-14705.

Addressed all of [~jojochuang]'s comments above (I think), as well as some 
issues found during internal testing:
- fixed {{listReencryptionStatus}} to track by inode, and also filter out 
snapshots
- added fault injectors and related unit tests for failure handling.
- throttling improvements

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.15.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136469#comment-16136469
 ] 

Xiao Chen commented on HDFS-10899:
--

bq. ReencryptionUpdater#throttle(): updater would keep contending for namenode 
lock
{{batchService.take();}} is a blocking call, so it just hangs there if there is 
nothing to do, and the NN lock is untouched.
1.0 means no throttling, so it would be tough on locking - that's because this 
is intended to be run in a maintenance window. That is the same reason why 
renames are disabled during this time.
The throttler also considers how many tasks are pending, to prevent piling up 
tasks on the NN heap.

bq. ... ZoneSubmissionTracker#tasks If there will always be just one 
ReencryptionHandler, then this is okay.
Good analysis. Yes, one handler.

bq. the edit log is written only when all tasks are successful.
That {{updateReencryptionProgress}} call is to update the zone node with the 
progress. The actual file xattrs (aka. new edeks) are logged during the 
processing of each batch, via {{FSDirEncryptionZoneOp.setFileEncryptionInfo}}

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt 
> edek design doc.pdf, Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136462#comment-16136462
 ] 

Xiao Chen commented on HDFS-10899:
--

bq. It looks like ReencryptionTask.batch does not need to use file name as the 
key; instead, it can use INode id as key, and this way it reduces the overhead 
to translate inode to file name back and forth.
Yup, I think that's in line with Daryn's comment.

bq. ReencryptionUpdater#processOneTask ... better part of ReencryptionTask 
instead of ReencryptionUpdater...
The intention was that the handler produces callables and the updater consumes 
them. {{ReencryptionTask}} is a struct holding information about the callable, 
so I think processing it on the updater makes sense.

bq. Does task.numFilesUpdated equal task.batch.size()?
It may or may not. If something caused a file to be skipped (the conditions 
above that lead to a {{continue}}), then they are not equal.

bq. This variable name is a little cryptic: zst
Renamed to 'tracker'; better name suggestions welcome.

bq. There are a few TODOs
Failure handling wasn't done at the time; patch 15 does it, with added unit 
tests that utilize the fault injector.



> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt 
> edek design doc.pdf, Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136458#comment-16136458
 ] 

Weiwei Yang commented on HDFS-12282:


Hi [~anu]

{{SCMNodeManager#sendHeartbeat}} is the HB RPC call used by the DN to send HBs, 
so it is pretty lightweight. {{SCMNodeManager#handleHeartbeat}} already runs in 
a worker thread that processes the HB queue at a certain interval in the 
background, so even with heavy I/O it won't affect HB performance. Please 
correct me if I am wrong, thank you.

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136455#comment-16136455
 ] 

Xiao Chen commented on HDFS-10899:
--

Thanks a lot for the reviews [~jojochuang], good comments!
Replying one by one, and attaching a patch at the end. Comments not mentioned 
here are all addressed.

{quote}
reencryptionHandler#reencryptEncryptionZone()
zoneId is obtained when holding FSDirectory read lock, release the lock, and 
then acquire FSDirectory read lock again.
This assertion is only correct if there will be only one ReencryptionHandler 
running.
{quote}
There is only one {{ReencryptionHandler}}. Added text to the javadoc.
If the zone referred to by inodeid is changed (e.g. deleted/renamed) while the 
lock is not held, {{checkZoneReady}} will throw. A similar test case would be 
{{TestReencryption#testZoneDeleteDuringReencrypt}}.

bq. ReencryptionStatus#updateZoneStatus() should check that zoneNode is an 
encryption zone.
For the 2 callers, {{FSD#addEZ}} is where the zoneId is added, so always true. 
{{FSDirXAttrOp#unprotectedSetXAttrs}} is happening within the EZXattr, so also 
always true. (There's no 'disable encryption' command, so zone node can only be 
deleted/renamed)

bq. Why is currentBatch a TreeMap?
Good question. Initially this was done to keep the elements' ordering, using 
the path as the key. Now that it's changed to be inode-id based, we can just 
use a list. (Sorry, didn't rebase the inodeid patch here on 14...)

bq. Does ZoneReencryptionStatus#getLastProcessedFile return the relative path? 
or file name only? or absolute path?
Absolute path - so we can restore in case of fail over.

bq. It Allocates a 2000-element map, copy it over, and then clear the map. That 
looks suboptimal. Would it be feasible to wrap TreeMap and make a method that 
simply assigns the TreeMap reference to another currentBatch?
Agreed; the problem is that {{currentBatch}} here is passed in from the very 
outside of the call stack.
Made it a member variable of {{ReencryptionHandler}} to address this. It's 
still safe with the single-threaded handler model, but perhaps harder to read. 
Please share your thoughts.

bq. EDEKReencryptCallable ... retry ... if reencryptEdeks() returns numFailures 
> 0, call() should not return a new ReencryptionTask object.
Initially, talking with [~andrew.wang], we wanted to always retry things, so 
the admin can just fix the error and continue (or cancel).
But since KMSCP already has the retry logic added by HADOOP-14521, and to trade 
off for maintainability, we do not 'double retry' here and only let KMSCP's 
retry policy handle failures.
When running -listReencryptionStatus, if numOfFailures > 0, a message is 
printed asking the admin to examine the failures and re-submit. 

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.14.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt 
> edek design doc.pdf, Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12327:
-
Attachment: HDFS-12327-HDFS-7240.003.patch

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch, HDFS-12327-HDFS-7240.003.patch
>
>
> The background service should support a timeout setting, in case a task 
> hangs due to unpredictable scenarios.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12335) Federation Metrics

2017-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136418#comment-16136418
 ] 

Hadoop QA commented on HDFS-12335:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 14m 
54s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 403 unchanged - 0 fixed = 406 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceInit(Configuration)
  At StateStoreService.java:field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceInit(Configuration)
  At StateStoreService.java:[lines 163-164] |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceStop()  
At StateStoreService.java:field 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.metrics in 
org.apache.hadoop.hdfs.server.federation.store.StateStoreService.serviceStop()  
At StateStoreService.java:[lines 193-195] |
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouter |
|   | hadoop.hdfs.server.federation.metrics.TestFederationMetrics |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | 

[jira] [Updated] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12327:
-
Attachment: (was: HDFS-12327-HDFS-7240.003.patch)

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch
>
>
> The background service should support a timeout setting, in case a task 
> hangs due to unpredictable scenarios.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136406#comment-16136406
 ] 

Anu Engineer commented on HDFS-12282:
-

[~cheersyang] My concern comes from issues I have seen in the HDFS world. The 
heartbeat became a scalability bottleneck when it started doing too many 
things. HDFS eventually had to invent things like the "lifeline protocol", 
which is really a heartbeat that does nothing else. Disk I/O also has a very 
weird problem: when the disk is slow or busy, it will slow down everything in 
its path. That, in turn, makes the SCM think that datanodes are not HB-ing and 
mark them as dead. 

This is a first-hand issue from very busy HDFS clusters; the lifeline protocol 
was developed precisely to make HB handling lightweight. I am just suggesting 
that we follow that model: HB processing should be quick -- if you need to do 
any work based on an HB, we should post that work to a queue and let an async 
system pick it up, as sketched below.
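
A minimal sketch of that model, with illustrative names rather than the actual 
SCM classes: the RPC path only enqueues, and a background worker does the 
heavy lifting:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HeartbeatDispatcher {
  // Illustrative placeholder for whatever a heartbeat carries.
  static class Heartbeat {
    final String datanodeId;
    Heartbeat(String datanodeId) { this.datanodeId = datanodeId; }
  }

  private final BlockingQueue<Heartbeat> queue = new LinkedBlockingQueue<>();

  // RPC path: must stay cheap, so just enqueue and return.
  public void onHeartbeat(Heartbeat hb) {
    queue.offer(hb);
  }

  // Background path: any heavy work (disk I/O, command building) lives here,
  // so a slow disk cannot stall heartbeat handling.
  public void startWorker() {
    Thread worker = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          Heartbeat hb = queue.take();  // blocks until work arrives
          process(hb);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "hb-worker");
    worker.setDaemon(true);
    worker.start();
  }

  private void process(Heartbeat hb) {
    // e.g. scan deletion logs and build commands for the next HB response.
  }
}
{code}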
 

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136402#comment-16136402
 ] 

Weiwei Yang commented on HDFS-12327:


Thanks [~linyiqun], I just noticed this is wrapped in a {{getTimeDuration}} 
call, so we don't need a cast here; long it is. Thank you.

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch, HDFS-12327-HDFS-7240.003.patch
>
>
> The background service should support a timeout setting, in case a task 
> hangs due to unpredictable scenarios.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12327:
-
Attachment: HDFS-12327-HDFS-7240.003.patch

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch, HDFS-12327-HDFS-7240.003.patch
>
>
> The background service should support a timeout setting, in case a task 
> hangs due to unpredictable scenarios.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136401#comment-16136401
 ] 

Weiwei Yang commented on HDFS-12282:


Hi [~anu]

Thanks for your quick comments. I have addressed all of them in the v2 patch 
except the last one. Are you suggesting running the code in 
{{SCMNodeManager#handleHeartbeat}} in a thread? I am not sure why that is 
necessary. This part of the change only scans the block deletion transaction 
log, gets a throttled number of transactions, and adds them to the 
{{commandQueue}}; there is no heavy I/O involved. We can certainly discuss 
this some more, probably tomorrow.

Thank you!

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12327) Ozone: support setting timeout in background service

2017-08-22 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136399#comment-16136399
 ] 

Yiqun Lin commented on HDFS-12327:
--

Thanks for the review, [~cheersyang]. All the comments make sense to me except 
this one.
bq. Another nit is can we use int for the timeout instead of long?
I don't think we need to change the type of the timeout. Method 
{{Configuration#getTimeDuration}} returns a {{long}} value. If we used {{int}} 
for the timeout, we would have to cast the long timeout value to an int each 
time in the subclasses of the background service, and the cast also risks 
losing precision.
Attaching the updated patch.
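
For reference, a minimal sketch of the {{getTimeDuration}} pattern in question; 
the property name is illustrative:

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeoutConfigExample {
  // Illustrative property name, not necessarily the one used in the patch.
  static final String TIMEOUT_KEY = "ozone.background.service.timeout";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(TIMEOUT_KEY, "300s");

    // getTimeDuration parses suffixed values ("300s", "5m", ...) and returns
    // a long in the requested unit, so no int cast is needed anywhere.
    long timeoutMs =
        conf.getTimeDuration(TIMEOUT_KEY, 300_000L, TimeUnit.MILLISECONDS);
    System.out.println("timeout = " + timeoutMs + " ms");
  }
}
{code}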

> Ozone: support setting timeout in background service
> 
>
> Key: HDFS-12327
> URL: https://issues.apache.org/jira/browse/HDFS-12327
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12327-HDFS-7240.001.patch, 
> HDFS-12327-HDFS-7240.002.patch
>
>
> The background should support timeout setting in case the task ran hung 
> caused by unpredictability sceneries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12282:
---
Attachment: HDFS-12282-HDFS-7240.002.patch

> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, 
> HDFS-12282-HDFS-7240.001.patch, HDFS-12282-HDFS-7240.002.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12222) Add EC information to BlockLocation

2017-08-22 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136375#comment-16136375
 ] 

Huafeng Wang commented on HDFS-12222:
-

I just tweaked the patch according to your suggestions. Is it on the right 
track? As for the new API that returns both data and parity blocks, I am 
inclined to place it in DFSClient and DistributedFileSystem, something like 
{code}
public ErasureCodedBlockLocation getECBlockLocation(Path p);
{code}

Is that a proper way to do it?

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-12222
> URL: https://issues.apache.org/jira/browse/HDFS-12222
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12222.001.patch, HDFS-12222.002.patch
>
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code like this that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.
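
A minimal sketch of the workaround described above, reading the policy via 
{{DistributedFileSystem#getErasureCodingPolicy}}; the path is illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class EcParityCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/striped/file");

    // Today an application has to fetch the EC policy explicitly...
    ErasureCodingPolicy policy =
        ((DistributedFileSystem) fs).getErasureCodingPolicy(file);
    if (policy != null) {
      int dataUnits = policy.getNumDataUnits();     // e.g. 6 for RS-6-3
      int parityUnits = policy.getNumParityUnits(); // e.g. 3 for RS-6-3
      // ...and use the schema to work out that only the data units of each
      // block group contribute to the logical file length.
      System.out.println(dataUnits + " data / " + parityUnits + " parity");
    }
  }
}
{code}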



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-22 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136363#comment-16136363
 ] 

Yuanbo Liu commented on HDFS-12283:
---

[~cheersyang] Thanks for your comments.
{quote}
ozone-default.xml
{quote}
addressed
{quote}
DeletedBlockLog.java
{quote}
addressed
{quote}
can we move line 67 - 71
{quote}
Those fields are designed to be kept in memory as global variables. 

{quote}
line 64: can we use AtomicLong
{quote}
Since we change the value inside a locked section, I guess we don't have to use 
AtomicLong here.
{quote}
exception handling of addTransaction, what if deletedStore.writeBatch(batch) 
{quote}
The situation would be much worse if we wrote the batch before updating 
latestTxid, since that could lead to repeated txids. So we update latestTxid 
before writing the batch; in this case txids may be discontinuous, but that's 
acceptable.

Other comments will be addressed in the new patch, thanks again for your review.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch, 
> HDFS-12283-HDFS-7240.004.patch, HDFS-12283-HDFS-7240.005.patch, 
> HDFS-12283-HDFS-7240.006.patch
>
>
> The DeletedBlockLog is a persisted log in SCM to keep track of container 
> blocks that are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how they are processed. We 
> can use RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to the 
> datanode; it represents the "state" of the transaction and is in the range 
> \[-1, 5\], where -1 means the transaction eventually failed after some 
> retries and 5 is the max number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> the RocksDB {{MetadataStore}} as the first version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


