[jira] [Commented] (HDFS-11942) make new chooseDataNode policy work in more operations like seek, fetch

2018-02-27 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379902#comment-16379902
 ] 

Jiandan Yang  commented on HDFS-11942:
--

[~whisper_deng] This patch is very important for HBase. Why not keep going?

> make new chooseDataNode policy work in more operations like seek, fetch
> 
>
> Key: HDFS-11942
> URL: https://issues.apache.org/jira/browse/HDFS-11942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha3
>Reporter: Fangyuan Deng
>Priority: Major
> Fix For: 3.0.1
>
> Attachments: HDFS-11942.0.patch, HDFS-11942.1.patch, 
> ssd-first-disable(default).png, ssd-first-enable.png
>
>
> Under the default policy, if a file is ONE_SSD, the client will 
> preferentially read the local disk replica rather than the remote SSD 
> replica.
> But now, PCI-e SSDs and 10G Ethernet make a remote SSD read faster than 
> the local disk.
> HDFS-9666 gave us a patch, but the code is incomplete and has not been 
> updated for a long time.
> This sub-task provides a complete patch, and 
> we have tested it on three machines [ 32-core CPU, 128G RAM, 1000M network, 
> 1.2T HDD, 800G SSD (Intel P3600) ].
> With this feature, the throughput of an HBase table (ONE_SSD) is double that 
> without it.
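
A minimal standalone sketch of the preference the description argues for (this 
is not the attached patch; the class, the {{ssdFirst}} flag, and the replica 
model are illustrative only): when the toggle is on, located replicas are 
re-sorted so a remote SSD replica is tried before the local DISK replica.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch only, not the attached patch. */
public class SsdFirstSort {

  enum StorageType { DISK, SSD }

  static final class Replica {
    final String host;
    final StorageType storage;
    Replica(String host, StorageType storage) {
      this.host = host;
      this.storage = storage;
    }
    @Override public String toString() { return host + "/" + storage; }
  }

  /**
   * Sort candidate replicas. With ssdFirst enabled, SSD replicas come before
   * DISK replicas; ties keep the default locality preference (local first).
   */
  static void sortReplicas(List<Replica> candidates, String localHost,
      boolean ssdFirst) {
    Comparator<Replica> byLocality =
        Comparator.comparing(r -> !r.host.equals(localHost));
    Comparator<Replica> order = ssdFirst
        ? Comparator.<Replica, StorageType>comparing(r -> r.storage,
            Comparator.reverseOrder()).thenComparing(byLocality)
        : byLocality;
    candidates.sort(order);
  }

  public static void main(String[] args) {
    List<Replica> replicas = Arrays.asList(
        new Replica("local-node", StorageType.DISK),
        new Replica("remote-node", StorageType.SSD));
    sortReplicas(replicas, "local-node", true);
    // The remote SSD replica is now first in read order:
    System.out.println(replicas); // [remote-node/SSD, local-node/DISK]
  }
}
{code}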






[jira] [Commented] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379899#comment-16379899
 ] 

genericqa commented on HDFS-13201:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912386/HDFS-13201.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bf7403dfeefd 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a9c14b1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23235/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23235/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: 

[jira] [Updated] (HDFS-13202) Fix javadoc in HAUtil and small refactoring

2018-02-27 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13202:

Issue Type: Improvement  (was: Bug)

> Fix javadoc in HAUtil and small refactoring
> ---
>
> Key: HDFS-13202
> URL: https://issues.apache.org/jira/browse/HDFS-13202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Attachments: HDFS-13202.000.patch
>
>
> There are a few outdated javadocs in {{HAUtil}}.






[jira] [Updated] (HDFS-13202) Fix javadoc in HAUtil and small refactoring

2018-02-27 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13202:

Status: Patch Available  (was: Open)

> Fix javadoc in HAUtil and small refactoring
> ---
>
> Key: HDFS-13202
> URL: https://issues.apache.org/jira/browse/HDFS-13202
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Attachments: HDFS-13202.000.patch
>
>
> There are a few outdated javadocs in {{HAUtil}}.






[jira] [Updated] (HDFS-13202) Fix javadoc in HAUtil and small refactoring

2018-02-27 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13202:

Attachment: HDFS-13202.000.patch

> Fix javadoc in HAUtil and small refactoring
> ---
>
> Key: HDFS-13202
> URL: https://issues.apache.org/jira/browse/HDFS-13202
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Trivial
> Attachments: HDFS-13202.000.patch
>
>
> There are a few outdated javadocs in {{HAUtil}}.






[jira] [Created] (HDFS-13202) Fix javadoc in HAUtil and small refactoring

2018-02-27 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13202:
---

 Summary: Fix javadoc in HAUtil and small refactoring
 Key: HDFS-13202
 URL: https://issues.apache.org/jira/browse/HDFS-13202
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chao Sun
Assignee: Chao Sun


There are a few outdated javadocs in {{HAUtil}}.






[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-02-27 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379882#comment-16379882
 ] 

Chao Sun commented on HDFS-12975:
-

Attached patch v2.

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch, 
> HDFS-12975.002.patch
>
>
> In order to support reads from standby, the NameNode needs changes to add an 
> Observer role, turn off checkpointing, and such.






[jira] [Updated] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-02-27 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12975:

Attachment: HDFS-12975.002.patch

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch, 
> HDFS-12975.002.patch
>
>
> In order to support reads from standby, the NameNode needs changes to add an 
> Observer role, turn off checkpointing, and such.






[jira] [Updated] (HDFS-11925) Offline Image Viewer: Processor argument should have some verification

2018-02-27 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-11925:

Status: Open  (was: Patch Available)

> Offline Image Viewer: Processor argument should have some verification
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-11925.002.patch, HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. 
> People can type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or an option with a wrong format which they think takes effect but 
> actually does not: 
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked. 
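
A sketch of the kind of check being argued for, assuming Apache Commons CLI 
for parsing (the class name and option wiring here are illustrative, not the 
attached patch): after parsing the known options, any leftover tokens are 
rejected instead of being silently ignored.

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

/** Illustrative sketch only, not the attached patch. */
public class OivArgCheck {
  public static void main(String[] args) throws ParseException {
    Options options = new Options();
    options.addOption("i", "inputFile", true, "fsimage file to process");
    options.addOption("p", "processor", true, "processor to apply");

    // stopAtNonOption=true collects unrecognized tokens instead of throwing.
    CommandLine cmd = new DefaultParser().parse(options, args, true);

    // Anything the parser did not recognize ("-step 1024", "Hello Han
    // Meimei", ...) ends up in getArgs(); refuse it up front.
    if (cmd.getArgs().length > 0) {
      System.err.println("Unrecognized arguments: "
          + String.join(" ", cmd.getArgs()));
      System.exit(1);
    }
  }
}
{code}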






[jira] [Updated] (HDFS-11925) Offline Image Viewer: Processor argument should have some verification

2018-02-27 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-11925:

Affects Version/s: (was: 3.0.0-alpha3)
   3.1.0
 Target Version/s:   (was: 3.1.0)
   Status: Patch Available  (was: Open)

> Offline Image Viewer: Processor argument should have some verification
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-11925.002.patch, HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. 
> People can type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or an option with a wrong format which they think takes effect but 
> actually does not: 
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked. 






[jira] [Updated] (HDFS-11925) Offline Image Viewer: Processor argument should have some verification

2018-02-27 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-11925:

Attachment: HDFS-11925.002.patch

> Offline Image Viewer: Processor argument should have some verification
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-11925.002.patch, HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. 
> People can type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or an option with a wrong format which they think takes effect but 
> actually does not: 
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked. 






[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-02-27 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379865#comment-16379865
 ] 

Chao Sun commented on HDFS-12975:
-

Thanks Konstantin. Let me strip out the state part of the patch and leave the 
rest for separate Jiras.

{quote}
May be we should have a separate doc or I can update the design. The main thing 
is that we should configure NNs altogether the same way as it is currently done
{quote}

Yes, I think it would be useful to add a section for the configuration changes. 
Currently I'm using a separate config, 
{{dfs.ha.observer.namenode.}}, to separate observer from 
active/standby. I think this is similar to choice #2 you mentioned.

{quote}
HAUtil.getConfForOtherNodes() the refactoring (otherNn -> otherNameNodes) 
should go into trunk. Better avoid renaming even though the original choice of 
the variable name is bad.
{quote}
Will create a Jira to address this.

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch
>
>
> In order to support reads from standby, the NameNode needs changes to add an 
> Observer role, turn off checkpointing, and such.






[jira] [Updated] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2018-02-27 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-13201:
---
Attachment: HDFS-13201.patch

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDFS-13201.patch
>
>







[jira] [Updated] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2018-02-27 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-13201:
---
Status: Patch Available  (was: Open)

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDFS-13201.patch
>
>







[jira] [Created] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2018-02-27 Thread chencan (JIRA)
chencan created HDFS-13201:
--

 Summary: Fix prompt message in testPolicyAndStateCantBeNull
 Key: HDFS-13201
 URL: https://issues.apache.org/jira/browse/HDFS-13201
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chencan









[jira] [Created] (HDFS-13200) Fix prompt message in testPolicyAndStateCantBeNull

2018-02-27 Thread chencan (JIRA)
chencan created HDFS-13200:
--

 Summary: Fix prompt message in testPolicyAndStateCantBeNull
 Key: HDFS-13200
 URL: https://issues.apache.org/jira/browse/HDFS-13200
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chencan









[jira] [Comment Edited] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-02-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379828#comment-16379828
 ] 

Bharat Viswanadham edited comment on HDFS-13195 at 2/28/18 6:18 AM:


[~maobaolong] Ran a few tests locally and they passed. I don't think the 
test failures are related to the patch.

Assigned the Jira to you, as you have provided the patch.


was (Author: bharatviswa):
[~maobaolong] Ran a few tests locally and they passed. I don't think the 
test failures are related to the patch.

> DataNode conf page cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> branch-2.7 now supports dfs.datanode.data.dir reconfiguration, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>     final DataNode datanode,
>     final ServerSocketChannel externalHttpChannel)
>     throws IOException {
>   this.conf = conf;
>   Configuration confForInfoServer = new Configuration(conf);
>   confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
>   HttpServer2.Builder builder = new HttpServer2.Builder()
>       .setName("datanode")
>       .setConf(confForInfoServer)
>       .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
>       .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
>       .addEndpoint(URI.create("http://localhost:0"))
>       .setFindPort(true);
>   this.infoServer = builder.build();
> {code}
> confForInfoServer is a new Configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf instead.
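
A standalone illustration of the root cause described above, with plain maps 
standing in for {{Configuration}} (the fix direction, per the description, is 
to have the conf page read the datanode's live conf rather than a snapshot):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the copy-vs-reference pitfall, not the patch. */
public class FrozenConfDemo {
  public static void main(String[] args) {
    Map<String, String> datanodeConf = new HashMap<>();
    datanodeConf.put("dfs.datanode.data.dir", "/data1");

    // What DatanodeHttpServer does today: snapshot the conf at construction
    // time (new Configuration(conf) copies the properties).
    Map<String, String> confForInfoServer = new HashMap<>(datanodeConf);

    // dfsadmin -reconfig later updates the datanode's live conf...
    datanodeConf.put("dfs.datanode.data.dir", "/data1,/data2");

    // ...but the conf page, reading the frozen copy, still shows the
    // old value.
    System.out.println(confForInfoServer.get("dfs.datanode.data.dir")); // /data1
    System.out.println(datanodeConf.get("dfs.datanode.data.dir")); // /data1,/data2
  }
}
{code}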






[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-02-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379828#comment-16379828
 ] 

Bharat Viswanadham commented on HDFS-13195:
---

[~maobaolong] Ran a few tests locally and they passed. I don't think the 
test failures are related to the patch.

> DataNode conf page cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> branch-2.7 now supports dfs.datanode.data.dir reconfiguration, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>     final DataNode datanode,
>     final ServerSocketChannel externalHttpChannel)
>     throws IOException {
>   this.conf = conf;
>   Configuration confForInfoServer = new Configuration(conf);
>   confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
>   HttpServer2.Builder builder = new HttpServer2.Builder()
>       .setName("datanode")
>       .setConf(confForInfoServer)
>       .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
>       .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
>       .addEndpoint(URI.create("http://localhost:0"))
>       .setFindPort(true);
>   this.infoServer = builder.build();
> {code}
> confForInfoServer is a new Configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf instead.






[jira] [Assigned] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-02-27 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-13195:
-

Assignee: maobaolong

> DataNode conf page cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> branch-2.7 now supports dfs.datanode.data.dir reconfiguration, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>     final DataNode datanode,
>     final ServerSocketChannel externalHttpChannel)
>     throws IOException {
>   this.conf = conf;
>   Configuration confForInfoServer = new Configuration(conf);
>   confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
>   HttpServer2.Builder builder = new HttpServer2.Builder()
>       .setName("datanode")
>       .setConf(confForInfoServer)
>       .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
>       .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
>       .addEndpoint(URI.create("http://localhost:0"))
>       .setFindPort(true);
>   this.infoServer = builder.build();
> {code}
> confForInfoServer is a new Configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf instead.






[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-02-27 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379820#comment-16379820
 ] 

maobaolong commented on HDFS-13195:
---

[~bharatviswa] I don't think these failed tests are related to my patch.

> DataNode conf page cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> branch-2.7 now supports dfs.datanode.data.dir reconfiguration, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>     final DataNode datanode,
>     final ServerSocketChannel externalHttpChannel)
>     throws IOException {
>   this.conf = conf;
>   Configuration confForInfoServer = new Configuration(conf);
>   confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
>   HttpServer2.Builder builder = new HttpServer2.Builder()
>       .setName("datanode")
>       .setConf(confForInfoServer)
>       .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
>       .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
>       .addEndpoint(URI.create("http://localhost:0"))
>       .setFindPort(true);
>   this.infoServer = builder.build();
> {code}
> confForInfoServer is a new Configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf instead.






[jira] [Commented] (HDFS-13166) [SPS]: Implement caching mechanism to keep LIVE datanodes to minimize costly getLiveDatanodeStorageReport() calls

2018-02-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379775#comment-16379775
 ] 

Rakesh R commented on HDFS-13166:
-

Thanks [~surendrasingh] for the reviews and useful comments.

Attached another patch addressing the same.
bq. In the external SPS case, how will we ensure that this property is the same 
in the namenode and the external SPS?
I've documented this in the external SPS startup section of the 
{{ArchiveStorage.md}} file.

Renamed the following items:
- {{NodeInfo}} => {{StorageDetails}}, as the StorageInfo class name is already 
used in the code.
- {{typeNodeMap}} => {{storageMap}}


> [SPS]: Implement caching mechanism to keep LIVE datanodes to minimize costly 
> getLiveDatanodeStorageReport() calls
> -
>
> Key: HDFS-13166
> URL: https://issues.apache.org/jira/browse/HDFS-13166
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-13166-HDFS-10285-00.patch, 
> HDFS-13166-HDFS-10285-01.patch, HDFS-13166-HDFS-10285-02.patch
>
>
> Presently {{#getLiveDatanodeStorageReport()}} is fetched for every file and 
> the computation is repeated each time. This Jira sub-task is to discuss and 
> implement a cache mechanism that reduces the number of these calls. We could 
> also define a configurable refresh interval and periodically refresh the DN 
> cache by fetching the latest {{#getLiveDatanodeStorageReport}} on that 
> interval (see the sketch after the quoted comment below).
>  The following comments are taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16347472]
>  Comment-7)
> {quote}Adding getDatanodeStorageReport is concerning. 
> getDatanodeListForReport is already a very bad method that should be avoided 
> for anything but jmx – even then it’s a concern. I eliminated calls to it 
> years ago. All it takes is a nscd/dns hiccup and you’re left holding the fsn 
> lock for an excessive length of time. Beyond that, the response is going to 
> be pretty large and tagging all the storage reports is not going to be cheap.
> verifyTargetDatanodeHasSpaceForScheduling does it really need the namesystem 
> lock? Can’t DatanodeDescriptor#chooseStorage4Block synchronize on its 
> storageMap?
> Appears to be calling getLiveDatanodeStorageReport for every file. As 
> mentioned earlier, this is NOT cheap. The SPS should be able to operate on a 
> fuzzy/cached state of the world. Then it gets another datanode report to 
> determine the number of live nodes to decide if it should sleep before 
> processing the next path. The number of nodes from the prior cached view of 
> the world should suffice.
> {quote}
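
A minimal sketch of the caching idea (names are illustrative, not the attached 
patch): the expensive report fetch is wrapped so it is re-executed at most 
once per configurable refresh interval, and every per-file lookup reads the 
cached, possibly slightly stale, value.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/** Illustrative sketch only, not the attached patch. */
public class CachedReport<T> {
  private final Supplier<T> expensiveFetch; // e.g. the storage report call
  private final long refreshIntervalNanos;
  private T cached;
  private long lastFetchNanos;

  public CachedReport(Supplier<T> expensiveFetch, long refreshMillis) {
    this.expensiveFetch = expensiveFetch;
    this.refreshIntervalNanos = TimeUnit.MILLISECONDS.toNanos(refreshMillis);
  }

  /** Returns the cached value, refreshing it only when it has gone stale. */
  public synchronized T get() {
    long now = System.nanoTime();
    if (cached == null || now - lastFetchNanos >= refreshIntervalNanos) {
      cached = expensiveFetch.get();
      lastFetchNanos = now;
    }
    return cached;
  }
}
{code}

Per-file processing would then call {{get()}} instead of hitting the namenode 
directly, so the report is fetched at most once per interval regardless of how 
many files are scanned.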






[jira] [Updated] (HDFS-13166) [SPS]: Implement caching mechanism to keep LIVE datanodes to minimize costly getLiveDatanodeStorageReport() calls

2018-02-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13166:

Attachment: HDFS-13166-HDFS-10285-02.patch

> [SPS]: Implement caching mechanism to keep LIVE datanodes to minimize costly 
> getLiveDatanodeStorageReport() calls
> -
>
> Key: HDFS-13166
> URL: https://issues.apache.org/jira/browse/HDFS-13166
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-13166-HDFS-10285-00.patch, 
> HDFS-13166-HDFS-10285-01.patch, HDFS-13166-HDFS-10285-02.patch
>
>
> Presently {{#getLiveDatanodeStorageReport()}} is fetched for every file and 
> the computation is repeated each time. This Jira sub-task is to discuss and 
> implement a cache mechanism that reduces the number of these calls. We could 
> also define a configurable refresh interval and periodically refresh the DN 
> cache by fetching the latest {{#getLiveDatanodeStorageReport}} on that 
> interval.
>  The following comments are taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16347472]
>  Comment-7)
> {quote}Adding getDatanodeStorageReport is concerning. 
> getDatanodeListForReport is already a very bad method that should be avoided 
> for anything but jmx – even then it’s a concern. I eliminated calls to it 
> years ago. All it takes is a nscd/dns hiccup and you’re left holding the fsn 
> lock for an excessive length of time. Beyond that, the response is going to 
> be pretty large and tagging all the storage reports is not going to be cheap.
> verifyTargetDatanodeHasSpaceForScheduling does it really need the namesystem 
> lock? Can’t DatanodeDescriptor#chooseStorage4Block synchronize on its 
> storageMap?
> Appears to be calling getLiveDatanodeStorageReport for every file. As 
> mentioned earlier, this is NOT cheap. The SPS should be able to operate on a 
> fuzzy/cached state of the world. Then it gets another datanode report to 
> determine the number of live nodes to decide if it should sleep before 
> processing the next path. The number of nodes from the prior cached view of 
> the world should suffice.
> {quote}






[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379750#comment-16379750
 ] 

genericqa commented on HDFS-13081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 12s{color} | {color:orange} root: The patch generated 1 new + 163 unchanged 
- 2 fixed = 164 total (was 165) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912358/HDFS-13081.006.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f4ade9611637 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven 

[jira] [Commented] (HDFS-12781) After a Datanode goes down, the Namenode UI Datanode tab throws a warning message.

2018-02-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379748#comment-16379748
 ] 

Brahma Reddy Battula commented on HDFS-12781:
-

Thanks [~arpitagarwal] for the review and commit. [~Harsha1206], thanks for reporting.

> After a Datanode goes down, the Namenode UI Datanode tab throws a warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode.
> Refresh or click on the Datanode tab in the namenode UI.
> Actual Output:
> ==
> It throws the following warning message:
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanodes' 
> information.






[jira] [Commented] (HDFS-11925) Offline Image Viewer: Processor argument should have some verification

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379738#comment-16379738
 ] 

genericqa commented on HDFS-11925:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11925 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871179/HDFS-11925.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23233/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Offline Image Viewer: Processor argument should have some verification
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. 
> People can type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or an option with a wrong format which they think takes effect but 
> actually does not: 
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked. 






[jira] [Updated] (HDFS-11925) Offline Image Viewer: Processor argument should have some verification

2018-02-27 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-11925:

Summary: Offline Image Viewer: Processor argument should have some 
verification  (was: HDFS oiv:Normalize the verification of input parameter)

> Offline Image Viewer: Processor argument should have some verification
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. 
> People can type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or an option with a wrong format which they think takes effect but 
> actually does not: 
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked. 






[jira] [Commented] (HDFS-13194) CachePool permissions incorrectly checked

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379691#comment-16379691
 ] 

Hudson commented on HDFS-13194:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13733 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13733/])
HDFS-13194. CachePool permissions incorrectly checked. Contributed by (yqlin: 
rev a9c14b11193adeaa31389578f4cb90fa79cad8c3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> CachePool permissions incorrectly checked
> -
>
> Key: HDFS-13194
> URL: https://issues.apache.org/jira/browse/HDFS-13194
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Jianfei Jiang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0
>
> Attachments: HDFS-13194.001.patch, HDFS-13194.002.patch
>
>
> The permissions of a CachePool are incorrectly checked. The checking logic:
> {code:java}
>   public void checkPermission(CachePool pool, FsAction access)
>       throws AccessControlException {
>     FsPermission mode = pool.getMode();
>     if (isSuperUser()) {
>       return;
>     }
>     if (getUser().equals(pool.getOwnerName())
>         && mode.getUserAction().implies(access)) {
>       return;
>     }
>     if (isMemberOfGroup(pool.getGroupName())
>         && mode.getGroupAction().implies(access)) {
>       return;
>     }
>     // The following check seems incorrect: we should ensure the current
>     // user is neither the pool's owner nor a member of the pool's group.
>     if (mode.getOtherAction().implies(access)) {
>       return;
>     }
>     throw new AccessControlException("Permission denied while accessing pool "
>         + pool.getPoolName() + ": user " + getUser() + " does not have "
>         + access.toString() + " permissions.");
>   }
> {code}
> For example, consider one corner case: a cache pool (owner: test, group: 
> test-group, permission mode: --rwx (007)). A user named "test" or a user 
> whose group is "test-group" can still access this pool, but actually this 
> should not be allowed, since the permission for the owner and the group is 
> none.
>  The check for other users should be updated like this:
> {code:java}
> if (!getUser().equals(pool.getOwnerName())
>     && !isMemberOfGroup(pool.getGroupName())
>     && mode.getOtherAction().implies(access)) {
>   return;
> }
> {code}
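
For readability, here is the same method with the corrected other-user check 
folded in, assembled purely from the two snippets quoted above:

{code:java}
public void checkPermission(CachePool pool, FsAction access)
    throws AccessControlException {
  FsPermission mode = pool.getMode();
  if (isSuperUser()) {
    return;
  }
  if (getUser().equals(pool.getOwnerName())
      && mode.getUserAction().implies(access)) {
    return;
  }
  if (isMemberOfGroup(pool.getGroupName())
      && mode.getGroupAction().implies(access)) {
    return;
  }
  // Fall through to "other" permissions only when the user is neither the
  // pool's owner nor a member of the pool's group.
  if (!getUser().equals(pool.getOwnerName())
      && !isMemberOfGroup(pool.getGroupName())
      && mode.getOtherAction().implies(access)) {
    return;
  }
  throw new AccessControlException("Permission denied while accessing pool "
      + pool.getPoolName() + ": user " + getUser() + " does not have "
      + access.toString() + " permissions.");
}
{code}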






[jira] [Updated] (HDFS-13194) CachePool permissions incorrectly checked

2018-02-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13194:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

The failed unit test is not related. Committed this to trunk, branch-3.1 and 
branch-2. Thanks [~jiangjianfei] for the contribution.

> CachePool permissions incorrectly checked
> -
>
> Key: HDFS-13194
> URL: https://issues.apache.org/jira/browse/HDFS-13194
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Jianfei Jiang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0
>
> Attachments: HDFS-13194.001.patch, HDFS-13194.002.patch
>
>
> The permissions of a CachePool are incorrectly checked. The checking logic:
> {code:java}
>   public void checkPermission(CachePool pool, FsAction access)
>       throws AccessControlException {
>     FsPermission mode = pool.getMode();
>     if (isSuperUser()) {
>       return;
>     }
>     if (getUser().equals(pool.getOwnerName())
>         && mode.getUserAction().implies(access)) {
>       return;
>     }
>     if (isMemberOfGroup(pool.getGroupName())
>         && mode.getGroupAction().implies(access)) {
>       return;
>     }
>     // The following check seems incorrect: we should ensure the current
>     // user is neither the pool's owner nor a member of the pool's group.
>     if (mode.getOtherAction().implies(access)) {
>       return;
>     }
>     throw new AccessControlException("Permission denied while accessing pool "
>         + pool.getPoolName() + ": user " + getUser() + " does not have "
>         + access.toString() + " permissions.");
>   }
> {code}
> For example, consider one corner case: a cache pool (owner: test, group: 
> test-group, permission mode: --rwx (007)). A user named "test" or a user 
> whose group is "test-group" can still access this pool, but actually this 
> should not be allowed, since the permission for the owner and the group is 
> none.
>  The check for other users should be updated like this:
> {code:java}
> if (!getUser().equals(pool.getOwnerName())
>     && !isMemberOfGroup(pool.getGroupName())
>     && mode.getOtherAction().implies(access)) {
>   return;
> }
> {code}






[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread tartarus (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379668#comment-16379668
 ] 

tartarus commented on HDFS-13199:
-

LGTM

 

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: decommisioned should be decommissioned.






[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379652#comment-16379652
 ] 

Hudson commented on HDFS-13199:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13732 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13732/])
HDFS-13199. RBF: Fix the hdfs router page missing label icon issue. (inigoiri: 
rev d86f301d464683f8d392dad50e83f50d823e862e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/router/federationhealth.js


> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: decommisioned should be decommissioned.






[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-02-27 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379646#comment-16379646
 ] 

Íñigo Goiri commented on HDFS-13198:


Thanks [~ywskycn] for [^HDFS-13198.000.patch].
I think it makes sense to check in those places.
I would probably create a function or split the checks up a little.
For {{StateStoreZooKeeperImpl}}, something like the following:
{code}
public boolean isDriverReady() {
  if (zkManager == null) {
    return false;
  }
  CuratorFramework curator = zkManager.getCurator();
  if (curator == null) {
    return false;
  }
  return curator.getState() == CuratorFrameworkState.STARTED;
}
{code}
For the other one, probably a function like {{isStoreAvailable()}}.
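A possible shape for it, purely as a sketch (the {{getCachedRecordStore()}} and 
{{isInitialized()}} names are guesses for illustration, not the actual RBF API):
{code}
private boolean isStoreAvailable() {
  // Hypothetical: skip the heartbeat until the cached state store has
  // finished its initial load.
  return getCachedRecordStore() != null
      && getCachedRecordStore().isInitialized();
}
{code}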

Regarding the unit test, you are right; right now there is no exception.
Maybe we could add a mocked delay when initializing the ZK store and then 
check that the state is initially unchanged but changes after a while.
Not sure how to check for the NPE though; we could try to perform by hand the 
change that is done asynchronously.

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13198.000.patch
>
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't 
> been initialized yet and cannot serve requests. Although the router will 
> still start, it would be better to fix the exceptions.






[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379638#comment-16379638
 ] 

Íñigo Goiri commented on HDFS-13199:


Thanks [~maobaolong] for the contribution.
Regarding HDFS-9357, I was just mentioning that the broken icon was caused by 
HDFS-9357; this was developed on top of 2.7 and that change "broke" the icons 
added in HDFS-12273.
Committed to {{trunk}}, {{branch-3.1}}, {{branch-3.0}}, {{branch-2}}, and 
{{branch-2.9}}.

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: decommisioned should be decommissioned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13199:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a simple typo: decommisioned should be decommissioned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379634#comment-16379634
 ] 

genericqa commented on HDFS-13081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} root: The patch generated 0 new + 163 unchanged - 2 
fixed = 163 total (was 165) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}242m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to isSecure in 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At 
SecureDataNodeStarter.java:org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At SecureDataNodeStarter.java:[line 116] |
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | 

[jira] [Commented] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379623#comment-16379623
 ] 

genericqa commented on HDFS-13089:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13089 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912341/HDFS-13089.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bc4514088e25 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 727c033 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23231/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23231/testReport/ |
| Max. process+thread count | 4003 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23231/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add test to validate dfs used and no of blocks when blocks are moved across volumes

[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379622#comment-16379622
 ] 

maobaolong commented on HDFS-13199:
---

[~goiri] Thank you for applying it. It is a great honor to become a contributor 
to Hadoop HDFS.

This issue is very similar to HDFS-9357. The difference is that HDFS-9357 is 
about the NN, while this issue is about the Router.

Yeah, the lower case of my name is better.

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a simple typo: decommisioned should be decommissioned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-02-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379616#comment-16379616
 ] 

Konstantin Shvachko edited comment on HDFS-12975 at 2/28/18 1:31 AM:
-

Chao, the state part of the patch looks really good to me. I think we can 
commit it if you can separate it from the rest, after some minor cleanup - see 
below.

*_Configuration needs more thinking_.* I neglected to describe the cluster 
startup process in the design doc. Maybe we should have a separate doc, or I 
can update the design. The main thing is that we should configure all NNs 
together the same way as is currently done:
{code:java}
dfs.ha.namenodes.mycluster = nn1,nn2,nn3
dfs.namenode.rpc-address.mycluster.nn1 = machine1.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = machine2.example.com:8020
dfs.namenode.rpc-address.mycluster.nn3 = machine3.example.com:8020
{code}
Currently all nodes start in standby state, and admins transition one of them, 
e.g. nn1, to active.
 We have two choices for observers.
 # Use {{haadmin transitionToObserver}}, which will bring say nn3 from standby 
to observer state.
 # Or add configuration parameter, which allows nn3 to start in observer state
{code:java}
dfs.namenode.observer.mycluster.nn3 = true // false by default
{code}
In the end we will probably need both.

_*Comments on the rest of the patch*_
 # {{allowStaleStandbyReads}} for observers should be true - ignore the config
 # NameNodeStatusMXBean methods should not be in this patch
 ** {{NameNode.getRole()}} looks like incompatible change
 ** {{NameNode.getState()}} shouldn't change. You need {{synchronized}} to 
access the {{state}} member.
 # {{HAState}} adds only an unused import
 # {{HAUtil.getConfForOtherNodes()}} the refactoring (otherNn -> 
otherNameNodes) should go into trunk. Better avoid renaming even though the 
original choice of the variable name is bad.
 # {{StandbyState.checkOperation()}} should not change since 
{{allowStaleReads()}} for observer is true
 # I think the {{-namenodes}} command should return all namenodes, including 
standbys and observers, as it does now. We can postpone this decision and do it 
in the next jira if you want.
 # Even in tests we should not count observers separately from NNs. They are 
all NNs.
 # I see some Javadoc spelling corrections, white spaces, and long lines
 ** If formatting or spelling corrections of the existing code are needed they 
should go into trunk
 ** New ones should be fixed in the patch


was (Author: shv):
Chao, the state part of the patch looks really good to me. I think we can 
commit it if you can separate it from the rest, after some minor cleanup - see 
below.

*_Configuration needs more thinking_.* I neglected to describe the cluster 
startup process in the design doc. Maybe we should have a separate doc, or I 
can update the design. The main thing is that we should configure all NNs 
together the same way as is currently done:
{code:java}
dfs.ha.namenodes.mycluster = nn1,nn2,nn3
dfs.namenode.rpc-address.mycluster.nn1 = machine1.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = machine2.example.com:8020
dfs.namenode.rpc-address.mycluster.nn3 = machine3.example.com:8020
{code}
Currently all nodes start in standby state, and admins transition one of them, 
e.g. nn1, to active.
 We have two choices for observers.
 # Use {{haadmin transitionToObserver}}, which will bring say nn3 from standby 
to observer state.
 # Or add configuration parameter, which allows nn3 to start in observer state
{code:java}
dfs.namenode.observer.mycluster.nn3 = true // false by default
{code}
In the end we will probably need both.

_*Comments on the rest fo the patch*_
 # {{allowStaleStandbyReads}} for observers should be true - ignore the config
 # NameNodeStatusMXBean methods should not be in this patch
 ** {{NameNode.getRole()}} looks like incompatible change
 ** {{NameNode.getState()}} shouldn't change. You need {{synchronized}} to 
access the {{state}} member.
 # {{HAState}} adds only an unused import
 # {{HAUtil.getConfForOtherNodes()}} the refactoring (otherNn -> 
otherNameNodes) should go into trunk. Better avoid renaming even though the 
original choice of the variable name is bad.
 # {{StandbyState.checkOperation()}} should not change since 
{{allowStaleReads()}} for observer is true
 # I think the {{-namenodes}} command should return all namenodes, including 
standbys and observers, as it does now. We can postpone this decision and do it 
in the next jira if you want.
 # Even in tests we should not count observers separately from NNs. They are 
all NNs.
 # I see some Javadoc spelling corrections, white spaces, and long lines
 ** If formatting or spelling corrections of the existing code are needed they 
should go into trunk
 ** New ones should be fixed in the patch

> Changes to the NameNode to support reads from standby
> 

[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-02-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379616#comment-16379616
 ] 

Konstantin Shvachko commented on HDFS-12975:


Chao, the state part of the patch looks really good to me. I think we can 
commit it if you can separate it from the rest, after some minor cleanup - see 
below.

*_Configuration needs more thinking_.* I neglected to describe the cluster 
startup process in the design doc. Maybe we should have a separate doc, or I 
can update the design. The main thing is that we should configure all NNs 
together the same way as is currently done:
{code:java}
dfs.ha.namenodes.mycluster = nn1,nn2,nn3
dfs.namenode.rpc-address.mycluster.nn1 = machine1.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = machine2.example.com:8020
dfs.namenode.rpc-address.mycluster.nn3 = machine3.example.com:8020
{code}
Currently all nodes start in standby state, and admins transition one of them, 
e.g. nn1, to active.
 We have two choices for observers.
 # Use {{haadmin transitionToObserver}}, which will bring say nn3 from standby 
to observer state.
 # Or add configuration parameter, which allows nn3 to start in observer state
{code:java}
dfs.namenode.observer.mycluster.nn3 = true // false by default
{code}
In the end we will probably need both.

_*Comments on the rest of the patch*_
 # {{allowStaleStandbyReads}} for observers should be true - ignore the config
 # NameNodeStatusMXBean methods should not be in this patch
 ** {{NameNode.getRole()}} looks like incompatible change
 ** {{NameNode.getState()}} shouldn't change. You need {{synchronized}} to 
access the {{state}} member.
 # {{HAState}} adds only an unused import
 # {{HAUtil.getConfForOtherNodes()}} the refactoring (otherNn -> 
otherNameNodes) should go into trunk. Better avoid renaming even though the 
original choice of the variable name is bad.
 # {{StandbyState.checkOperation()}} should not change since 
{{allowStaleReads()}} for observer is true
 # I think the {{-namenodes}} command should return all namenodes, including 
standbys and observers, as it does now. We can postpone this decision and do it 
in the next jira if you want.
 # Even in tests we should not count observers separately from NNs. They are 
all NNs.
 # I see some Javadoc spelling corrections, white spaces, and long lines
 ** If formatting or spelling corrections of the existing code are needed they 
should go into trunk
 ** New ones should be fixed in the patch

> Changes to the NameNode to support reads from standby
> -
>
> Key: HDFS-12975
> URL: https://issues.apache.org/jira/browse/HDFS-12975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch
>
>
> In order to support reads from standby NameNode needs changes to add Observer 
> role, turn off checkpointing and such.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-02-27 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379612#comment-16379612
 ] 

Wei Yan commented on HDFS-13198:


Posted a quick fix [^HDFS-13198.000.patch] for the issue.

Regarding the test case, I'm not sure what the best way to do it is. Since 
RouterHeartbeatService now swallows exceptions, the test case cannot actually 
catch them there. One way may be to let updateStateStore() throw different 
types of exceptions and let updateStateAsync() decide the log message. That 
way, the test case can verify the different exceptions.
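
A rough sketch of that split, with illustrative names rather than the actual 
RBF types: updateStateStore() propagates a typed exception, the async wrapper 
only decides how to log it, and a test can then call updateStateStore() 
directly and assert on the exception type.
{code:java}
import java.io.IOException;

public class HeartbeatSketch {
  // Illustrative exception type, not the real RBF class.
  static class StateStoreUnavailableException extends IOException {
    StateStoreUnavailableException(String msg) { super(msg); }
  }

  private boolean storeInitialized = false;

  void updateStateStore() throws StateStoreUnavailableException {
    if (!storeInitialized) {
      throw new StateStoreUnavailableException(
          "Cached State Store not initialized");
    }
    // serialize and write the heartbeat record here
  }

  void updateStateAsync() {
    Thread t = new Thread(() -> {
      try {
        updateStateStore();
      } catch (StateStoreUnavailableException e) {
        // the wrapper decides the log message per exception type
        System.err.println("State Store not ready yet: " + e.getMessage());
      }
    }, "Router Heartbeat Async");
    t.setDaemon(true);
    t.start();
  }
}
{code}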

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13198.000.patch
>
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized and cannot serve requests. Although the Router will still start, 
> it would be better to fix the exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-02-27 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13198:
---
Attachment: HDFS-13198.000.patch

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13198.000.patch
>
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized and cannot serve requests. Although the Router will still start, 
> it would be better to fix the exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379591#comment-16379591
 ] 

genericqa commented on HDFS-13109:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912329/HDFS-13109.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fc5da1da2f86 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-13114) CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path

2018-02-27 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379573#comment-16379573
 ] 

Hanisha Koneru commented on HDFS-13114:
---

[~xyao], yes, the ListZonesCommand and ListReencryptionStatusCommand work as 
expected with the -fs option (even without the fix).

> CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path
> 
>
> Key: HDFS-13114
> URL: https://issues.apache.org/jira/browse/HDFS-13114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13114.001.patch
>
>
> The {{crypto -reencryptZone  -path }} command takes in a path 
> argument. But when creating the {{HdfsAdmin}} object, it uses the defaultFs 
> instead of resolving the filesystem from the path. This causes the following 
> exception if the authority component in the path does not match the authority 
> of the default Fs.
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://mycluster-node-1:8020/zone1
> IllegalArgumentException: Wrong FS: hdfs://mycluster-node-1:8020/zone1, 
> expected: hdfs://ns1{code}
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://ns2/zone2
> IllegalArgumentException: Wrong FS: hdfs://ns2/zone2, expected: 
> hdfs://ns1{code}
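
The likely shape of the fix (a sketch, not the attached patch) is the standard 
resolve-from-path pattern: build the {{HdfsAdmin}} from the URI of the supplied 
path instead of {{fs.defaultFS}}:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction;

public class ReencryptFromPath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path zone = new Path("hdfs://mycluster-node-1:8020/zone1");

    // Buggy pattern: new HdfsAdmin(FileSystem.getDefaultUri(conf), conf)
    // binds to fs.defaultFS and fails with "Wrong FS" for other namespaces.
    // Fix pattern: derive the URI from the path itself.
    HdfsAdmin admin = new HdfsAdmin(zone.toUri(), conf);
    admin.reencryptEncryptionZone(zone, ReencryptAction.START);
  }
}
{code}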



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13081:
--
Attachment: HDFS-13081.006.patch

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch, HDFS-13081.006.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine if 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the
>  HTTP server before a client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server 
> will be encrypted. However, checking only whether a SASL property resolver 
> is configured does not mean the server requires encrypted RPC. 
> This ticket is opened to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf(PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop RPC protection level from 
> auth-conf(PRIVACY) to auth-int(integrity) or auth(authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  
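
A minimal sketch of the extra check this ticket asks for (not the attached 
patch): read {{dfs.data.transfer.protection}} and require that the configured 
QoP list includes privacy (auth-conf):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class QopCheckSketch {
  static boolean saslIncludesPrivacy(Configuration conf) {
    // dfs.data.transfer.protection is a comma-separated list of
    // authentication / integrity / privacy.
    String protection = conf.getTrimmed("dfs.data.transfer.protection", "");
    for (String qop : protection.split(",")) {
      if ("privacy".equalsIgnoreCase(qop.trim())) {
        return true;   // "privacy" maps to the SASL auth-conf QoP
      }
    }
    return false;
  }
}
{code}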



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379539#comment-16379539
 ] 

Ajay Kumar commented on HDFS-13081:
---

Uploaded patch v6 to resolve the findbugs warning.

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch, HDFS-13081.006.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine if 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the
>  HTTP server before a client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server 
> will be encrypted. However, checking only whether a SASL property resolver 
> is configured does not mean the server requires encrypted RPC. 
> This ticket is opened to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf(PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop RPC protection level from 
> auth-conf(PRIVACY) to auth-int(integrity) or auth(authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379533#comment-16379533
 ] 

Anu Engineer commented on HDFS-13178:
-

+1

> Disk Balancer: Add a force option to DiskBalancer Execute command
> -
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch, 
> HDFS-13178.02.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to skip 
> the date check and force-execute the plan.
> This is one of the TODOs for diskbalancer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379525#comment-16379525
 ] 

genericqa commented on HDFS-13081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} root: The patch generated 0 new + 163 unchanged - 2 
fixed = 163 total (was 165) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to isSecure in 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At 
SecureDataNodeStarter.java:org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At SecureDataNodeStarter.java:[line 116] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912323/HDFS-13081.004.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cce265fc1e5c 

[jira] [Commented] (HDFS-13150) Create fast path for SbNN tailing edits from JNs

2018-02-27 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379521#comment-16379521
 ] 

Chao Sun commented on HDFS-13150:
-

Thanks Erik. Overall I'm good with the design :). I also like approach 1) 
(SbNN performs quorum reads) better, and I think overall it should be correct. 
Looking forward to this feature!

> Create fast path for SbNN tailing edits from JNs
> 
>
> Key: HDFS-13150
> URL: https://issues.apache.org/jira/browse/HDFS-13150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, journal-node, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: edit-tailing-fast-path-design-v0.pdf, 
> edit-tailing-fast-path-design-v1.pdf
>
>
> In the interest of making coordinated/consistent reads easier to complete 
> with low latency, it is advantageous to reduce the time between when a 
> transaction is applied on the ANN and when it is applied on the SbNN. We 
> propose adding a new "fast path" which can be used to tail edits when low 
> latency is desired. We leave the existing tailing logic in place, and fall 
> back to it on startup, during recovery, and when the fast path encounters 
> unrecoverable errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13143) SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff calculation happens between a snapshot and the current tree

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379504#comment-16379504
 ] 

Hudson commented on HDFS-13143:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13731 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13731/])
HDFS-13143. SnapshotDiff - snapshotDiffReport might be inconsistent if 
(szetszwo: rev 55c77bf722f2b6fcde135c0f71454647a8d2a3db)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


> SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff 
> calculation happens between a snapshot and the current tree
> ---
>
> Key: HDFS-13143
> URL: https://issues.apache.org/jira/browse/HDFS-13143
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-13143.001.patch, HDFS-13143.002.patch, 
> HDFS-13143.003.patch
>
>
> HDFS-12594 introduced an iterative approach for computing snapshotDiffs over 
> multiple rpc calls. The iterative approach depends upon the startPath (path 
> of the directory with respect to the snapshottable root) and the size of the 
> createdList (0 in case the startPath refers to a file) to determine exactly 
> from where the calculation has to start in each iteration.
>  
> In case of the diff computation between a snapshot and current tree(if any of 
> the snapshot names specified in the getSnapshotDiffReport call is null or 
> empty), the last SnapshotDiff associated with directory/file might change 
> owing to changes in the current tree in between the rpc calls in the absence 
> of a global fsn lock. This might result in inconsistencies in the 
> snapshotDiffReport.
> In case the snapshotDiffReport computation needs to be done between the 
> current tree and a snapshot, we should fall back to the non-iterative approach to 
> compute snapshotDiff.
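
A sketch of the fallback rule described above, with illustrative names rather 
than the actual NameNode code:
{code:java}
// Illustrative sketch, not the NameNode implementation: the iterative,
// multi-RPC diff is only safe when both endpoints are immutable snapshots.
public class SnapshotDiffFallbackSketch {
  static boolean useIterativeDiff(String fromSnapshot, String toSnapshot) {
    // A null/empty snapshot name means "the current tree", which can change
    // between RPCs in the absence of a global FSN lock.
    return !isNullOrEmpty(fromSnapshot) && !isNullOrEmpty(toSnapshot);
  }

  private static boolean isNullOrEmpty(String s) {
    return s == null || s.isEmpty();
  }
}
{code}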



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379500#comment-16379500
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13102:


- Pass skipInterval and maxSkipLevels in the DirectoryDiffList constructor, 
change them to final and remove setMaxSkipLevel and setSkipInterval. 
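
In code, the suggestion amounts to something like this sketch (parameter names 
assumed, internals elided):
{code:java}
// Sketch of the review suggestion above: both values become immutable
// constructor arguments, and the setters go away.
public class DirectoryDiffList {
  private final int skipInterval;
  private final int maxSkipLevels;

  public DirectoryDiffList(int skipInterval, int maxSkipLevels) {
    this.skipInterval = skipInterval;
    this.maxSkipLevels = maxSkipLevels;
  }
  // no setSkipInterval()/setMaxSkipLevel(): both are fixed at construction
}
{code}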

> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will 
> be the cumulative diff of k snapshot diffs, 
> where k is the level of a node in the list. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12998) SnapshotDiff - Provide an iterator-based listing API for calculating snapshotDiff

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-12998:
---
Fix Version/s: (was: 3.2.0)
   3.1.0

Merged to branch-3.1.

> SnapshotDiff - Provide an iterator-based listing API for calculating 
> snapshotDiff
> -
>
> Key: HDFS-12998
> URL: https://issues.apache.org/jira/browse/HDFS-12998
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-12998.001.patch, HDFS-12998.002.patch, 
> HDFS-12998.003.patch
>
>
> Currently, SnapshotDiff computation happens over multiple rpc calls to the 
> namenode depending on the number of snapshotDiff entries, where each rpc call 
> returns at most 1000 entries by default. Each "getSnapshotDiffReportListing" 
> call to the namenode returns a partial snapshotDiffReportListing, which are 
> all combined and processed at the client side to generate the final 
> snapshotDiffReport. There can be cases where the SnapshotDiffReport can be 
> huge, and in such situations the rpc calls to the namenode should happen on 
> demand at the client side.
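
A sketch of the on-demand pattern being asked for; the iterator method name 
follows this jira's proposal and should be treated as an assumption, not a 
settled API:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;

public class OnDemandSnapshotDiff {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("hdfs://ns1/").getFileSystem(conf);
    // Each next() fetches at most one more batch (up to 1000 entries by
    // default) over RPC, instead of pulling the whole report up front.
    RemoteIterator<SnapshotDiffReportListing> it =
        dfs.snapshotDiffReportListingRemoteIterator(new Path("/dir"), "s1", "s2");
    while (it.hasNext()) {
      SnapshotDiffReportListing partial = it.next();
      // combine/process the partial listing here
    }
  }
}
{code}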



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13143) SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff calculation happens between a snapshot and the current tree

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13143:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Shash!

> SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff 
> calculation happens between a snapshot and the current tree
> ---
>
> Key: HDFS-13143
> URL: https://issues.apache.org/jira/browse/HDFS-13143
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-13143.001.patch, HDFS-13143.002.patch, 
> HDFS-13143.003.patch
>
>
> HDFS-12594 introduced an iterative approach for computing snapshotDiffs over 
> multiple rpc calls. The iterative approach depends upon the startPath (path 
> of the directory with respect to the snapshottable root) and the size of the 
> createdList (0 in case the startPath refers to a file) to determine exactly 
> from where the calculation has to start in each iteration.
>  
> In case of the diff computation between a snapshot and current tree(if any of 
> the snapshot names specified in the getSnapshotDiffReport call is null or 
> empty), the last SnapshotDiff associated with directory/file might change 
> owing to changes in the current tree in between the rpc calls in the absence 
> of a global fsn lock. This might result in inconsistencies in the 
> snapshotDiffReport.
> In case the snapshotDiffReport computation needs to be done between the 
> current tree and a snapshot, we should fall back to the non-iterative approach to 
> compute snapshotDiff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379479#comment-16379479
 ] 

Arpit Agarwal commented on HDFS-13178:
--

+1 from me for the v3 patch. [~anu], are you okay with committing this?

> Disk Balancer: Add a force option to DiskBalancer Execute command
> -
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch, 
> HDFS-13178.02.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to skip 
> the date check and force-execute the plan.
> This is one of the TODOs for diskbalancer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379476#comment-16379476
 ] 

genericqa commented on HDFS-13178:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13178 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912325/HDFS-13178.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 492ddf55dfc5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac42dfc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23227/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23227/testReport/ |
| Max. process+thread count | 3885 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23227/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Updated] (HDFS-13143) SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff calculation happens between a snapshot and the current tree

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13143:
---
Component/s: snapshots
 Issue Type: Bug  (was: Improvement)

+1 patch looks good

Since the only checkstyle warning is about an indentation problem in 
TestSnapshotDiffReport.java, I will commit the patch with the indentation fixed 
shortly.

> SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff 
> calculation happens between a snapshot and the current tree
> ---
>
> Key: HDFS-13143
> URL: https://issues.apache.org/jira/browse/HDFS-13143
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13143.001.patch, HDFS-13143.002.patch, 
> HDFS-13143.003.patch
>
>
> HDFS-12594 introduced an iterative approach for computing snapshotDiffs over 
> multiple rpc calls. The iterative approach depends upon the startPath (path 
> of the directory with respect to the snapshottable root) and the size of the 
> createdList (0 in case the startPath refers to a file) to exactly determine 
> from where in each iteration the calculation has to start.
>  
> In case of the diff computation between a snapshot and the current tree (if 
> any of the snapshot names specified in the getSnapshotDiffReport call is null 
> or empty), the last SnapshotDiff associated with the directory/file might 
> change owing to changes in the current tree in between the rpc calls in the 
> absence of a global fsn lock. This might result in inconsistencies in the 
> snapshotDiffReport.
> In case the snapshotDiffReport computation needs to be done between the 
> current tree and a snapshot, we should fall back to the non-iterative 
> approach to compute snapshotDiff.
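
A minimal sketch of the fallback decision described above; the helper name is an assumption for illustration, not the patch's actual code.

{code:java}
// Hypothetical sketch: iterate only when both endpoints are real snapshots.
// A null/empty snapshot name means "the current tree", which can change
// between rpc calls without a global fsn lock, so iteration is unsafe there.
static boolean canUseIterativeDiff(String fromSnapshot, String toSnapshot) {
  final boolean fromIsSnapshot = fromSnapshot != null && !fromSnapshot.isEmpty();
  final boolean toIsSnapshot = toSnapshot != null && !toSnapshot.isEmpty();
  return fromIsSnapshot && toIsSnapshot;
}
{code}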



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379458#comment-16379458
 ] 

Arpit Agarwal commented on HDFS-13089:
--

+1 pending Jenkins.

> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes
> ---
>
> Key: HDFS-13089
> URL: https://issues.apache.org/jira/browse/HDFS-13089
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13089.000.patch, HDFS-13089.001.patch
>
>
> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379457#comment-16379457
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13102:



- In getMinListForRange, we can clean up the code as follows.
{code}
  @Override
  public List<DirectoryDiff> getMinListForRange(
      int fromIndex, int toIndex, INodeDirectory dir) {
    final List<DirectoryDiff> subList = new ArrayList<>();
    final int toSnapshotId = get(toIndex).getSnapshotId();

    for (SkipListNode current = getNode(fromIndex); current != null; ) {
      SkipListNode next = null;
      ChildrenDiff childrenDiff = null;
      for (int level = current.level(); level >= 0; level--) {
        next = current.getSkipNode(level);
        if (next != null && next.getDiff().compareTo(toSnapshotId) <= 0) {
          childrenDiff = current.getChildrenDiff(level);
          break;
        }
      }

      final DirectoryDiff curDiff = current.getDiff();
      subList.add(childrenDiff == null ? curDiff
          : new DirectoryDiff(curDiff.getSnapshotId(), dir, childrenDiff));

      current = next;
    }
    return subList;
  }
{code}


> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will be 
> the cumulative diff of k snapshot diffs, where k is the level of a node in 
> the list.
>  
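
To make the multi-level layout concrete, here is a minimal sketch of a range walk over such a structure; the Node interface and its method names are illustrative assumptions, not the patch's actual API (see the getMinListForRange code above for the real shape).

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: at each node, take the highest skip level whose target
// does not overshoot toSnapshotId, so a run of diffs covered by one
// precomputed cumulative diff is consumed in a single hop.
interface Node {
  int level();                      // highest skip level stored at this node
  Node skipTo(int level);           // node reached by the level-th skip link
  Object cumulativeDiff(int level); // combined diff covering that hop
  int snapshotId();
}

static List<Object> collectDiffs(Node start, int toSnapshotId) {
  final List<Object> diffs = new ArrayList<>();
  for (Node current = start; current != null; ) {
    Node next = null;
    for (int level = current.level(); level >= 0; level--) {
      Node candidate = current.skipTo(level);
      if (candidate != null && candidate.snapshotId() <= toSnapshotId) {
        diffs.add(current.cumulativeDiff(level));
        next = candidate;
        break;
      }
    }
    current = next; // null once no hop stays within the requested range
  }
  return diffs;
}
{code}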



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379452#comment-16379452
 ] 

Ajay Kumar commented on HDFS-13089:
---

[~arpitagarwal], thanks for the review. Patch v1 is rebased on trunk and 
increases the timeout.

> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes
> ---
>
> Key: HDFS-13089
> URL: https://issues.apache.org/jira/browse/HDFS-13089
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13089.000.patch, HDFS-13089.001.patch
>
>
> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379394#comment-16379394
 ] 

Tsz Wo Nicholas Sze edited comment on HDFS-13102 at 2/27/18 11:07 PM:
--

Thanks for the update.  Some comments on the 007 patch:
- In SkipListNode, similar to getSkipNode, setSkipDiff and setSkipTo should 
take care of resizing so the callers don't have to.  We should also remove the 
addSkipDiff method.
{code}
public void setSkipDiff(ChildrenDiff cDiff, int level) {
  if (level < skipDiffList.size()) {
    skipDiffList.get(level).setDiff(cDiff);
  } else {
    skipDiffList.add(new SkipDiff(null, cDiff));
  }
}

public void setSkipTo(SkipListNode node, int level) {
  for (int i = skipDiffList.size(); i <= level; i++) {
    skipDiffList.add(null);
  }
  skipDiffList.get(level).setSkipTo(node);
}
{code}

- In addFirst, there are two "if (combined != null)".

- We should not directly use ID_INTEGER_COMPARATOR in DirectoryDiffList.
-* In SkipListNode, change compareTo to
{code}
public final int compareTo(final Integer that) {
  return diff.compareTo(that);
}
{code}
-* In getMinListForRange, use
{code}
next.getDiff().compareTo(toSnapshotId) > 0)
{code}


was (Author: szetszwo):
Thanks for the update.  Some comments on the 007 patch:
- In SkipListNode, similar to getSkipNode, setSkipDiff and setSkipTo should 
take care of resizing so the callers don't have to.  We should also remove the 
addSkipDiff method.
{code}
public void setSkipDiff(ChildrenDiff cDiff, int level) {
  if (level < skipDiffList.size()) {
    skipDiffList.get(level).setDiff(cDiff);
  } else {
    skipDiffList.add(new SkipDiff(null, cDiff));
  }
}

public void setSkipTo(SkipListNode node, int level) {
  for (int i = skipDiffList.size(); i <= level; i++) {
    skipDiffList.add(null);
  }
  skipDiffList.get(level).setSkipTo(node);
}
{code}

- In addFirst, there are two "if (combined != null)".

- We should not directly use ID_INTEGER_COMPARATOR in DirectoryDiffList.
-* In SkipListNode, change compareTo to
{code}
public final int compareTo(final Integer that) {
  return diff.compareTo(that);
}
{code}
-* In getMinListForRange, use
{code}
next.getDiff().compareTo(toSnapshotId) > 0)
{code}


- In getMinListForRange, we can clean up the code as follows.
{code}
  @Override
  public List<DirectoryDiff> getMinListForRange(
      int fromIndex, int toIndex, INodeDirectory dir) {
    final List<DirectoryDiff> subList = new ArrayList<>();
    final DirectoryDiff toDiff = get(toIndex);
    final int toSnapshotId = toDiff.getSnapshotId();

    for (SkipListNode current = getNode(fromIndex); current != null; ) {
      SkipListNode next = null;
      ChildrenDiff childrenDiff = null;
      for (int level = current.level(); level >= 0; level--) {
        next = current.getSkipNode(level);
        if (next != null && next.getDiff().compareTo(toSnapshotId) <= 0) {
          childrenDiff = current.getChildrenDiff(level);
          break;
        }
      }

      final DirectoryDiff curDiff = current.getDiff();
      subList.add(childrenDiff == null ? curDiff
          : new DirectoryDiff(curDiff.getSnapshotId(), dir, childrenDiff));

      current = next;
    }
    return subList;
  }
{code}


> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will be 
> the cumulative diff of k snapshot diffs, where k is the level of a node in 
> the list.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 

[jira] [Updated] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13089:
--
Attachment: HDFS-13089.001.patch

> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes
> ---
>
> Key: HDFS-13089
> URL: https://issues.apache.org/jira/browse/HDFS-13089
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13089.000.patch, HDFS-13089.001.patch
>
>
> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379394#comment-16379394
 ] 

Tsz Wo Nicholas Sze edited comment on HDFS-13102 at 2/27/18 11:05 PM:
--

Thanks for the update.  Some comments on the 007 patch:
- In SkipListNode, similar to getSkipNode, setSkipDiff and setSkipTo should 
take care of resizing so the callers don't have to.  We should also remove the 
addSkipDiff method.
{code}
public void setSkipDiff(ChildrenDiff cDiff, int level) {
  if (level < skipDiffList.size()) {
    skipDiffList.get(level).setDiff(cDiff);
  } else {
    skipDiffList.add(new SkipDiff(null, cDiff));
  }
}

public void setSkipTo(SkipListNode node, int level) {
  for (int i = skipDiffList.size(); i <= level; i++) {
    skipDiffList.add(null);
  }
  skipDiffList.get(level).setSkipTo(node);
}
{code}

- In addFirst, there are two "if (combined != null)".

- We should not directly use ID_INTEGER_COMPARATOR in DirectoryDiffList.
-* In SkipListNode, change compareTo to
{code}
public final int compareTo(final Integer that) {
  return diff.compareTo(that);
}
{code}
-* In getMinListForRange, use
{code}
next.getDiff().compareTo(toSnapshotId) > 0)
{code}


- In getMinListForRange, we can clean up the code as follows.
{code}
  @Override
  public List<DirectoryDiff> getMinListForRange(
      int fromIndex, int toIndex, INodeDirectory dir) {
    final List<DirectoryDiff> subList = new ArrayList<>();
    final DirectoryDiff toDiff = get(toIndex);
    final int toSnapshotId = toDiff.getSnapshotId();

    for (SkipListNode current = getNode(fromIndex); current != null; ) {
      SkipListNode next = null;
      ChildrenDiff childrenDiff = null;
      for (int level = current.level(); level >= 0; level--) {
        next = current.getSkipNode(level);
        if (next != null && next.getDiff().compareTo(toSnapshotId) <= 0) {
          childrenDiff = current.getChildrenDiff(level);
          break;
        }
      }

      final DirectoryDiff curDiff = current.getDiff();
      subList.add(childrenDiff == null ? curDiff
          : new DirectoryDiff(curDiff.getSnapshotId(), dir, childrenDiff));

      current = next;
    }
    return subList;
  }
{code}



was (Author: szetszwo):
Thanks for the update.  Some comments on the 007 patch:
- In SkipListNode, similar to getSkipNode, setSkipDiff and setSkipTo should 
take care of resizing so the callers don't have to.  We should also remove the 
addSkipDiff method.
{code}
public void setSkipDiff(ChildrenDiff cDiff, int level) {
  if (level < skipDiffList.size()) {
    skipDiffList.get(level).setDiff(cDiff);
  } else {
    skipDiffList.add(new SkipDiff(null, cDiff));
  }
}

public void setSkipTo(SkipListNode node, int level) {
  for (int i = skipDiffList.size(); i <= level; i++) {
    skipDiffList.add(null);
  }
  skipDiffList.get(level).setSkipTo(node);
}
{code}

- In addFirst, there are two "if (combined != null)".

- We should not directly use ID_INTEGER_COMPARATOR in DirectoryDiffList.
-* In SkipListNode, change compareTo to
{code}
public final int compareTo(final Integer that) {
  return diff.compareTo(that);
}
{code}
-* In getMinListForRange, use
{code}
next.getDiff().compareTo(toSnapshotId) > 0)
{code}


- In getMinListForRange, is level never < 0 unless the list is inconsistent? 
 If so, we can clean up the code as follows.
{code}
  @Override
  public List<DirectoryDiff> getMinListForRange(
      int fromIndex, int toIndex, INodeDirectory dir) {
    final List<DirectoryDiff> subList = new ArrayList<>();
    final DirectoryDiff toDiff = get(toIndex);
    final int toSnapshotId = toDiff.getSnapshotId();

    for (SkipListNode current = getNode(fromIndex); current != null; ) {
      int level = current.level();
      SkipListNode next = current.getSkipNode(level);
      while (next == null || next.getDiff().compareTo(toSnapshotId) > 0) {
        level--;
        next = current.getSkipNode(level);
      }

      DirectoryDiff diff = new DirectoryDiff(current.getDiff().getSnapshotId(),
          dir, current.getChildrenDiff(level));
      subList.add(diff);
      current = next;
    }
    return subList;
  }
{code}


> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an 

[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379419#comment-16379419
 ] 

genericqa commented on HDFS-13081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} root: The patch generated 0 new + 163 unchanged - 2 
fixed = 163 total (was 165) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to isSecure in 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At 
SecureDataNodeStarter.java:org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(Configuration)
  At SecureDataNodeStarter.java:[line 116] |
| Failed junit tests | hadoop.fs.shell.TestCopyFromLocal |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379415#comment-16379415
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13102:


- The ListItr class can be replaced with an anonymous class.
{code}
  @Override
  public Iterator<DirectoryDiff> iterator() {
    return new Iterator<DirectoryDiff>() {
      final Iterator<SkipListNode> i = skipNodeList.iterator();

      @Override
      public boolean hasNext() {
        return i.hasNext();
      }

      @Override
      public DirectoryDiff next() {
        return i.next().getDiff();
      }
    };
  }
{code}


> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will be 
> the cumulative diff of k snapshot diffs, where k is the level of a node in 
> the list.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379404#comment-16379404
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13102:


- In addFirst and addLast, pass nodeLevel to new SkipListNode and remove the 
second SkipListNode constructor.
- Remove the skipTo parameter from the SkipDiff constructor since it is always 
null.

> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will be 
> the cumulative diff of k snapshot diffs, where k is the level of a node in 
> the list.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13102) Implement SnapshotSkipList class to store Multi level DirectoryDiffs

2018-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379394#comment-16379394
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13102:


Thanks for the update.  Some comments on the 007 patch:
- In SkipListNode, similar to getSkipNode, setSkipDiff and setSkipTo should 
take care of resizing so the callers don't have to.  We should also remove the 
addSkipDiff method.
{code}
public void setSkipDiff(ChildrenDiff cDiff, int level) {
  if (level < skipDiffList.size()) {
    skipDiffList.get(level).setDiff(cDiff);
  } else {
    skipDiffList.add(new SkipDiff(null, cDiff));
  }
}

public void setSkipTo(SkipListNode node, int level) {
  for (int i = skipDiffList.size(); i <= level; i++) {
    skipDiffList.add(null);
  }
  skipDiffList.get(level).setSkipTo(node);
}
{code}

- In addFirst, there are two "if (combined != null)".

- We should not directly use ID_INTEGER_COMPARATOR in DirectoryDiffList.
-* In SkipListNode, change compareTo to
{code}
public final int compareTo(final Integer that) {
  return diff.compareTo(that);
}
{code}
-* In getMinListForRange, use
{code}
next.getDiff().compareTo(toSnapshotId) > 0)
{code}


- In getMinListForRange, is level never < 0 unless the list is inconsistent? 
 If so, we can clean up the code as follows.
{code}
  @Override
  public List<DirectoryDiff> getMinListForRange(
      int fromIndex, int toIndex, INodeDirectory dir) {
    final List<DirectoryDiff> subList = new ArrayList<>();
    final DirectoryDiff toDiff = get(toIndex);
    final int toSnapshotId = toDiff.getSnapshotId();

    for (SkipListNode current = getNode(fromIndex); current != null; ) {
      int level = current.level();
      SkipListNode next = current.getSkipNode(level);
      while (next == null || next.getDiff().compareTo(toSnapshotId) > 0) {
        level--;
        next = current.getSkipNode(level);
      }

      DirectoryDiff diff = new DirectoryDiff(current.getDiff().getSnapshotId(),
          dir, current.getChildrenDiff(level));
      subList.add(diff);
      current = next;
    }
    return subList;
  }
{code}


> Implement SnapshotSkipList class to store Multi level DirectoryDiffs
> 
>
> Key: HDFS-13102
> URL: https://issues.apache.org/jira/browse/HDFS-13102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13102.001.patch, HDFS-13102.002.patch, 
> HDFS-13102.003.patch, HDFS-13102.004.patch, HDFS-13102.005.patch, 
> HDFS-13102.006.patch, HDFS-13102.007.patch
>
>
> HDFS-11225 explains an issue where deletion of older snapshots can take a 
> very long time when the number of snapshot diffs is quite large for 
> directories. For any directory under a snapshot, to construct the children 
> list, it needs to combine all the diffs from that particular snapshot to the 
> last snapshotDiff record and reverseApply them to the current children list 
> of the directory on the live fs. This can take significant time if the number 
> of snapshot diffs is large and the changes per diff are significant.
> This Jira proposes to store the directory diffs in a SnapshotSkipList, where 
> we store multi-level DirectoryDiffs. At each level, the DirectoryDiff will be 
> the cumulative diff of k snapshot diffs, where k is the level of a node in 
> the list.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379389#comment-16379389
 ] 

genericqa commented on HDFS-13178:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13178 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912315/HDFS-13178.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4c1f67a9ed33 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac42dfc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23226/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23226/testReport/ |
| Max. process+thread count | 4502 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379320#comment-16379320
 ] 

Ajay Kumar commented on HDFS-13081:
---

[~xyao] Thanks for catching that. Documentation updated in patch v5 to include 
JSVC_HOME as well.

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine if 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the 
>  HTTP server before a client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server 
> will be encrypted. However, checking only whether a SASL property resolver is 
> configured does not guarantee that the server requires encrypted RPC. 
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, 
> which should be fine by design.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  
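
A minimal sketch of the kind of QoP check being proposed, expressed over the {{dfs.data.transfer.protection}} value; the helper name is an assumption for illustration, not the actual patch code.

{code:java}
// Hypothetical sketch: the SASL data-transfer configuration only guarantees
// encryption if the configured QoP list includes privacy (SASL auth-conf).
static boolean saslIncludesPrivacy(String dataTransferProtection) {
  if (dataTransferProtection == null) {
    return false; // no SASL protection configured at all
  }
  for (String qop : dataTransferProtection.split(",")) {
    if ("privacy".equalsIgnoreCase(qop.trim())) {
      return true; // "privacy" maps to the SASL auth-conf QoP
    }
  }
  return false;
}

// e.g. saslIncludesPrivacy("authentication,privacy") returns true
{code}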



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379248#comment-16379248
 ] 

Ajay Kumar edited comment on HDFS-13081 at 2/27/18 9:37 PM:


Patch v4 to update documentation for SASL.

Tested the patch in a single-node secure cluster for SASL. DataNode started 
with a non-privileged RPC port and a privileged HTTP port. hdfs operations 
confirm the SASL QoP.
 dfs.datanode.address = 0.0.0.0:10040 (non-privileged)
 dfs.datanode.http.address = 0.0.0.0:1016 (privileged)
{code:java}
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL client doing general 
handshake for addr = /192.168.7.205, datanodeId = 
DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]
18/02/27 19:40:14 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP 
= [auth-conf], negotiated QOP = auth-conf
18/02/27 19:40:14 DEBUG security.SaslInputStream: Actual length is 22
18/02/27 19:40:14 DEBUG hdfs.DataStreamer: nodes 
[DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]]
 storageTypes [DISK] storageIDs [DS-aa5225d7-f60a-4c2d-b780-119fc1d60879]
{code}


was (Author: ajayydv):
Patch v4 to update documentation for SASL as following:

Tested the patch in a single-node secure cluster for SASL. DataNode started 
with a non-privileged RPC port and a privileged HTTP port. hdfs operations 
confirm the SASL QoP.
 dfs.datanode.address = 0.0.0.0:10040 (non-privileged)
 dfs.datanode.http.address = 0.0.0.0:1016 (privileged)
{code:java}
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL client doing general 
handshake for addr = /192.168.7.205, datanodeId = 
DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]
18/02/27 19:40:14 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP 
= [auth-conf], negotiated QOP = auth-conf
18/02/27 19:40:14 DEBUG security.SaslInputStream: Actual length is 22
18/02/27 19:40:14 DEBUG hdfs.DataStreamer: nodes 
[DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]]
 storageTypes [DISK] storageIDs [DS-aa5225d7-f60a-4c2d-b780-119fc1d60879]
{code}

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine if 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the 
>  HTTP server before a client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server 
> will be encrypted. However, checking only whether a SASL property resolver is 
> configured does not guarantee that the server requires encrypted RPC. 
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, 
> which should be fine by design.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379314#comment-16379314
 ] 

genericqa commented on HDFS-13055:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1263 unchanged - 2 fixed = 1264 total (was 1265) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.federation.router.TestRouterSafemode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912309/HDFS-13055.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 0a74c44d3b23 4.4.0-64-generic 

[jira] [Updated] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13081:
--
Attachment: HDFS-13081.005.patch

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine if 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the 
>  HTTP server before a client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between REST client/server 
> will be encrypted. However, checking only whether a SASL property resolver is 
> configured does not guarantee that the server requires encrypted RPC. 
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, 
> which should be fine by design.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13114) CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path

2018-02-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379311#comment-16379311
 ] 

Xiaoyu Yao commented on HDFS-13114:
---

[~hanishakoneru], does the generic -fs option work as expected for 
ListZonesCommand  and ListReencryptionStatusCommand? Also, does it work for 
ReencryptZoneCommand without the fix?

> CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path
> 
>
> Key: HDFS-13114
> URL: https://issues.apache.org/jira/browse/HDFS-13114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13114.001.patch
>
>
> The {{crypto -reencryptZone <action> -path <zone>}} command takes in a path 
> argument. But when creating the {{HdfsAdmin}} object, it uses the defaultFs 
> instead of resolving it from the path. This causes the following exception if 
> the authority component in the path does not match the authority of the 
> default Fs.
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://mycluster-node-1:8020/zone1
> IllegalArgumentException: Wrong FS: hdfs://mycluster-node-1:8020/zone1, 
> expected: hdfs://ns1{code}
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://ns2/zone2
> IllegalArgumentException: Wrong FS: hdfs://ns2/zone2, expected: 
> hdfs://ns1{code}
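
A minimal sketch of the kind of fix implied here, resolving the namespace from the path argument rather than the default FS; adminFor is an illustrative helper, not the actual patch code.

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

// Hypothetical sketch: build HdfsAdmin from the URI of the supplied path,
// falling back to the default FS only when the path carries no authority.
static HdfsAdmin adminFor(String pathArg, Configuration conf) throws IOException {
  URI uri = new Path(pathArg).toUri();
  URI fsUri = (uri.getScheme() != null && uri.getAuthority() != null)
      ? URI.create(uri.getScheme() + "://" + uri.getAuthority())
      : FileSystem.getDefaultUri(conf);
  return new HdfsAdmin(fsUri, conf);
}
{code}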



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13114) CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path

2018-02-27 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379298#comment-16379298
 ] 

Hanisha Koneru commented on HDFS-13114:
---

Thanks for the review, [~xyao].

{{ListZonesCommand#run}} and {{ListReencryptionStatusCommand#run}} do not have 
path parameters, so we have to fall back to the defaultUri only. For these two 
commands, we would need to utilize the generic -fs option to specify the 
nameservice.

> CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path
> 
>
> Key: HDFS-13114
> URL: https://issues.apache.org/jira/browse/HDFS-13114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13114.001.patch
>
>
> The {{crypto -reencryptZone <action> -path <zone>}} command takes in a path 
> argument. But when creating the {{HdfsAdmin}} object, it uses the defaultFs 
> instead of resolving it from the path. This causes the following exception if 
> the authority component in the path does not match the authority of the 
> default Fs.
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://mycluster-node-1:8020/zone1
> IllegalArgumentException: Wrong FS: hdfs://mycluster-node-1:8020/zone1, 
> expected: hdfs://ns1{code}
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://ns2/zone2
> IllegalArgumentException: Wrong FS: hdfs://ns2/zone2, expected: 
> hdfs://ns1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379297#comment-16379297
 ] 

Xiaoyu Yao commented on HDFS-13081:
---

Thanks [~ajayydv] for the update. One last ask for the document:

{code}
Set `dfs.http.policy` to `HTTPS_ONLY` or set `dfs.datanode.http.address` to a 
privileged port and make sure the `HDFS_DATANODE_SECURE_USER` environment 
variable is defined.

===>

Set `dfs.http.policy` to `HTTPS_ONLY` or set `dfs.datanode.http.address` to a 
privileged port and make sure the `HDFS_DATANODE_SECURE_USER` and `JSVC_HOME` 
are specified properly as environment variables on start up (in `hadoop-env.sh`)
{code}

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-02-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13109:
--
Attachment: HDFS-13109.004.patch

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved, and the above error is thrown.
> A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-02-27 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379287#comment-16379287
 ] 

Hanisha Koneru commented on HDFS-13109:
---

Thanks for the review, [~xyao].

bq. You have already resolved the path in the calling function public void 
provisionEZTrash. You can just pass the resolved path to the private method 
provisionEZTrash instead of getPathName.
[~shahrs87], we would have to call {{getPathName()}} as the 
{{FileSystemLinkResolver.resolve}} function in the calling {{public void 
provisionEZTrash}} doesn't verify that the path belongs to the correct 
filesystem. Please let me know if I am missing something here.

I have reverted {{p.toUri().getPath()}} to {{getPathName(p)}} in patch v04.
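
For readers following the thread, a simplified sketch of the distinction (this 
only approximates, and is not, the actual {{DistributedFileSystem#getPathName}} 
code):

{code:java}
// Hedged sketch: getPathName() rejects a path whose authority does not match
// this filesystem before reducing it to a raw path, whereas
// p.toUri().getPath() alone would silently accept a wrong-FS path.
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathNameSketch {
  static String pathName(FileSystem fs, Path p) {
    URI u = p.toUri();
    URI expected = fs.getUri();
    if (u.getScheme() != null && u.getAuthority() != null
        && !u.getAuthority().equalsIgnoreCase(expected.getAuthority())) {
      throw new IllegalArgumentException(
          "Wrong FS: " + p + ", expected: " + expected);
    }
    return u.getPath(); // e.g. hdfs://ns1/zone1 -> /zone1
  }
}
{code}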



> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved, and the above error is thrown.
> A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379248#comment-16379248
 ] 

Ajay Kumar edited comment on HDFS-13081 at 2/27/18 8:46 PM:


Patch v4 updates the documentation for SASL as follows:

Tested the patch in a single-node secure cluster for SASL. The DataNode started 
with a non-privileged RPC port and a privileged HTTP port; HDFS operations 
confirm the SASL QOP.
 dfs.datanode.address = 0.0.0.0:10040 (non-privileged)
 dfs.datanode.http.address = 0.0.0.0:1016 (privileged)
{code:java}
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL client doing general 
handshake for addr = /192.168.7.205, datanodeId = 
DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]
18/02/27 19:40:14 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP 
= [auth-conf], negotiated QOP = auth-conf
18/02/27 19:40:14 DEBUG security.SaslInputStream: Actual length is 22
18/02/27 19:40:14 DEBUG hdfs.DataStreamer: nodes 
[DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]]
 storageTypes [DISK] storageIDs [DS-aa5225d7-f60a-4c2d-b780-119fc1d60879]
{code}
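
For reference, a hedged sketch of the configuration this test exercises 
(illustrative only, not part of the patch; the addresses mirror the setup above, 
and {{dfs.data.transfer.protection=privacy}} requests the auth-conf QOP seen in 
the debug log):

{code:java}
// Hedged sketch only; the real cluster config lives in hdfs-site.xml.
import org.apache.hadoop.conf.Configuration;

public class SaslQopConfSketch {
  static Configuration secureSaslConf() {
    Configuration conf = new Configuration();
    conf.set("dfs.data.transfer.protection", "privacy"); // request auth-conf
    conf.set("dfs.datanode.address", "0.0.0.0:10040");   // non-privileged RPC
    conf.set("dfs.datanode.http.address", "0.0.0.0:1016"); // privileged HTTP
    conf.set("dfs.http.policy", "HTTPS_ONLY");
    return conf;
  }
}
{code}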


was (Author: ajayydv):
Patch v4 to update documentation for SASL as following:


Tested patch in single node secure cluster for SASL. DataNode started with 
non-privileged rpc port and privileged http port. Http operations confirms SASL 
qop.
dfs.datanode.address = 0.0.0.0:10040 (non-privileged)
dfs.datanode.http.address = 0.0.0.0:1016 (privileged)
{code}
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL client doing general 
handshake for addr = /192.168.7.205, datanodeId = 
DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]
18/02/27 19:40:14 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP 
= [auth-conf], negotiated QOP = auth-conf
18/02/27 19:40:14 DEBUG security.SaslInputStream: Actual length is 22
18/02/27 19:40:14 DEBUG hdfs.DataStreamer: nodes 
[DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]]
 storageTypes [DISK] storageIDs [DS-aa5225d7-f60a-4c2d-b780-119fc1d60879]
{code}

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379248#comment-16379248
 ] 

Ajay Kumar commented on HDFS-13081:
---

Patch v4 updates the documentation for SASL as follows:

Tested the patch in a single-node secure cluster for SASL. The DataNode started 
with a non-privileged RPC port and a privileged HTTP port; HTTP operations 
confirm the SASL QOP.
dfs.datanode.address = 0.0.0.0:10040 (non-privileged)
dfs.datanode.http.address = 0.0.0.0:1016 (privileged)
{code}
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
18/02/27 19:40:14 DEBUG sasl.SaslDataTransferClient: SASL client doing general 
handshake for addr = /192.168.7.205, datanodeId = 
DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]
18/02/27 19:40:14 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP 
= [auth-conf], negotiated QOP = auth-conf
18/02/27 19:40:14 DEBUG security.SaslInputStream: Actual length is 22
18/02/27 19:40:14 DEBUG hdfs.DataStreamer: nodes 
[DatanodeInfoWithStorage[192.168.7.205:10040,DS-aa5225d7-f60a-4c2d-b780-119fc1d60879,DISK]]
 storageTypes [DISK] storageIDs [DS-aa5225d7-f60a-4c2d-b780-119fc1d60879]
{code}

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13178:
--
Attachment: HDFS-13178.02.patch

> Disk Balancer: Add a force option to DiskBalancer Execute command
> -
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch, 
> HDFS-13178.02.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to skip 
> the date check and force-execute the plan.
> This is one of the TODOs for the DiskBalancer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13081:
--
Attachment: HDFS-13081.004.patch

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379149#comment-16379149
 ] 

Bharat Viswanadham commented on HDFS-13178:
---

[~arpitagarwal] [~anu]

Thank you for the review.

Addressed your review comments in patch v01.

> Disk Balancer: Add a force option to DiskBalancer Execute command
> -
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to skip 
> the date check and force-execute the plan.
> This is one of the TODOs for the DiskBalancer.
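
A hedged sketch of the date check the force option described above would bypass 
(the logic is inferred from the issue text; the method shape and names are 
illustrative, not from the patch):

{code:java}
// Hedged sketch: reject a stale plan unless force is set.
import java.util.concurrent.TimeUnit;

public class PlanAgeCheckSketch {
  static void verifyPlanAge(long planCreatedMs, long validPlanHours,
      boolean force) {
    long ageMs = System.currentTimeMillis() - planCreatedMs;
    if (!force && ageMs > TimeUnit.HOURS.toMillis(validPlanHours)) {
      throw new IllegalArgumentException("Plan is " + ageMs
          + " ms old; regenerate the plan or rerun with the force option");
    }
  }
}
{code}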



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13178) Disk Balancer: Add a force option to DiskBalancer Execute command

2018-02-27 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13178:
--
Attachment: HDFS-13178.01.patch

> Disk Balancer: Add a force option to DiskBalancer Execute command
> -
>
> Key: HDFS-13178
> URL: https://issues.apache.org/jira/browse/HDFS-13178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13178.00.patch, HDFS-13178.01.patch
>
>
>  
> Add a force option to the DiskBalancer execute command, which is used to skip 
> the date check and force-execute the plan.
> This is one of the TODOs for the DiskBalancer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-02-27 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379145#comment-16379145
 ] 

Wei Yan commented on HDFS-13198:


Sure, will post a patch here.

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized yet and cannot serve requests. Although the Router will still 
> start, it would be better to fix these exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-02-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379144#comment-16379144
 ] 

Íñigo Goiri commented on HDFS-13198:


I was able to repro it in our internal branch by providing the wrong ZooKeeper 
address.
It should be pretty easy to create a unit test for the RouterHeartbeatService 
with an invalid State Store.
In any case, I have no fix for this; [~ywskycn], do you want to give it a try?
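
A hedged sketch of the kind of guard such a fix might add ({{StateStoreCheck}} 
below is a hypothetical stand-in, not the actual StateStoreService API):

{code:java}
// Hedged sketch: skip the heartbeat instead of hitting the NPE in
// StateStoreSerializableImpl.serialize when the cached State Store is not
// yet initialized.
interface StateStoreCheck {
  boolean isDriverReady();
}

class HeartbeatGuardSketch {
  private final StateStoreCheck stateStore;

  HeartbeatGuardSketch(StateStoreCheck stateStore) {
    this.stateStore = stateStore;
  }

  void updateStateStore() {
    if (stateStore == null || !stateStore.isDriverReady()) {
      System.err.println("State Store not ready; skipping router heartbeat");
      return;
    }
    // ... perform the router heartbeat as before ...
  }
}
{code}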

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized yet and cannot serve requests. Although the Router will still 
> start, it would be better to fix these exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13181) DiskBalancer: Add a configuration for valid plan hours

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379121#comment-16379121
 ] 

Hudson commented on HDFS-13181:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13728 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13728/])
HDFS-13181. DiskBalancer: Add a configuration for valid plan hours. (arp: rev 
1cc9a58ddad8a02db0ec5a014f9de417eec1b8dd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerConstants.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
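
Judging from the edited files (hdfs-default.xml, DFSConfigKeys.java), the change 
boils down to reading the validity window from configuration; a hedged sketch, 
where the key name and default below are assumptions rather than the committed 
constant:

{code:java}
// Hedged sketch only; the actual key constant lives in DFSConfigKeys.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class ValidPlanHoursSketch {
  static long validPlanMillis(Configuration conf) {
    // Previously a hard-coded 24 hours; now configurable.
    long hours = conf.getLong("dfs.disk.balancer.plan.valid.hours", 24);
    return TimeUnit.HOURS.toMillis(hours);
  }
}
{code}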


> DiskBalancer: Add a configuration for valid plan hours 
> 
>
> Key: HDFS-13181
> URL: https://issues.apache.org/jira/browse/HDFS-13181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-13181.00.patch, HDFS-13181.01.patch, 
> HDFS-13181.02.patch, HDFS-13181.03.patch, HDFS-13181.04.patch, 
> HDFS-13181.05.patch
>
>
> Add a configuration for valid plan hours, instead of the constant 24 hours in 
> the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379111#comment-16379111
 ] 

Ajay Kumar commented on HDFS-13081:
---

Hi [~xyao], patch v3 includes the following two changes:
* Changes to SecureMode.md
* SecureDataNodeStarter: L177

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13081) Datanode#checkSecureConfig should check HTTPS and SASL encryption

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13081:
--
Attachment: HDFS-13081.003.patch

> Datanode#checkSecureConfig should check HTTPS and SASL encryption
> -
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled:
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. The SASL handshake guarantees authentication of 
> the RPC server before a client transmits a secret, such as a block access 
> token. Similarly, SSL guarantees authentication of the HTTP server before a 
> client transmits a secret, such as a delegation token.
> For the 2nd case, HTTPS_ONLY means all the traffic between the REST client and 
> server will be encrypted. However, checking only that a SASL property resolver 
> is configured does not guarantee that the server requires an encrypted RPC.
> This ticket is open to further check and ensure the datanode SASL property 
> resolver has a QoP that includes auth-conf (PRIVACY). Note that the SASL QoP 
> (Quality of Protection) negotiation may drop the RPC protection level from 
> auth-conf (PRIVACY) to auth-int (integrity) or auth (authentication) only, which 
> should be fine by design.
>  
> cc: [~cnauroth], [~daryn], [~jnpandey] for additional feedback.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container

2018-02-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379105#comment-16379105
 ] 

genericqa commented on HDFS-12794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
0 unchanged - 1 fixed = 7 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}167m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12794 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379102#comment-16379102
 ] 

Arpit Agarwal commented on HDFS-13089:
--

+1 for the change once it's rebased, with one minor comment: let's increase the 
test timeout from 1 minute to 10 minutes to avoid spurious failures.
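
Concretely, that suggestion amounts to something like the following (the class 
and test names are illustrative, not from the patch):

{code:java}
import org.junit.Test;

public class TestMoveAcrossVolumesSketch {
  // Hedged sketch: raise the JUnit timeout from 60_000 ms (1 minute) to
  // 600_000 ms (10 minutes) to avoid spurious failures on slow build nodes.
  @Test(timeout = 600_000)
  public void testDfsUsedWhenBlocksMovedAcrossVolumes() throws Exception {
    // ... the actual test body from HDFS-13089.000.patch goes here ...
  }
}
{code}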

> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes
> ---
>
> Key: HDFS-13089
> URL: https://issues.apache.org/jira/browse/HDFS-13089
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13089.000.patch
>
>
> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13089) Add test to validate dfs used and no of blocks when blocks are moved across volumes

2018-02-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379100#comment-16379100
 ] 

Arpit Agarwal commented on HDFS-13089:
--

Hi [~ajayydv], looks like this patch needs to be rebased to current trunk.

> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes
> ---
>
> Key: HDFS-13089
> URL: https://issues.apache.org/jira/browse/HDFS-13089
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13089.000.patch
>
>
> Add test to validate dfs used and no of blocks when blocks are moved across 
> volumes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13181) DiskBalancer: Add a configuration for valid plan hours

2018-02-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13181:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

+1 I've committed this. Thanks [~bharatviswa].

The Jenkins UT failures are unrelated.

> DiskBalancer: Add a configuration for valid plan hours 
> 
>
> Key: HDFS-13181
> URL: https://issues.apache.org/jira/browse/HDFS-13181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-13181.00.patch, HDFS-13181.01.patch, 
> HDFS-13181.02.patch, HDFS-13181.03.patch, HDFS-13181.04.patch, 
> HDFS-13181.05.patch
>
>
> Add a configuration for valid plan hours, instead of the constant 24 hours in 
> the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-02-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379046#comment-16379046
 ] 

Ajay Kumar commented on HDFS-13055:
---

Patch v5 addresses the checkstyle issues and fixes the broken test in 
{{TestDatanodeManager}}. {{TestAclsEndToEnd}} is failing irrespective of the patch.
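
As a hedged illustration of the aggregation idea in the quoted description 
below (the names here are illustrative; the patch's actual classes differ):

{code:java}
// Hedged sketch: fold counters reported by DataNodes over a time window into
// one aggregate that a JMX bean could then expose.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class UsageAggregatorSketch {
  private final Map<String, LongAdder> totals = new ConcurrentHashMap<>();

  // e.g. report("bytesRead", 4096) when a DataNode report arrives.
  void report(String metric, long delta) {
    totals.computeIfAbsent(metric, k -> new LongAdder()).add(delta);
  }

  long get(String metric) {
    LongAdder sum = totals.get(metric);
    return sum == null ? 0L : sum.sum();
  }
}
{code}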

> Aggregate usage statistics from datanodes
> -
>
> Key: HDFS-13055
> URL: https://issues.apache.org/jira/browse/HDFS-13055
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13055.001.patch, HDFS-13055.002.patch, 
> HDFS-13055.003.patch, HDFS-13055.004.patch, HDFS-13055.005.patch
>
>
> We collect a variety of statistics in DataNodes and expose them via JMX. 
> Aggregating some of the high-level statistics we are already collecting 
> in {{DataNodeMetrics}} (like bytesRead, bytesWritten, etc.) over a configurable 
> time window will create a central repository accessible via JMX and the UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13055) Aggregate usage statistics from datanodes

2018-02-27 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13055:
--
Attachment: HDFS-13055.005.patch

> Aggregate usage statistics from datanodes
> -
>
> Key: HDFS-13055
> URL: https://issues.apache.org/jira/browse/HDFS-13055
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13055.001.patch, HDFS-13055.002.patch, 
> HDFS-13055.003.patch, HDFS-13055.004.patch, HDFS-13055.005.patch
>
>
> We collect a variety of statistics in DataNodes and expose them via JMX. 
> Aggregating some of the high-level statistics we are already collecting 
> in {{DataNodeMetrics}} (like bytesRead, bytesWritten, etc.) over a configurable 
> time window will create a central repository accessible via JMX and the UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379036#comment-16379036
 ] 

Íñigo Goiri commented on HDFS-13199:


Found it; this was changed in HDFS-9357.
It seems to apply everywhere, so I am committing to 2.9.1, 2.10, 3.0.2, 3.1.0, 
and 3.2.0.
[~maobaolong], for the commit message, do you want to appear as maobaolong in 
lowercase?


> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: "decommisioned" should be "decommissioned".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12965) Ozone: Documentation : Add ksm -createObjectStore command documentation.

2018-02-27 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379031#comment-16379031
 ] 

Nanda kumar commented on HDFS-12965:


I have committed this to the feature branch. Thanks [~shashikant] for the 
contribution.

> Ozone: Documentation : Add ksm -createObjectStore command documentation.
> 
>
> Key: HDFS-12965
> URL: https://issues.apache.org/jira/browse/HDFS-12965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12965-HDFS-7240.001.patch
>
>
> The ksm -createObjectStore command, once executed, gets the cluster id and scm 
> id from the running SCM instance and persists them locally. Once KSM starts, it 
> verifies that the SCM instance it's connecting to has the same cluster id 
> and scm id as present in the KSM version file, and fails if the info 
> does not match.
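
A hedged sketch of that startup verification (the method and names are 
illustrative, not the KSM code):

{code:java}
// Hedged sketch: fail startup when the persisted ids do not match the SCM.
public class VersionCheckSketch {
  static void verify(String localClusterId, String localScmId,
      String scmClusterId, String scmId) {
    if (!localClusterId.equals(scmClusterId) || !localScmId.equals(scmId)) {
      throw new IllegalStateException(
          "KSM version file does not match SCM: clusterId=" + scmClusterId
              + ", scmId=" + scmId);
    }
  }
}
{code}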



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12965) Ozone: Documentation : Add ksm -createObjectStore command documentation.

2018-02-27 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12965:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone: Documentation : Add ksm -createObjectStore command documentation.
> 
>
> Key: HDFS-12965
> URL: https://issues.apache.org/jira/browse/HDFS-12965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12965-HDFS-7240.001.patch
>
>
> The ksm -createObjectStore command, once executed, gets the cluster id and scm 
> id from the running SCM instance and persists them locally. Once KSM starts, it 
> verifies that the SCM instance it's connecting to has the same cluster id 
> and scm id as present in the KSM version file, and fails if the info 
> does not match.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13199:
---
Tags: RBF

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: "decommisioned" should be "decommissioned".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12965) Ozone: Documentation : Add ksm -createObjectStore command documentation.

2018-02-27 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379025#comment-16379025
 ] 

Nanda kumar commented on HDFS-12965:


+1, LGTM. I will commit this shortly.

> Ozone: Documentation : Add ksm -createObjectStore command documentation.
> 
>
> Key: HDFS-12965
> URL: https://issues.apache.org/jira/browse/HDFS-12965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12965-HDFS-7240.001.patch
>
>
> The ksm -createObjectStore command, once executed, gets the cluster id and scm 
> id from the running SCM instance and persists them locally. Once KSM starts, it 
> verifies that the SCM instance it's connecting to has the same cluster id 
> and scm id as present in the KSM version file, and fails if the info 
> does not match.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379024#comment-16379024
 ] 

Íñigo Goiri commented on HDFS-13199:


Thanks [~maobaolong], I added you to the list of contributors and assigned it 
to you.
[^HDFS-13199.001.patch] looks good.
I remember seeing the patch where this was changed; I'm trying to find it and 
link it here.

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: "decommisioned" should be "decommissioned".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-02-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13199:
--

Assignee: maobaolong

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo: "decommisioned" should be "decommissioned".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11699) Ozone:SCM: Add support for close containers in SCM

2018-02-27 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379021#comment-16379021
 ] 

Nanda kumar commented on HDFS-11699:


I have committed this to the feature branch. Thanks [~anu] for the contribution.

> Ozone:SCM: Add support for close containers in SCM
> --
>
> Key: HDFS-11699
> URL: https://issues.apache.org/jira/browse/HDFS-11699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-11699-HDFS-7240.001.patch, 
> HDFS-11699-HDFS-7240.002.patch, HDFS-11699-HDFS-7240.003.patch, 
> HDFS-11699-HDFS-7240.004.patch
>
>
> Add support for closed containers in SCM. When a container is closed, SCM 
> needs to make a set of decisions, such as which pool and which machines are 
> expected to hold this container. SCM also needs to issue a copyContainer 
> command to the target datanodes so that these nodes can replicate data from 
> the original locations.
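
A hedged sketch of that flow (the types and method names below are illustrative 
stand-ins, not the actual SCM API):

{code:java}
// Hedged sketch: on container close, pick the expected replica targets and
// issue copy commands so they replicate from the original locations.
import java.util.List;

interface PlacementSketch {
  List<String> expectedDatanodes(String containerName);
}

interface CommandQueueSketch {
  void sendCopyContainer(String datanode, String containerName,
      List<String> sources);
}

class CloseContainerSketch {
  private final PlacementSketch placement;
  private final CommandQueueSketch commands;

  CloseContainerSketch(PlacementSketch placement, CommandQueueSketch commands) {
    this.placement = placement;
    this.commands = commands;
  }

  void onContainerClosed(String containerName, List<String> currentLocations) {
    for (String target : placement.expectedDatanodes(containerName)) {
      if (!currentLocations.contains(target)) {
        commands.sendCopyContainer(target, containerName, currentLocations);
      }
    }
  }
}
{code}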



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11699) Ozone:SCM: Add support for close containers in SCM

2018-02-27 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-11699:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> Ozone:SCM: Add support for close containers in SCM
> --
>
> Key: HDFS-11699
> URL: https://issues.apache.org/jira/browse/HDFS-11699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-11699-HDFS-7240.001.patch, 
> HDFS-11699-HDFS-7240.002.patch, HDFS-11699-HDFS-7240.003.patch, 
> HDFS-11699-HDFS-7240.004.patch
>
>
> Add support for closed containers in SCM. When a container is closed, SCM 
> needs to make a set of decisions, such as which pool and which machines are 
> expected to hold this container. SCM also needs to issue a copyContainer 
> command to the target datanodes so that these nodes can replicate data from 
> the original locations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


