[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-02-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865283#comment-15865283
 ] 

Weiwei Yang commented on HDFS-6874:
---

Hi [~andrew.wang]

I'd appreciate it if you could take a look at this patch; it's been a while now...
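
For context, a rough sketch of the direction such a change could take, following 
the command pattern HttpFSServer uses for its other operations (the 
{{FSOperations.FSFileBlockLocations}} command and the JSON handling below are 
illustrative placeholders, not necessarily what the attached patch does):

{code}
case GETFILEBLOCKLOCATIONS: {
  // Read the range parameters the same way the OPEN operation does.
  Long offset = params.get(OffsetParam.NAME, OffsetParam.class);
  Long len = params.get(LenParam.NAME, LenParam.class);
  // Hypothetical command: ask the backing FileSystem for the block
  // locations and serialize them to a JSON map, mirroring WebHDFS.
  FSOperations.FSFileBlockLocations command =
      new FSOperations.FSFileBlockLocations(path, offset, len);
  Map json = fsExecute(user, command);
  response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
  break;
}
{code}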

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, although it is 
> already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
>  






[jira] [Updated] (HDFS-11410) Use the cache when edit logging XAttrOps

2017-02-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11410:
-
Attachment: HDFS-11410.01.patch

Simple patch 1 attached; the changes are trivial.

Pinged HDFS-6301, one of the initial JIRAs, but no response there so far. Very 
likely this was just a bug or an accident, as Andrew said.
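
For reference, a minimal sketch of the cached form ({{OP_SET_XATTR}} is used by 
analogy with the {{AddOp}} snippet quoted below; the attached patch may differ 
in details):

{code}
static SetXAttrOp getInstance(OpInstanceCache cache) {
  // Reuse the pooled op instead of allocating a new SetXAttrOp per edit.
  return (SetXAttrOp) cache.get(OP_SET_XATTR);
}
{code}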

> Use the cache when edit logging XAttrOps
> 
>
> Key: HDFS-11410
> URL: https://issues.apache.org/jira/browse/HDFS-11410
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-11410.01.patch
>
>
> [~andrew.wang] recently had a comment on HDFS-10899:
> {quote}
> Looks like we aren't using the op cache in FSEditLog SetXAttrOp / 
> RemoveXAttrOp. I think this is accidental, could you do some research? 
> Particularly since we'll be doing a lot of SetXAttrOps, avoiding all that 
> object allocation would be nice. This could be a separate JIRA.
> {quote}
> i.e. 
> {code}
> static SetXAttrOp getInstance() {
>   return new SetXAttrOp();
> }
> {code}
> v.s.
> {code}
> static AddOp getInstance(OpInstanceCache cache) {
>   return (AddOp) cache.get(OP_ADD);
> }
> {code}
> It seems we should fix these non-caching usages.






[jira] [Commented] (HDFS-11411) Avoid OutOfMemoryError in TestMaintenanceState test runs

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865278#comment-15865278
 ] 

Hadoop QA commented on HDFS-11411:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11411 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852497/HDFS-11411.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cb0298184640 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18366/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18366/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid OutOfMemoryError in TestMaintenanceState test runs
> 
>
> Key: HDFS-11411
> URL: https://issues.apache.org/jira/browse/HDFS-11411
> Project: Hadoop HDFS
>  

[jira] [Commented] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865275#comment-15865275
 ] 

Hadoop QA commented on HDFS-11084:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852491/HDFS-11084.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 154b22ee0c96 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18364/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18364/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18364/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  

[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865223#comment-15865223
 ] 

Hadoop QA commented on HDFS-11412:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.tools.TestGetGroups |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestSetTimes |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
|   | hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
|   | hadoop.hdfs.TestRollingUpgradeDowngrade |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
|   | hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | 

[jira] [Commented] (HDFS-11408) The config name of balance bandwidth is out of date

2017-02-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865151#comment-15865151
 ] 

Akira Ajisaka commented on HDFS-11408:
--

+1 pending Jenkins.

> The config name of balance bandwidth is out of date
> ---
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The balance bandwidth config {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.
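
For reference, this rename is the kind of mapping {{Configuration}}'s 
deprecation table handles; a minimal sketch of how such a mapping is declared 
(illustrative, following the pattern used in {{HdfsConfiguration}}):

{code}
Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
    // old key -> new key; reads of the old key resolve to the new one
    new Configuration.DeprecationDelta("dfs.balance.bandwidthPerSec",
        "dfs.datanode.balance.bandwidthPerSec")
});
{code}

The documentation and code comments should reference only 
{{dfs.datanode.balance.bandwidthPerSec}}.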






[jira] [Updated] (HDFS-11411) Avoid OutOfMemoryError in TestMaintenanceState test runs

2017-02-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11411:
--
Status: Patch Available  (was: Open)

> Avoid OutOfMemoryError in TestMaintenanceState test runs
> 
>
> Key: HDFS-11411
> URL: https://issues.apache.org/jira/browse/HDFS-11411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11411.01.patch
>
>
> TestMaintenanceState test runs are hitting OutOfMemoryError issues quite 
> frequently now. We need to fix the tests that consume lots of memory/threads.
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.TestMaintenanceState
> Tests run: 21, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 219.479 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.Te
> testTransitionFromDecommissioned(org.apache.hadoop.hdfs.TestMaintenanceState) 
>  Time elapsed: 0.64 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testTakeDeadNodeOutOfMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState) 
>  Time elapsed: 0.031 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testWithNNAndDNRestart(org.apache.hadoop.hdfs.TestMaintenanceState)  Time 
> elapsed: 0.03 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testMultipleNodesMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState)  
> Time elapsed: 60.127 sec  <<< ERROR!
> java.io.IOException: Problem starting http server
> Results :
> Tests in error: 
>   
> TestMaintenanceState.testTransitionFromDecommissioned:225->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
>   
> TestMaintenanceState.testTakeDeadNodeOutOfMaintenance:636->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
>   
> TestMaintenanceState.testWithNNAndDNRestart:692->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.startCluste
>   
> TestMaintenanceState.testMultipleNodesMaintenance:532->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.start
> {noformat}






[jira] [Updated] (HDFS-11411) Avoid OutOfMemoryError in TestMaintenanceState test runs

2017-02-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11411:
--
Attachment: HDFS-11411.01.patch


The following unit tests call {{startCluster}} repeatedly without ever shutting 
the cluster down, so the number of threads in a single JVM shot up and consumed 
a lot of memory. Made these tests explicitly invoke teardown when running 
multiple cases in the same test (see the sketch after the list):
-- {{testExpectedReplications}}
-- {{testDecommissionDifferentNodeAfterMaintenances}}
-- {{testChangeReplicationFactors}}
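
A minimal sketch of the pattern, assuming JUnit and the {{AdminStatesBaseTest}} 
helpers seen in the stack traces ({{startCluster}} and {{teardown}} are the 
helpers named above; the loop body is illustrative):

{code}
@Test
public void testChangeReplicationFactors() throws Exception {
  for (short replFactor : new short[] {1, 2, 3}) {
    startCluster(1, 3);   // fresh MiniDFSCluster: 1 NameNode, 3 DataNodes
    try {
      // ... exercise maintenance state with replFactor ...
    } finally {
      teardown();         // release cluster threads before the next case
    }
  }
}
{code}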
[~eddyxu], can you please take a look at the patch?

> Avoid OutOfMemoryError in TestMaintenanceState test runs
> 
>
> Key: HDFS-11411
> URL: https://issues.apache.org/jira/browse/HDFS-11411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11411.01.patch
>
>
> TestMaintenanceState test runs are hitting OutOfMemoryError issues quite 
> frequently now. We need to fix the tests that consume lots of memory/threads.
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.TestMaintenanceState
> Tests run: 21, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 219.479 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.Te
> testTransitionFromDecommissioned(org.apache.hadoop.hdfs.TestMaintenanceState) 
>  Time elapsed: 0.64 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testTakeDeadNodeOutOfMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState) 
>  Time elapsed: 0.031 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testWithNNAndDNRestart(org.apache.hadoop.hdfs.TestMaintenanceState)  Time 
> elapsed: 0.03 sec  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> testMultipleNodesMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState)  
> Time elapsed: 60.127 sec  <<< ERROR!
> java.io.IOException: Problem starting http server
> Results :
> Tests in error: 
>   
> TestMaintenanceState.testTransitionFromDecommissioned:225->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
>   
> TestMaintenanceState.testTakeDeadNodeOutOfMaintenance:636->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
>   
> TestMaintenanceState.testWithNNAndDNRestart:692->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.startCluste
>   
> TestMaintenanceState.testMultipleNodesMaintenance:532->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.start
> {noformat}






[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-02-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11412:
--
Status: Patch Available  (was: Open)

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance may wish to set Maintenance Min Replication 
> greater than 1, say 2. In the current design this is possible, but only 
> after raising the NameNode-level Block Min Replication to 2 as well, which 
> could slow down client writes overall.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * A config value of 0 remains available for users who do not need any 
> availability/performance during maintenance.
> * Performance-centric workloads can still get maintenance done without 
> major disruption by using a bigger Maintenance Min Replication. Setting the 
> upper limit to dfs.replication.max could be overkill, though, as it could 
> trigger the re-replication that Maintenance State is trying to avoid. So we 
> could allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 
> to dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-02-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11412:
--
Attachment: HDFS-11412.01.patch

[~mingma], [~eddyxu], attached a v01 patch to address the following:
* make the maintenance minimum replication config value range less restrictive 
(a sketch follows this list)
* add a couple of unit tests in TestMaintenanceState that trigger 
re-replication during maintenance state and validate the config
Please let me know your comments.
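
A minimal sketch of the relaxed check (variable names follow the snippet in the 
description below; using {{DFS_REPLICATION_KEY}} for the default replication is 
an assumption about the patch):

{code}
final short defaultR = (short) conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
    DFSConfigKeys.DFS_REPLICATION_DEFAULT);
if (minMaintenanceR < 0 || minMaintenanceR > defaultR) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + ", must be in the range 0 to "
      + DFSConfigKeys.DFS_REPLICATION_KEY + " = " + defaultR);
}
{code}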

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance may wish to set Maintenance Min Replication 
> greater than 1, say 2. In the current design this is possible, but only 
> after raising the NameNode-level Block Min Replication to 2 as well, which 
> could slow down client writes overall.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * A config value of 0 remains available for users who do not need any 
> availability/performance during maintenance.
> * Performance-centric workloads can still get maintenance done without 
> major disruption by using a bigger Maintenance Min Replication. Setting the 
> upper limit to dfs.replication.max could be overkill, though, as it could 
> trigger the re-replication that Maintenance State is trying to avoid. So we 
> could allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 
> to dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Commented] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865095#comment-15865095
 ] 

Akira Ajisaka commented on HDFS-11407:
--

+1, thanks Yiqun.

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. The number of oiv processors has actually grown to five; 
> the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users learn how 
> to use this tool.






[jira] [Updated] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11084:
-
Target Version/s: 2.9.0, 3.0.0-alpha3, 2.8.1  (was: 2.8.1)

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11084.02.patch, HDFS-11804.branch-2.002.patch, 
> HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate a 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit set, the ReverseXML processor cannot recognize it due to 
> HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?






[jira] [Updated] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11084:
-
Target Version/s: 2.8.1

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11084.02.patch, HDFS-11804.branch-2.002.patch, 
> HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate a 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit set, the ReverseXML processor cannot recognize it due to 
> HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?






[jira] [Commented] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865075#comment-15865075
 ] 

Akira Ajisaka commented on HDFS-11084:
--

Hi [~jojochuang], would you rename the summary to "Add a regression test for 
OIV ReverseXML processor" or something similar, since the patch adds the test 
case?

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11084.02.patch, HDFS-11804.branch-2.002.patch, 
> HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate a 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit set, the ReverseXML processor cannot recognize it due to 
> HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?






[jira] [Commented] (HDFS-11399) Many tests fail on Windows due to injecting disk failures

2017-02-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865073#comment-15865073
 ] 

Yiqun Lin commented on HDFS-11399:
--

Hi [~brahmareddy], any comments on my latest comment? Let me know if it also 
makes sense to you.

> Many tests fail on Windows due to injecting disk failures
> --
>
> Key: HDFS-11399
> URL: https://issues.apache.org/jira/browse/HDFS-11399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11399.001.patch, HDFS-11399.002.patch
>
>
> Many tests fail on Windows because they use 
> {{DataNodeTestUtils#injectDataDirFailure}}. One of the failing tests:
> {code}
> java.io.IOException: Failed to rename 
> D:\work-project\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data2
>  to 
> D:\work-project\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data2.origin.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.injectDataDirFailure(DataNodeTestUtils.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean.testStorageTypeStatsWhenStorageFailed(TestBlockStatsMXBean.java:176)
> {code}
> The root cause is that the test method uses 
> {{DataNodeTestUtils#injectDataDirFailure}} to inject disk failures but is 
> missing the guard {{assumeNotWindows}}. Currently 
> {{DataNodeTestUtils#injectDataDirFailure}} is not supported on Windows, so 
> the test fails.
> It would be better to add {{assumeNotWindows}} into 
> {{DataNodeTestUtils#injectDataDirFailure}} itself, in case we forget to add 
> it in test methods.
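
A minimal sketch of that guard inside the injection helper 
({{assumeNotWindows}} is assumed to be statically imported; the 
rename-then-create body follows the error message above and is otherwise 
illustrative):

{code}
public static void injectDataDirFailure(File... dataDirs) throws IOException {
  // Guard first: the rename-based injection below does not work on Windows.
  assumeNotWindows();
  for (File dataDir : dataDirs) {
    File renamedTo = new File(dataDir.getPath() + ".origin");
    if (!dataDir.renameTo(renamedTo)) {
      throw new IOException("Failed to rename " + dataDir
          + " to " + renamedTo);
    }
    // Replace the directory with a plain file so the DataNode treats
    // the volume as failed.
    if (!dataDir.createNewFile()) {
      throw new IOException("Failed to create file " + dataDir);
    }
  }
}
{code}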






[jira] [Commented] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865072#comment-15865072
 ] 

Akira Ajisaka commented on HDFS-11084:
--

+1 pending Jenkins on trunk. Thanks [~jojochuang] for updating the patch.

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11084.02.patch, HDFS-11804.branch-2.002.patch, 
> HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate a 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit set, the ReverseXML processor cannot recognize it due to 
> HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?






[jira] [Updated] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11084:
-
Attachment: HDFS-11084.02.patch

Attaching the same patch to run the precommit job on trunk.

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11084.02.patch, HDFS-11804.branch-2.002.patch, 
> HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate a 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit set, the ReverseXML processor cannot recognize it due to 
> HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?






[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864989#comment-15864989
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
43s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 19 new 
+ 1779 unchanged - 4 fixed = 1798 total (was 1783) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible null pointer dereference of fei in new 
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$ReencryptTracker(ReencryptionHandler,
 INodeFile)  Dereferenced at ReencryptionHandler.java:fei in new 
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$ReencryptTracker(ReencryptionHandler,
 INodeFile)  Dereferenced at ReencryptionHandler.java:[line 128] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | 

[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864944#comment-15864944
 ] 

Kai Zheng commented on HDFS-7859:
-

Thanks Andrew for working on this and bringing up the discussion.

1. For the system default EC policy, I agree its importance is weaker now. One 
question: when users set an EC policy on a folder, do they want to, or have 
to, specify a policy most of the time? Making the policy parameter optional 
could be friendly, considering they may have no idea which policy to pick 
before the list of available policies is presented to them.

2. EC policy and schema fit nicely as file metadata. EC is another form of 
replication, and the replication factor is recorded per file. If the EC policy 
info is persisted and stays with the data, users might feel more confident and 
comfortable doing data validation and transformation during system upgrades. 
To echo what others said in previous comments: configuration and code may 
change and evolve.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.






[jira] [Commented] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864914#comment-15864914
 ] 

Brahma Reddy Battula commented on HDFS-11409:
-

Me too, +1. I feel the checkstyle warning can be ignored, as the rest of the 
methods in {{DatanodeInfo.java}} follow the same format.
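
For reference, a minimal sketch of the change as the description below explains 
it (assuming the getter and setter really do nothing beyond reading and writing 
{{location}}):

{code}
// before: readers and writers contend on the object monitor
private String location;
public synchronized String getNetworkLocation() { return location; }
public synchronized void setNetworkLocation(String location) {
  this.location = location;
}

// after: volatile guarantees visibility without blocking any thread
private volatile String location;
public String getNetworkLocation() { return location; }
public void setNetworkLocation(String location) { this.location = location; }
{code}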

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, although they do nothing more than set and get the 
> variable {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and simply make {{location}} volatile, so that threads 
> will not block on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.






[jira] [Commented] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864903#comment-15864903
 ] 

Yiqun Lin commented on HDFS-11407:
--

Thanks [~jojochuang] for the review! I will commit at the end of the day in 
case anyone has comments on this.

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. The number of oiv processors has actually grown to five; 
> the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users learn how 
> to use this tool.






[jira] [Commented] (HDFS-11333) Namenode unable to start if plugins can not be found

2017-02-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864884#comment-15864884
 ] 

Yiqun Lin commented on HDFS-11333:
--

LGTM, +1. [~jojochuang], you may want to delete the extra v02 patch with the 
warn-message printing; it is confusing to have two patches with the same name.

> Namenode unable to start if plugins can not be found
> 
>
> Key: HDFS-11333
> URL: https://issues.apache.org/jira/browse/HDFS-11333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-11333.001.patch, HDFS-11333.002.patch, 
> HDFS-11333.002.patch
>
>
> If NameNode is unable to find plugins (specified in dfs.namenode.plugins), it 
> terminates abruptly with the following stack trace:
> {quote}
> Failed to start namenode.
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class XXX not 
> found
>   at org.apache.hadoop.conf.Configuration.getClasses(Configuration.java:2178)
>   at 
> org.apache.hadoop.conf.Configuration.getInstances(Configuration.java:2250)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:713)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:691)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:843)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:822)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1543)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
> {quote}
> We should catch this exception, log a warning message, and let startup 
> proceed, as a missing third-party library does not affect the functionality 
> of the NameNode. We caught this bug during a CDH upgrade where a third-party 
> plugin was not in the lib directory of the newer CDH version.
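
A minimal sketch of the suggested handling around the {{getInstances}} call 
from the stack trace (the log message wording is illustrative):

{code}
List<ServicePlugin> plugins;
try {
  plugins = conf.getInstances(DFS_NAMENODE_PLUGINS_KEY, ServicePlugin.class);
} catch (RuntimeException e) {
  // Proposed behaviour: a missing third-party plugin class should not be
  // fatal; warn and continue without plugins.
  LOG.warn("Unable to load NameNode plugins: "
      + conf.get(DFS_NAMENODE_PLUGINS_KEY), e);
  plugins = Collections.emptyList();
}
{code}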






[jira] [Updated] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11409:
---
Hadoop Flags: Reviewed
 Component/s: (was: namenode)
  performance

+1 patch looks good.

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, although they do nothing more than set and get the 
> variable {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and simply make {{location}} volatile, so that threads 
> will not block on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-02-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: (was: HDFS-10899.08.patch)

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-02-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.08.patch

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-02-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11412:
--
Component/s: (was: hdfs)
 namenode
 datanode

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to affect 
> cluster performance may wish to set the Maintenance Min Replication to a 
> value greater than 1, say 2. In the current design this is possible, but only 
> after raising the NameNode-level Block Min Replication to 2 as well, which 
> could increase the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * A config value of 0 remains available for users who do not need any 
> availability or performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a larger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger the 
> re-replication that Maintenance State is trying to avoid. So we could allow 
> {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " > "
>   + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>   + " = " + minR);
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-02-13 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11412:
-

 Summary: Maintenance minimum replication config value allowable 
range should be {0 - DefaultReplication}
 Key: HDFS-11412
 URL: https://issues.apache.org/jira/browse/HDFS-11412
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


Currently the allowed value range for Maintenance Min Replication 
{{dfs.namenode.maintenance.replication.min}} is 0 to 
{{dfs.namenode.replication.min}} (default=1). Users who do not want to affect 
cluster performance may wish to set the Maintenance Min Replication to a value 
greater than 1, say 2. In the current design this is possible, but only after 
raising the NameNode-level Block Min Replication to 2 as well, which could 
increase the overall latency of client writes.

Technically speaking, we should allow Maintenance Min Replication to be in the 
range 0 to dfs.replication.max.
* A config value of 0 remains available for users who do not need any 
availability or performance guarantees during maintenance.
* Performance-centric workloads can still get maintenance done without major 
disruption by using a larger Maintenance Min Replication. Setting the upper 
limit to dfs.replication.max could be overkill, as it could trigger the 
re-replication that Maintenance State is trying to avoid. So we could allow 
{{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
dfs.replication}}.

{noformat}
if (minMaintenanceR < 0) {
  throw new IOException("Unexpected configuration parameters: "
  + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
  + " = " + minMaintenanceR + " < 0");
}
if (minMaintenanceR > minR) {
  throw new IOException("Unexpected configuration parameters: "
  + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
  + " = " + minMaintenanceR + " > "
  + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
  + " = " + minR);
}
{noformat}
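
A minimal sketch of what the relaxed check could look like, assuming 
{{defaultReplication}} holds the configured {{dfs.replication}} value 
(illustrative only, not actual patch code):

{code}
// Illustrative sketch: allow 0 <= minMaintenanceR <= defaultReplication.
if (minMaintenanceR < 0) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + " < 0");
}
if (minMaintenanceR > defaultReplication) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + " > "
      + DFSConfigKeys.DFS_REPLICATION_KEY
      + " = " + defaultReplication);
}
{code}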



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864840#comment-15864840
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-10899 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852456/HDFS-10899.08.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18362/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11411) Avoid OutOfMemoryError in TestMaintenanceState test runs

2017-02-13 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11411:
-

 Summary: Avoid OutOfMemoryError in TestMaintenanceState test runs
 Key: HDFS-11411
 URL: https://issues.apache.org/jira/browse/HDFS-11411
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


TestMaintenanceState test runs are seeing OutOfMemoryError issues quite 
frequently now. We need to fix the tests that are consuming lots of 
memory/threads.

{noformat}
---
 T E S T S
---
Running org.apache.hadoop.hdfs.TestMaintenanceState
Tests run: 21, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 219.479 sec 
<<< FAILURE! - in org.apache.hadoop.hdfs.Te
testTransitionFromDecommissioned(org.apache.hadoop.hdfs.TestMaintenanceState)  
Time elapsed: 0.64 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
testTakeDeadNodeOutOfMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState)  
Time elapsed: 0.031 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
testWithNNAndDNRestart(org.apache.hadoop.hdfs.TestMaintenanceState)  Time 
elapsed: 0.03 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
testMultipleNodesMaintenance(org.apache.hadoop.hdfs.TestMaintenanceState)  Time 
elapsed: 60.127 sec  <<< ERROR!
java.io.IOException: Problem starting http server
Results :
Tests in error: 
  
TestMaintenanceState.testTransitionFromDecommissioned:225->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
  
TestMaintenanceState.testTakeDeadNodeOutOfMaintenance:636->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.s
  
TestMaintenanceState.testWithNNAndDNRestart:692->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.startCluste
  
TestMaintenanceState.testMultipleNodesMaintenance:532->AdminStatesBaseTest.startCluster:413->AdminStatesBaseTest.start
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-02-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.08.patch

Thanks much for the soak and all these good comments, Andrew. Sorry this took a 
while to update.

Attaching patch 8:
- Refactored the {{ReencryptionHandler}} so the innards are easier to reason 
about. This makes the locking easier, gets rid of the {{subdirs}}, and contacts 
the KMS only after a full batch is ready, in a new method 
{{processCurrentBatch}}.
- From some local benchmarking, the communication overhead to the KMS appears 
to dominate (60%+), so we could potentially add a batch re-encrypt API to the 
KMS and use it here. We could also multithread {{processCurrentBatch}} to push 
performance further.

This also addresses all the comments above, with the following exceptions:
bq. This could be difficult with all the lock/unlocks and stages, but I'd 
prefer a goal-pause-time configuration for the {{run}} loop. This is easier for 
admins to reason about. We would still use the batch size for determining when 
to log a batch.
Good idea. Will be working on that.
bq. Looks like we aren't using the op cache in FSEditLog SetXAttrOp / 
RemoveXAttrOp. I think this is accidental, could you do some research? 
Particularly since we'll be doing a lot of SetXAttrOps, avoiding all that 
object allocation would be nice. This could be a separate JIRA.
Tracked this back to the initial HDFS-6301 and pinged there, but no response. 
Agree this is likely a bug; created HDFS-11410. Good find!
bq. Follow-on idea: it'd be nice for admins to be able to query the status of 
queued and running reencrypt commands. Progress indicators like submission 
time, start time, # skipped, # reencrypted, total # (if this is cheap to get) 
would be helpful.
Planned to add {{-status}} to the crypto command; still marked as a TODO for 
now... :)
bq. Catching Exception in run is a code smell. What is the intent? It looks 
like we already catch the checked exceptions, so this will catch 
RuntimeExceptions (which are normally unrecoverable).
True, it's not ideal. The reason I added it is that re-encryption happens in a 
separate thread, and exceptions go to stderr, which may or may not be 
collected. Logging the exception in the NN log adds some supportability.

It also almost feels like we should do the KMS batching already. Let me play 
with it and update no later than Wednesday.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11410) Use the cache when edit logging XAttrOps

2017-02-13 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-11410:


 Summary: Use the cache when edit logging XAttrOps
 Key: HDFS-11410
 URL: https://issues.apache.org/jira/browse/HDFS-11410
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Xiao Chen
Assignee: Xiao Chen


[~andrew.wang] recently had a comment on HDFS-10899:
{quote}
Looks like we aren't using the op cache in FSEditLog SetXAttrOp / 
RemoveXAttrOp. I think this is accidental, could you do some research? 
Particularly since we'll be doing a lot of SetXAttrOps, avoiding all that 
object allocation would be nice. This could be a separate JIRA.
{quote}

i.e. 
{code}
static SetXAttrOp getInstance() {
  return new SetXAttrOp();
}
{code}
v.s.
{code}
static AddOp getInstance(OpInstanceCache cache) {
  return (AddOp) cache.get(OP_ADD);
}
{code}

Seems we should fix these non-caching usages.
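
A minimal sketch of the cached variant, mirroring the {{AddOp}} pattern above 
(assuming the existing {{OpInstanceCache}} lookup by opcode; not the actual 
patch):

{code}
static SetXAttrOp getInstance(OpInstanceCache cache) {
  return (SetXAttrOp) cache.get(OP_SET_XATTR);
}

static RemoveXAttrOp getInstance(OpInstanceCache cache) {
  return (RemoveXAttrOp) cache.get(OP_REMOVE_XATTR);
}
{code}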



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864809#comment-15864809
 ] 

Hadoop QA commented on HDFS-11409:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 3 new + 58 unchanged - 3 fixed = 61 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852445/HDFS-11409.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5015f7c1c34d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18361/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18361/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18361/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> 

[jira] [Updated] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11409:
--
Status: Patch Available  (was: Open)

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, which do nothing more than set and get the variable 
> {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and instead make {{location}} volatile, so that threads 
> will not block on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11409:
--
Attachment: HDFS-11409.001.patch

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, which do nothing more than set and get the variable 
> {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and instead make {{location}} volatile, so that threads 
> will not block on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11409:
---
Component/s: namenode

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
>
> {{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, which do nothing more than set and get the variable 
> {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and instead make {{location}} volatile, so that threads 
> will not block on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2017-02-13 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11409:
-

 Summary: DatanodeInfo getNetworkLocation and setNetworkLocation 
should use volatile instead of synchronized
 Key: HDFS-11409
 URL: https://issues.apache.org/jira/browse/HDFS-11409
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Liang
Assignee: Chen Liang
Priority: Minor


{{DatanodeInfo}} has synchronized methods {{getNetworkLocation}} and 
{{setNetworkLocation}}, which do nothing more than set and get the variable 
{{location}}.

Since {{location}} is not modified based on its current value and is 
independent of any other variables, this JIRA proposes to remove the 
synchronized methods and instead make {{location}} volatile, so that threads 
will not block on get/setNetworkLocation.

Thanks [~szetszwo] for the offline discussion.
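
A minimal before/after sketch of the proposed change (illustrative, with 
bodies simplified):

{code}
// Before: every get/set contends on the DatanodeInfo monitor.
private String location;
public synchronized String getNetworkLocation() { return location; }
public synchronized void setNetworkLocation(String loc) { location = loc; }

// After: volatile guarantees visibility for this independent read/write,
// so no lock is needed and callers never block on each other.
private volatile String location;
public String getNetworkLocation() { return location; }
public void setNetworkLocation(String loc) { location = loc; }
{code}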



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8196) Erasure Coding related information on NameNode UI

2017-02-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864656#comment-15864656
 ] 

Andrew Wang commented on HDFS-8196:
---

I like this new rev, some nits:

* Can we add spaces between the entries in the list of active policies? Looks a 
bit cramped right now.
* For the total size and cell size, I think we should use {{fmt_bytes}} like 
the other byte numbers on dfshealth.html.

Should we leave per-policy statistics as an enhancement for later? I think this 
is okay since most of the time we'll only have one EC policy in use on a 
cluster.

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, HDFS-8196.02.patch, Screen Shot 
> 2017-02-06 at 22.30.40.png, Screen Shot 2017-02-12 at 20.21.42.png
>
>
> NameNode WebUI shows EC-related information and metrics. 
> This depends on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7877) Support maintenance state for datanodes

2017-02-13 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864638#comment-15864638
 ] 

Manoj Govindassamy commented on HDFS-7877:
--

Thanks [~mingma]. Got it: when you combine this with Upgrade Domain, the 
impact is not that severe.

I will make the following change for the Maintenance Min Replication range 
validation check.

{noformat}
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -484,12 +484,12 @@ public BlockManager(final Namesystem namesystem, boolean 
haEnabled,
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " < 0");
 }
-if (minMaintenanceR > minR) {
+if (minMaintenanceR > defaultReplication) {
   throw new IOException("Unexpected configuration parameters: "
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " > "
-  + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
-  + " = " + minR);
+  + DFSConfigKeys.DFS_REPLICATION_DEFAULT
+  + " = " + defaultReplication);
 }

{noformat}


bq. the transition policy from ENTERING_MAINTENANCE to IN_MAINTENANCE will 
become the # of live replicas >= min(dfs.namenode.maintenance.replication.min, 
replication factor).

But the transition from ENTERING_MAINTENANCE to IN_MAINTENANCE, which happens 
in {{DecommissionManager#Monitor#check}} and in turn calls 
{{DecommissionManager#isSufficient}}, looks OK to me. We allow files to be 
created with a custom block replication count, say 1, which can be less than 
the default dfs.replication=3. And since we should not count in the 
maintenance replicas, the formula, as it exists currently, is:

{noformat}

expectedRedundancy = file_block_replication_count (e.g. 1) or the 
default_replication_count (e.g. 3)
Math.max(
expectedRedundancy - numberReplicas.maintenanceReplicas(),
getMinMaintenanceStorageNum(block));
{noformat}

Let me know if I am missing something. Thanks.
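
As a concrete illustration (numbers chosen for the example): for a block with 
replication factor 3, one replica on the node entering maintenance, and 
{{dfs.namenode.maintenance.replication.min}} = 1, the required live redundancy 
is max(3 - 1, 1) = 2, so two live replicas must exist elsewhere before the 
node can transition to IN_MAINTENANCE.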


--- related code snippets 

{noformat}

  /**
   * Checks whether a block is sufficiently replicated/stored for
   * decommissioning. For replicated blocks or striped blocks, full-strength
   * replication or storage is not always necessary, hence "sufficient".
   * @return true if sufficient, else false.
   */
  private boolean isSufficient(BlockInfo block, BlockCollection bc,
  NumberReplicas numberReplicas, boolean isDecommission) {
if (blockManager.hasEnoughEffectiveReplicas(block, numberReplicas, 0)) {
  // Block has enough replica, skip
  LOG.trace("Block {} does not need replication.", block);
  return true;
}
..
..
..



  // Check if the number of live + pending replicas satisfies
  // the expected redundancy.
  boolean hasEnoughEffectiveReplicas(BlockInfo block,
  NumberReplicas numReplicas, int pendingReplicaNum) {
int required = getExpectedLiveRedundancyNum(block, numReplicas);
int numEffectiveReplicas = numReplicas.liveReplicas() + pendingReplicaNum;
return (numEffectiveReplicas >= required) &&
(pendingReplicaNum > 0 || isPlacementPolicySatisfied(block));
  }


  // Exclude maintenance, but make sure it has minimal live replicas
  // to satisfy the maintenance requirement.
  public short getExpectedLiveRedundancyNum(BlockInfo block,
  NumberReplicas numberReplicas) {
final short expectedRedundancy = getExpectedRedundancyNum(block);
return (short) Math.max(expectedRedundancy -
numberReplicas.maintenanceReplicas(),
getMinMaintenanceStorageNum(block));
  }
{noformat}

> Support maintenance state for datanodes
> ---
>
> Key: HDFS-7877
> URL: https://issues.apache.org/jira/browse/HDFS-7877
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7877-2.patch, HDFS-7877.patch, 
> Supportmaintenancestatefordatanodes-2.pdf, 
> Supportmaintenancestatefordatanodes.pdf
>
>
> This requirement came up during the design for HDFS-7541. Given this feature 
> is mostly independent of the upgrade domain feature, it is better to track it 
> under a separate JIRA. The design and draft patch will be available soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11333) Namenode unable to start if plugins can not be found

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864634#comment-15864634
 ] 

Hadoop QA commented on HDFS-11333:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852421/HDFS-11333.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fcf4653fe373 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4ed33e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18359/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18359/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18359/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Namenode unable to start if plugins can not be found
> 
>
> Key: HDFS-11333
> URL: 

[jira] [Commented] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864625#comment-15864625
 ] 

Wei-Chiu Chuang commented on HDFS-11407:


Thanks for the patch, [~linyiqun].
The patch looks good to me. 
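
For reference, illustrative invocations of the processors to be documented 
(paths and option values are placeholders):

{noformat}
hdfs oiv -p XML -i fsimage_0000000000000000024 -o fsimage.xml
hdfs oiv -p ReverseXML -i fsimage.xml -o fsimage_rebuilt
hdfs oiv -p FileDistribution -maxSize 1073741824 -step 1048576 -i fsimage_0000000000000000024 -o distribution.txt
hdfs oiv -p Delimited -delimiter "," -i fsimage_0000000000000000024 -o fsimage.csv
{noformat}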

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. In fact, the number of oiv processors has grown to 5; 
> the {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} processors are 
> missing from the documentation. Documenting them will help users understand 
> how to use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10533) Make DistCpOptions class immutable

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864566#comment-15864566
 ] 

Hadoop QA commented on HDFS-10533:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 350 unchanged - 49 fixed = 350 total (was 399) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools_hadoop-distcp generated 0 new + 45 
unchanged - 4 fixed = 45 total (was 49) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
38s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10533 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852424/HDFS-10533.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 372f4a350640 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4ed33e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18360/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18360/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> 

[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable

2017-02-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10533:
-
Attachment: HDFS-10533.009.patch

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> HDFS-10533.004.patch, HDFS-10533.005.patch, HDFS-10533.006.patch, 
> HDFS-10533.007.patch, HDFS-10533.008.patch, HDFS-10533.009.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from the command line (via the {{OptionsParser}}) or set manually 
> (e.g. by constructing an instance and calling setters). As there are multiple 
> option fields, with more to add (e.g. [HDFS-9868], [HDFS-10314]), validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simpler and easier to use and share, plus it scales well
> # validation is automatic, e.g. a manually constructed {{DistCpOptions}} gets 
> validated before usage
> # the validation error message is well-defined and does not depend on the 
> order of setters
> This JIRA is to track the effort of making {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.
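
As a rough sketch of the Builder pattern in question (a trimmed, hypothetical 
subset of the real fields; not the actual patch):

{code}
import org.apache.hadoop.fs.Path;

public final class DistCpOptions {
  private final Path sourcePath;  // illustrative subset of the real fields
  private final Path targetPath;
  private final boolean syncFolder;

  private DistCpOptions(Builder b) {
    this.sourcePath = b.sourcePath;
    this.targetPath = b.targetPath;
    this.syncFolder = b.syncFolder;
  }

  public static class Builder {
    private final Path sourcePath;
    private final Path targetPath;
    private boolean syncFolder;

    public Builder(Path source, Path target) {
      this.sourcePath = source;
      this.targetPath = target;
    }

    public Builder withSyncFolder(boolean sync) {
      this.syncFolder = sync;
      return this;
    }

    public DistCpOptions build() {
      // Validation runs once here, independent of setter order.
      if (sourcePath == null || targetPath == null) {
        throw new IllegalArgumentException("source and target are required");
      }
      return new DistCpOptions(this);
    }
  }
}
{code}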



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11391) Numeric usernames do not work with WebHDFS FS (write access)

2017-02-13 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864485#comment-15864485
 ] 

Yongjun Zhang commented on HDFS-11391:
--

Thanks [~pvillard]. I'm +1 on the patch, will commit soon.
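
For context, the pattern is configured via the property introduced by 
HDFS-4983, which this fix extends to the datanode write path. An illustrative 
setting that also admits leading digits:

{code}
<property>
  <name>dfs.webhdfs.user.provider.user.pattern</name>
  <value>^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$</value>
</property>
{code}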


> Numeric usernames do not work with WebHDFS FS (write access)
> ---
>
> Key: HDFS-11391
> URL: https://issues.apache.org/jira/browse/HDFS-11391
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> In HDFS-4983, a property was introduced to configure the pattern that 
> validates the names of users interacting with WebHDFS, because the default 
> pattern excluded names starting with numbers.
> The problem is that this fix works only for read access. In the case of write 
> access against a datanode, the default pattern is still applied regardless of 
> the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11333) Namenode unable to start if plugins can not be found

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11333:
---
Attachment: HDFS-11333.002.patch
HDFS-11333.002.patch

Thanks [~linyiqun].
Posted my v002 patch.

It adds an *error* message that explicitly says what the exception means; the 
exception is rethrown afterwards. It also prints the list of plugins to help 
with troubleshooting.
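
A minimal sketch of the idea (illustrative, not the exact patch code):

{code}
// Surface a clear error and the configured plugin list before rethrowing.
try {
  plugins = conf.getInstances(DFS_NAMENODE_PLUGINS_KEY, ServicePlugin.class);
} catch (RuntimeException e) {
  String pluginsValue = conf.get(DFS_NAMENODE_PLUGINS_KEY);
  LOG.error("Unable to load NameNode plugins. Check whether the classes in "
      + DFS_NAMENODE_PLUGINS_KEY + " = " + pluginsValue
      + " are on the classpath.", e);
  throw e;
}
{code}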

> Namenode unable to start if plugins can not be found
> 
>
> Key: HDFS-11333
> URL: https://issues.apache.org/jira/browse/HDFS-11333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-11333.001.patch, HDFS-11333.002.patch, 
> HDFS-11333.002.patch
>
>
> If the NameNode is unable to find its plugins (specified in 
> dfs.namenode.plugins), it terminates abruptly with the following stack trace:
> {quote}
> Failed to start namenode.
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class XXX not 
> found
>   at org.apache.hadoop.conf.Configuration.getClasses(Configuration.java:2178)
>   at 
> org.apache.hadoop.conf.Configuration.getInstances(Configuration.java:2250)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:713)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:691)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:843)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:822)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1543)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
> {quote}
> We should catch this exception, log a warning message, and let startup 
> proceed, as a missing third-party library does not affect the functionality 
> of the NameNode. We caught this bug during a CDH upgrade where a third-party 
> plugin was not in the lib directory of the newer version of CDH.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-02-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-11314:
--

Assignee: Andrew Wang  (was: Chen Liang)

Thanks, assigning to myself.

> Validate client-provided EC schema on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864331#comment-15864331
 ] 

Hudson commented on HDFS-11026:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #11240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11240/])
HDFS-11026. Convert BlockTokenIdentifier to use Protobuf. Contributed by 
(cdouglas: rev 4ed33e9ca3d85568e3904753a3ef61a85f801838)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/KeyManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java


> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch, HDFS-11026.007.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
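
As a purely illustrative sketch of the pattern (field names hypothetical, not 
the committed hdfs.proto change):

{code}
// Fields that were hand-serialized via DataOutput become a Protobuf message
// that can evolve safely (hypothetical layout for illustration).
message BlockTokenSecretProto {
  optional uint64 expiryDate = 1;
  optional uint32 keyId = 2;
  optional string userId = 3;
  optional string blockPoolId = 4;
  optional uint64 blockId = 5;
}
{code}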



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-02-13 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864327#comment-15864327
 ] 

Chen Liang commented on HDFS-11314:
---

Hi [~andrew.wang], sorry, I've been a bit busy with some other work recently 
and haven't had a chance to work on this. Sure, please take it.

> Validate client-provided EC schema on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-02-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864320#comment-15864320
 ] 

Andrew Wang commented on HDFS-11314:


Hi [~vagarychen] do you mind if I take this one? I've been discussing this with 
some others on HDFS-7859, and think I have a good handle on what to do here.

> Validate client-provided EC schema on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864318#comment-15864318
 ] 

Andrew Wang commented on HDFS-7859:
---

I thought about this JIRA some more, and had two questions I wanted to bring up 
for discussion:

h3. Do we need a system default EC policy?

AFAICT, the system default policy dates from when we only supported a single 
policy for HDFS. Now, we've pretty clearly defined the API for EC policies, and 
for most uses, the EC policy is automatically inherited from a dir-level 
policy. The {{setErasureCodingPolicy}} API already requires an EC policy to be 
specified, so I think the default EC policy is basically vestigial and can be 
removed.

h3. Can we use configuration instead of persistence for the set of enabled 
policies?

I'm wondering if there is actually any benefit to persisting the set of allowed 
policies. In the past, we've enabled and disabled features via configuration 
keys, and this is basically the same idea. There's no danger of data corruption 
from two NNs having different sets of enabled policies, so it's safe in that 
sense. IMO we'd have a key like 
{{dfs.namenode.erasure.coding.policies.enabled}} and specify a subset of the 
hardcoded policies there.

If the above sounds good, I can file a new JIRA for refactoring out the system 
default policies, and do the configuration key over on HDFS-11314.
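
To make that concrete, such a key might look like the following (illustrative 
value; the policy names would come from the hardcoded list):

{code}
<property>
  <name>dfs.namenode.erasure.coding.policies.enabled</name>
  <value>RS-6-3-64k,RS-3-2-64k</value>
</property>
{code}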

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864308#comment-15864308
 ] 

Wei-Chiu Chuang commented on HDFS-11100:


+1. The failed tests are unrelated and can't be reproduced locally.
Thanks [~jzhuge] for the nice work!

> Recursively deleting file protected by sticky bit should fail
> -
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>  Labels: permissions
> Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, 
> HDFS-11100.003.patch, HDFS-11100.004.patch, HDFS-11100.005.patch, hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected 
> by the sticky bit should fail, but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, thus recursively 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864291#comment-15864291
 ] 

Andrew Wang commented on HDFS-11026:


Really happy to see this in, thanks Ewan, Chris, and Daryn for getting it over 
the finish line!

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch, HDFS-11026.007.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-11026:
-
Description: {{BlockTokenIdentifier}} currently uses a 
{{DataInput}}/{{DataOutput}} (basically a {{byte[]}}) and manual serialization 
to get data into and out of the encrypted buffer (in {{BlockKeyProto}}). Other 
TokenIdentifiers (e.g. {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) 
use Protobuf. The {{BlockTokenIdentifier}} should use Protobuf as well so it 
can be expanded more easily and will be consistent with the rest of the system. 
 (was: {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
(basically a {{byte[]}}) and manual serialization to get data into and out of 
the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
{{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
{{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded more 
easily and will be consistent with the rest of the system.

NB: Release of this will require a version update since 2.8.x won't be able to 
decipher {{BlockKeyProto.keyBytes}} from 2.8.y.)

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch, HDFS-11026.007.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-11026:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Changed the serialized format of BlockTokenIdentifier to 
protocol buffers. Includes logic to decode both the old Writable format and the 
new PB format to support existing clients. Client implementations in other 
languages will require similar functionality.
  Status: Resolved  (was: Patch Available)

+1 Thanks for the detail, Ewan. I committed this.

This isn't marked as an incompatible change because the serialized form isn't a 
public API and existing clients remain compatible, but libraries implementing 
the HDFS client protocol will need to be updated. Added that to the release 
note.
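
For client implementors, one way to accept both serialized forms is to attempt 
one parse and fall back to the other on failure. A minimal sketch, with 
hypothetical stand-ins for the real decoders (this is not the committed HDFS 
code):

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Sketch only: accept both serialized forms of the token identifier.
// parsePb() and parseLegacy() are hypothetical stand-ins for the real
// protobuf and Writable decoders.
public class DualFormatReaderSketch {
  static void readEither(byte[] bytes) throws IOException {
    try {
      parsePb(bytes);                       // new protobuf format
    } catch (IOException pbFailure) {
      DataInputStream in =
          new DataInputStream(new ByteArrayInputStream(bytes));
      parseLegacy(in);                      // old Writable format
    }
  }

  static void parsePb(byte[] bytes) throws IOException {
    throw new IOException("stand-in: real code parses the PB message");
  }

  static void parseLegacy(DataInputStream in) throws IOException {
    // stand-in: real code reads the Writable field layout from 'in'
  }
}
{code}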

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch, HDFS-11026.007.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864226#comment-15864226
 ] 

Wei-Chiu Chuang commented on HDFS-11084:


[~ajisakaa] I pushed HADOOP-13508 into branch-2.8. Thanks.

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11804.branch-2.002.patch, HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate an 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit, the ReverseXML processor cannot recognize it due to HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864177#comment-15864177
 ] 

Hadoop QA commented on HDFS-11026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
771 unchanged - 3 fixed = 774 total (was 774) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852374/HDFS-11026.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux a7b7afd6ebce 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 464ff47 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |

[jira] [Commented] (HDFS-11084) OIV ReverseXML processor does not recognize sticky bit

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864092#comment-15864092
 ] 

Wei-Chiu Chuang commented on HDFS-11084:


The failed tests are unrelated and can't be reproduced.

> OIV ReverseXML processor does not recognize sticky bit
> --
>
> Key: HDFS-11084
> URL: https://issues.apache.org/jira/browse/HDFS-11084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11804.branch-2.002.patch, HDFS-11804.branch-2.patch
>
>
> HDFS-10505 added a new feature, the OIV ReverseXML processor, to generate an 
> fsimage from an XML file. However, if the files/directories in it have the 
> sticky bit, the ReverseXML processor cannot recognize it due to HADOOP-13508.
> It seems HADOOP-13508 is an incompatible change in Hadoop 3. Would it be 
> reasonable to add an overloaded FsPermission constructor that uses RawParser 
> so that it reads sticky bits correctly? Or is it reasonable to backport 
> HADOOP-13508 to branch-2?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11390) Add process name to httpfs process

2017-02-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863954#comment-15863954
 ] 

John Zhuge commented on HDFS-11390:
---

Ok with me.

> Add process name to httpfs process
> --
>
> Key: HDFS-11390
> URL: https://issues.apache.org/jira/browse/HDFS-11390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, scripts
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11390-branch-2.01.patch
>
>
> Do the same for HttpFS as HADOOP-14050.
> No need to fix trunk because HDFS-10860 will take care of it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-11026:
--
Attachment: HDFS-11026.007.patch

Attaching the aforementioned patch.

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch, HDFS-11026.007.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2017-02-13 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863940#comment-15863940
 ] 

Ewan Higgs commented on HDFS-11026:
---

{quote}
The non-deterministic decoding exception needs to resolved before integration.
{quote}
{quote}
I'm having trouble reproducing it.
{quote}

Upon further investigation I found that it was a timing issue rather than a 
difference between Oracle and OpenJDK.

Because the {{expiryDate}} field comes first and the tests use {{Time.now()}} 
when generating the tokens, the {{expiryDate}} bytes written by protobuf 
sometimes happen to be parseable by the {{readFieldsLegacy}} function. Of 
course this is only an issue if we have a protobuf {{BlockTokenIdentifier}} and 
then force it to be read by the old parser ({{readFieldsLegacy}}), which we do 
not do.

An example {{BlockTokenIdentifier}} with an expiryDate that will throw an 
{{IOException}} is {{2017-02-09 00:12:35,072+0100}}. In contrast, {{2017-02-09 
00:12:35,071+0100}} will raise {{java.lang.NegativeArraySizeException}}.

I'm adding a new test that demonstrates the different exceptions with crafted 
timestamps.
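
A sketch of what such a test can look like, with hypothetical stand-ins for the 
real serializer and legacy parser (the epoch value corresponds to the first 
timestamp quoted above):

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

import org.junit.Assert;
import org.junit.Test;

// Sketch only: craft an expiryDate, serialize in the protobuf form, and
// assert that forcing the bytes through the legacy parser fails. The
// two static helpers are stand-ins, not the real HDFS methods.
public class TestCraftedExpiryDatesSketch {
  @Test
  public void legacyParserRejectsProtobufBytes() {
    long expiry = 1486595555072L; // 2017-02-09 00:12:35,072+0100
    byte[] pbBytes = serializeProtobuf(expiry);
    try {
      readFieldsLegacy(new DataInputStream(
          new ByteArrayInputStream(pbBytes)));
      Assert.fail("legacy parser should not accept protobuf bytes");
    } catch (IOException expected) {
      // the ...,072 timestamp fails with IOException; ...,071 would
      // instead surface a NegativeArraySizeException from the legacy
      // field layout.
    }
  }

  static byte[] serializeProtobuf(long expiry) {
    return new byte[0]; // stand-in for the real PB writer
  }

  static void readFieldsLegacy(DataInputStream in) throws IOException {
    throw new IOException("stand-in for the real legacy parser");
  }
}
{code}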

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch, 
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, 
> HDFS-11026.005.patch, HDFS-11026.006.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11379) DFSInputStream may infinite loop requesting block locations

2017-02-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863930#comment-15863930
 ] 

Sean Busbey commented on HDFS-11379:


could we get this into a 2.7 release? (maybe a 2.6 if 2.6 is similarly 
impacted?)

> DFSInputStream may infinite loop requesting block locations
> ---
>
> Key: HDFS-11379
> URL: https://issues.apache.org/jira/browse/HDFS-11379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HDFS-11379.branch-2.patch, HDFS-11379.trunk.patch
>
>
> DFSInputStream creation caches file size and initial range of locations.  If 
> the file is truncated (or replaced) and the client attempts to read outside 
> the initial range, the client goes into a tight infinite looping requesting 
> locations for the nonexistent range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11379) DFSInputStream may infinite loop requesting block locations

2017-02-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863850#comment-15863850
 ] 

Kihwal Lee commented on HDFS-11379:
---

The impact of this bug can vary depending on how wide the job is, but it 
essentially becomes a DDoS using {{getBlockLocations()}}. In one incident, the 
NN could deal with the RPC load, but it ran out of space for logging.
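
The failure mode is easy to picture: the client keeps the file length it cached 
at stream creation, so a read past the new (shorter) length asks the NN for 
locations it can never get. A rough sketch of the loop shape, with fully 
hypothetical names (not the actual DFSInputStream internals):

{code}
// Sketch only: why a truncated/replaced file turns into a hot loop of
// getBlockLocations() calls. All names here are hypothetical.
interface NameNodeStub {
  // returns null when no block covers the requested offset
  Object getBlockLocations(String src, long offset, long length);
}

class InfiniteLoopSketch {
  static void read(NameNodeStub nn, String src, long offset, long length) {
    while (true) {
      Object blocks = nn.getBlockLocations(src, offset, length);
      if (blocks != null) {
        return; // found a block covering the offset
      }
      // After a truncate, offsets beyond the new length are never
      // covered, so without re-fetching the file length this retries
      // immediately and forever: a tight loop that hammers the NN.
    }
  }
}
{code}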

> DFSInputStream may infinite loop requesting block locations
> ---
>
> Key: HDFS-11379
> URL: https://issues.apache.org/jira/browse/HDFS-11379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HDFS-11379.branch-2.patch, HDFS-11379.trunk.patch
>
>
> DFSInputStream creation caches file size and initial range of locations.  If 
> the file is truncated (or replaced) and the client attempts to read outside 
> the initial range, the client goes into a tight infinite looping requesting 
> locations for the nonexistent range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11379) DFSInputStream may infinite loop requesting block locations

2017-02-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863840#comment-15863840
 ] 

Kihwal Lee commented on HDFS-11379:
---

Thanks, I just cherry-picked it to branch-2.8.0.

> DFSInputStream may infinite loop requesting block locations
> ---
>
> Key: HDFS-11379
> URL: https://issues.apache.org/jira/browse/HDFS-11379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HDFS-11379.branch-2.patch, HDFS-11379.trunk.patch
>
>
> DFSInputStream creation caches file size and initial range of locations.  If 
> the file is truncated (or replaced) and the client attempts to read outside 
> the initial range, the client goes into a tight infinite looping requesting 
> locations for the nonexistent range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11379) DFSInputStream may infinite loop requesting block locations

2017-02-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11379:
--
Fix Version/s: (was: 2.8.1)
   2.8.0

> DFSInputStream may infinite loop requesting block locations
> ---
>
> Key: HDFS-11379
> URL: https://issues.apache.org/jira/browse/HDFS-11379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HDFS-11379.branch-2.patch, HDFS-11379.trunk.patch
>
>
> DFSInputStream creation caches file size and initial range of locations.  If 
> the file is truncated (or replaced) and the client attempts to read outside 
> the initial range, the client goes into a tight infinite looping requesting 
> locations for the nonexistent range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11336) [SPS]: Remove xAttrs when movements done or SPS disabled

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863772#comment-15863772
 ] 

Hadoop QA commented on HDFS-11336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 59 unchanged - 0 fixed = 61 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.server.namenode.TestBlockStorageMovementAttemptedItems |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.TestFileChecksum |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852332/HDFS-11336-HDFS-10285.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af09d728fde2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / efe7a1c |
| Default Java | 1.8.0_121 |

[jira] [Updated] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode

2017-02-13 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11395:
--
Attachment: HDFS-11395.000.patch

> RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the 
> Exception thrown from NameNode
> 
>
> Key: HDFS-11395
> URL: https://issues.apache.org/jira/browse/HDFS-11395
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11395.000.patch
>
>
> When using RequestHedgingProxyProvider, in case of Exception (like 
> FileNotFoundException) from ActiveNameNode, 
> {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} 
> receives {{ExecutionException}} since we use {{CompletionService}} for the 
> call. The ExecutionException is put into a map and wrapped with 
> {{MultiException}}.
> So for a FileNotFoundException the client receives 
> {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)))))}}
> It will cause problems in clients which handle RemoteExceptions.
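
A minimal sketch of the unwrapping a fix needs to do, with hypothetical names 
(this is not the attached patch): peel off the layers added by 
{{CompletionService}} and reflection so the original {{RemoteException}} can be 
rethrown to the client.

{code}
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.ExecutionException;

// Sketch only: recover the root cause (e.g. a RemoteException subclass
// such as FileNotFoundException) from the wrappers added by
// CompletionService (ExecutionException) and reflection
// (InvocationTargetException).
public class UnwrapSketch {
  static Throwable unwrap(Throwable t) {
    while ((t instanceof ExecutionException
        || t instanceof InvocationTargetException)
        && t.getCause() != null) {
      t = t.getCause();
    }
    return t;
  }
}
{code}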



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11390) Add process name to httpfs process

2017-02-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863593#comment-15863593
 ] 

Weiwei Yang edited comment on HDFS-11390 at 2/13/17 12:35 PM:
--

Hi [~jzhuge]

Thanks, I noticed that too. Since HADOOP-14050 is already committed, let's use 
the same approach here and stick to the current patch, agree?


was (Author: cheersyang):
Hi [~jzhuge]

Thanks, I noticed that too. Since HADOOP-14050 is already committed, let's use 
the same approach here, so stick to the current patch, agree?

> Add process name to httpfs process
> --
>
> Key: HDFS-11390
> URL: https://issues.apache.org/jira/browse/HDFS-11390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, scripts
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11390-branch-2.01.patch
>
>
> Do the same for HttpFS as HADOOP-14050.
> No need to fix trunk because HDFS-10860 will take care of it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11390) Add process name to httpfs process

2017-02-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863593#comment-15863593
 ] 

Weiwei Yang commented on HDFS-11390:


Hi [~jzhuge]

Thanks, I noticed that too. Since HADOOP-14050 is already committed, let's use 
the same approach here, so stick to the current patch, agree?

> Add process name to httpfs process
> --
>
> Key: HDFS-11390
> URL: https://issues.apache.org/jira/browse/HDFS-11390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, scripts
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11390-branch-2.01.patch
>
>
> Do the same for HttpFS as HADOOP-14050.
> No need to fix trunk because HDFS-10860 will take care of it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11391) Numeric usernames do not work with WebHDFS FS (write access)

2017-02-13 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863580#comment-15863580
 ] 

Pierre Villard commented on HDFS-11391:
---

[~yzhangal], anything else required on my side to get the PR reviewed/merged?

> Numeric usernames do not work with WebHDFS FS (write access)
> ---
>
> Key: HDFS-11391
> URL: https://issues.apache.org/jira/browse/HDFS-11391
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> In HDFS-4983, a property was introduced to configure the pattern validating 
> the names of users interacting with WebHDFS, because the default pattern 
> excluded names starting with numbers.
> The problem is that this fix works only for read access. For write access 
> against a datanode, the default pattern is still applied regardless of the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11336) [SPS]: Remove xAttrs when movements done or SPS disabled

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11336:
--
Status: Patch Available  (was: Open)

> [SPS]: Remove xAttrs when movements done or SPS disabled
> 
>
> Key: HDFS-11336
> URL: https://issues.apache.org/jira/browse/HDFS-11336
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11336-HDFS-10285.001.patch
>
>
> 1. When we finish the movement successfully, we should clean Xattrs.
> 2. When we disable SPS dynamically, we should clean Xattrs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11336) [SPS]: Remove xAttrs when movements done or SPS disabled

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11336:
--
Attachment: HDFS-11336-HDFS-10285.001.patch

Uploaded the v1 patch for this JIRA.

> [SPS]: Remove xAttrs when movements done or SPS disabled
> 
>
> Key: HDFS-11336
> URL: https://issues.apache.org/jira/browse/HDFS-11336
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11336-HDFS-10285.001.patch
>
>
> 1. When we finish the movement successfully, we should clean Xattrs.
> 2. When we disable SPS dynamically, we should clean Xattrs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11390) Add process name to httpfs process

2017-02-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863266#comment-15863266
 ] 

John Zhuge commented on HDFS-11390:
---

+1 (non-binding) The change LGTM. Tested the patch on CDH5.

As for "-Dproc_httpfs" in the middle of the jvm opts, JAVA_OPTS may be a little 
better, but not too much. Here is how the java command line is formulated in 
catalina.sh:
{code}
"$_RUNJAVA" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS
{code}
You can see long entries for LOGGING_CONFIG and LOGGING_MANAGER on the jvm 
command line after applying the patch:
{noformat}
52941 Bootstrap 
-Djava.util.logging.config.file=.../share/hadoop/httpfs/tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager 
-Djdk.tls.ephemeralDHKeySize=2048 -Dproc_httpfs -Dhttpfs.home.dir=...
{noformat}


> Add process name to httpfs process
> --
>
> Key: HDFS-11390
> URL: https://issues.apache.org/jira/browse/HDFS-11390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, scripts
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11390-branch-2.01.patch
>
>
> Do the same for HttpFS as HADOOP-14050.
> No need to fix trunk because HDFS-10860 will take care of it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11407:


 Summary: Document the missing usages of OfflineImageViewer 
processors
 Key: HDFS-11407
 URL: https://issues.apache.org/jira/browse/HDFS-11407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, tools
Affects Versions: 3.0.0-alpha2
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


Currently the documentation only introduces the usage of the oiv processors 
{{Web}} and {{XML}}. Actually, the number of oiv processors has increased to 5; 
the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
missing from the documentation. Documenting them will help users know how to 
use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863212#comment-15863212
 ] 

Yiqun Lin commented on HDFS-11407:
--

Hi [~ajisakaa], would you review this? Documenting it will help users know 
about and use the {{hdfs oiv}} tool.

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. Actually, the number of oiv processors has increased to 
> 5; the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users know how to 
> use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11408) The config name of Balance bandwidh is out of date

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11408:
-
Summary: The config name of Balance bandwidh is out of date  (was: Update 
the config name of Balance bandwidh)

> The config name of Balance bandwidh is out of date
> --
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11408) The config name of balance bandwidth is out of date

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11408:
-
Summary: The config name of balance bandwidth is out of date  (was: The 
config name of Balance bandwidh is out of date)

> The config name of balance bandwidth is out of date
> ---
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11408) Update the config name of Balance bandwidh

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11408:
-
Status: Patch Available  (was: Open)

Attached a simple patch for this.

> Update the config name of Balance bandwidh
> --
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11408) Update the config name of Balance bandwidh

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11408:
-
Attachment: HDFS-11408.001.patch

> Update the config name of Balance bandwidh
> --
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11407:
-
Attachment: HDFS-11407.001.patch

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. Actually, the number of oiv processors has increased to 
> 5; the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users know how to 
> use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11408) Update the config name of Balance bandwidh

2017-02-13 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11408:


 Summary: Update the config name of Balance bandwidh
 Key: HDFS-11408
 URL: https://issues.apache.org/jira/browse/HDFS-11408
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0-alpha2
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
deprecated and replaced by the new name 
{{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
project, including code comments and documentation.
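
For reference, the old name keeps working only because of Hadoop's config 
deprecation mapping, which is why the stale references are documentation-only 
debt. A small sketch of that mechanism (illustrative, not part of the patch):

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: Configuration maps a deprecated key to its replacement,
// so values set under the old name are visible under the new one.
public class BandwidthKeySketch {
  public static void main(String[] args) {
    Configuration.addDeprecation(
        "dfs.balance.bandwidthPerSec",
        "dfs.datanode.balance.bandwidthPerSec");

    Configuration conf = new Configuration(false);
    conf.set("dfs.balance.bandwidthPerSec", "10485760");

    // Reads through the deprecation mapping: prints 10485760.
    System.out.println(
        conf.get("dfs.datanode.balance.bandwidthPerSec"));
  }
}
{code}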



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11333) Namenode unable to start if plugins can not be found

2017-02-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863516#comment-15863516
 ] 

Yiqun Lin commented on HDFS-11333:
--

The suggestion from [~aw] also makes sense to me. Based on the v01 patch, I 
think it's sufficient to just rethrow the exception here: 
{code}
+try {
+  plugins = conf.getInstances(DFS_NAMENODE_PLUGINS_KEY,
+  ServicePlugin.class);
+} catch (RuntimeException e) {
+  LOG.warn("Unable to load plugins", e);
+  throw e;  // <== suggested addition
+}
{code}
And this logic can be reused for the datanode plugin loading.
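
A minimal sketch of such a shared helper, with hypothetical names (this shows 
the shape of the suggestion, not the v01 patch itself):

{code}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ServicePlugin;
import org.slf4j.Logger;

// Sketch only: load service plugins for a given config key, warn with
// context, and rethrow so startup still fails loudly. The same helper
// could be called with dfs.namenode.plugins and dfs.datanode.plugins.
public final class PluginLoaderSketch {
  private PluginLoaderSketch() {}

  static List<ServicePlugin> loadPlugins(Configuration conf, String key,
      Logger log) {
    try {
      return conf.getInstances(key, ServicePlugin.class);
    } catch (RuntimeException e) {
      log.warn("Unable to load plugins configured under " + key, e);
      throw e;
    }
  }
}
{code}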

> Namenode unable to start if plugins can not be found
> 
>
> Key: HDFS-11333
> URL: https://issues.apache.org/jira/browse/HDFS-11333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-11333.001.patch
>
>
> If NameNode is unable to find plugins (specified in dfs.namenode.plugins), it 
> terminates abruptly with the following stack trace:
> {quote}
> Failed to start namenode.
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class XXX not 
> found
>   at org.apache.hadoop.conf.Configuration.getClasses(Configuration.java:2178)
>   at 
> org.apache.hadoop.conf.Configuration.getInstances(Configuration.java:2250)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:713)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:691)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:843)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:822)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1543)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
> {quote}
> We should catch this exception, log a warning message and let it proceed, as 
> missing the third-party library does not affect the functionality of the 
> NameNode. We caught this bug during a CDH upgrade where a third-party plugin 
> was not in the lib directory of the newer version of CDH.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11407:
-
Status: Patch Available  (was: Open)

> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. Actually, the number of oiv processors has increased to 
> 5; the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users know how to 
> use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11407) Document the missing usages of OfflineImageViewer processors

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863236#comment-15863236
 ] 

Hadoop QA commented on HDFS-11407:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11407 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852291/HDFS-11407.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 99cabed8e785 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01be450 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18355/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18355/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document the missing usages of OfflineImageViewer processors
> 
>
> Key: HDFS-11407
> URL: https://issues.apache.org/jira/browse/HDFS-11407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11407.001.patch
>
>
> Currently the documentation only introduces the usage of the oiv processors 
> {{Web}} and {{XML}}. Actually, the number of oiv processors has increased to 
> 5; the processors {{ReverseXML}}, {{FileDistribution}} and {{Delimited}} are 
> missing from the documentation. Documenting them will help users know how to 
> use this tool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11408) Update the config name of Balance bandwidh

2017-02-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11408:
-
Component/s: balancer & mover

> Update the config name of Balance bandwidh
> --
>
> Key: HDFS-11408
> URL: https://issues.apache.org/jira/browse/HDFS-11408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11408.001.patch
>
>
> The config of balance bandwidth {{dfs.balance.bandwidthPerSec}} has been 
> deprecated and replaced by the new name 
> {{dfs.datanode.balance.bandwidthPerSec}}. We should update this across the 
> project, including code comments and documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org