[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638661#comment-15638661
 ] 

Lantao Jin commented on HDFS-11102:
---

[~raviprak] I agree a yes/no confirmation is not a graceful solution, so I 
opened another ticket [HDFS-11111|https://issues.apache.org/jira/browse/HDFS-11111] 
to discuss another way to handle this accidental-deletion issue.

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DevOps engineer, I have seen many cases where users delete their 
> data by mistake. Most of it can be recovered from trash, but the rest is not 
> so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case:
> If a user wants to delete some directory from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will be moved into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop knows the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's good so far.
> But suppose the purpose is to delete some data to free up space. The user 
> then deletes it from trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> Why not just delete 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete"? Because by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you type this command by copy-paste and it includes a space or an 
> invisible character, the whole /user/someone directory and the whole 
> /user/someone/.Trash will be deleted. *Jesus, that means the directory 
> /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.
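
To make the hazard concrete, here is a minimal sketch of the proposed double 
check. It is illustrative only (the class and method names are hypothetical, 
not actual FsShell code): before deleting, any argument that contains a 
".Trash" path component triggers a confirmation prompt.
{code}
import java.util.Scanner;

// Hypothetical sketch of the proposed confirmation; not actual Hadoop code.
public class TrashDeleteGuard {

  // True if any path component is exactly ".Trash".
  static boolean touchesTrash(String path) {
    for (String part : path.split("/")) {
      if (part.equals(".Trash")) {
        return true;
      }
    }
    return false;
  }

  // Prompt the user; only an explicit "y" proceeds.
  static boolean confirm(String path) {
    System.out.printf("rm: '%s' is inside a trash directory. Delete permanently? (y/N) ", path);
    return new Scanner(System.in).nextLine().trim().equalsIgnoreCase("y");
  }

  public static void main(String[] args) {
    for (String path : args) {
      if (touchesTrash(path) && !confirm(path)) {
        System.out.println("skipped " + path);
        continue;
      }
      // A real implementation would call FileSystem#delete(path, true) here.
      System.out.println("would delete " + path);
    }
  }
}
{code}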



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-04 Thread Lantao Jin (JIRA)
Lantao Jin created HDFS-11111:
-

 Summary: Delete something in .Trash using "rm" should be 
forbidden without safety option 
 Key: HDFS-11111
 URL: https://issues.apache.org/jira/browse/HDFS-11111
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Lantao Jin


As we discussed in HDFS-11102, a double confirmation seems not to be a 
graceful solution for users. But unexpectedly deleting trash files is still an 
incident-prone issue. The user behaviour I worry about is rm-ing something in 
trash, not rm-ing something outside trash with the "-skipTrash" option (that 
is a very purposeful action).

So it is not the same case as HADOOP-12358. The proposed solution is to throw 
an exception and remind the user to add a "-trash" option to delete 
directories in trash safely:
{code}
Can not delete something in trash directly! Please add "-trash" or "-T" in 
"rm" command to do that.
{code}
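
A minimal sketch of that check (illustrative only; the "-trash"/"-T" flag and 
the message follow the proposal above and are not existing Hadoop code):
{code}
import java.io.IOException;
import java.util.Arrays;

// Hypothetical guard for "rm" on paths inside .Trash; not actual Hadoop code.
public class TrashRmCheck {
  static void checkTrashDelete(String path, boolean trashOptionGiven) throws IOException {
    // A path is "in trash" if any component is exactly ".Trash".
    boolean inTrash = Arrays.asList(path.split("/")).contains(".Trash");
    if (inTrash && !trashOptionGiven) {
      throw new IOException("Can not delete something in trash directly! "
          + "Please add \"-trash\" or \"-T\" in \"rm\" command to do that.");
    }
  }
}
{code}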



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_18.patch

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
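
The fix amounts to validating required parameters before they are used; a 
hedged sketch of the idea (an illustrative helper, not the actual 
NamenodeWebHdfsMethods change):
{code}
// Illustrative parameter validation; the helper name is hypothetical.
public class RequiredParamCheck {
  static String requireParam(String name, String value) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException("Required param " + name + " is missing");
    }
    return value;
  }

  public static void main(String[] args) {
    // RENAMESNAPSHOT needs both snapshot names; a null should yield a clear
    // client error instead of a NullPointerException.
    requireParam("snapshotname", "SNAPSHOTNAME");
    requireParam("oldsnapshotname", null); // throws IllegalArgumentException
  }
}
{code}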



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638505#comment-15638505
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 267 unchanged - 6 fixed = 276 total (was 273) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 76m 
24s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837338/HDFS-9337_17.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5f5e94c11a53 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8bab3d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17439/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17439/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17439/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17439/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: 

[jira] [Updated] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10756:
--
Release Note: "getTrashRoot" returns a trash root for a path. Currently in 
DFS if the path "/foo" is a normal path, it returns "/user/$USER/.Trash" for 
"/foo" and if "/foo" is an encrypted zone, it returns "/foo/.Trash/$USER" for 
the child file/dir of "/foo". This patch is about to override the old 
"getTrashRoot" of httpfs and webhdfs, so that the behavior of returning trash 
root in httpfs and webhdfs are consistent with DFS.  (was: "getTrashRoot" 
returns a trash root for a path. Currently in DFS if the path "/foo" is a 
normal path, it returns "/user/$USER/.Trash" for "/foo" and if "/foo" is an 
encrypted zone, it returns "/foo/.Trash/$USER" for the child file/dir of 
"/foo". This patch is about to override the old "getTrashRoot" of httpfs and 
webhdfs so that the behavior of returning trash root in httpfs and webhdfs are 
consistent with DFS.)

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.
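
With this change the trash root can be queried over REST; a hedged example 
(host and path are illustrative, and the response shape is approximate):
{code}
curl -i "http://<NAMENODE>:50070/webhdfs/v1/foo?op=GETTRASHROOT"

# Expected response body, roughly:
# {"Path": "/user/$USER/.Trash"}
# and, for a path inside an encryption zone /ez:
# {"Path": "/ez/.Trash/$USER"}
{code}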



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10756:
--
Release Note: "getTrashRoot" returns a trash root for a path. Currently in 
DFS if the path "/foo" is a normal path, it returns "/user/$USER/.Trash" for 
"/foo" and if "/foo" is an encrypted zone, it returns "/foo/.Trash/$USER" for 
the child file/dir of "/foo". This patch is about to override the old 
"getTrashRoot" of httpfs and webhdfs so that the behavior of returning trash 
root in httpfs and webhdfs are consistent with DFS.  (was: "getTrashRoot" 
returns a trash root for a path. Currently in DFS if the path "/foo" is a 
normal path, it returns "/user/$USER/.Trash" and if "/foo" is an encrypted 
zone, it returns "/foo/.Trash/$USER" for the child file/dir of "/foo". This 
patch is about to override the old "getTrashRoot" of httpfs and webhdfs so that 
the behavior of returning trash root in httpfs and webhdfs are consistent with 
DFS.)

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638466#comment-15638466
 ] 

Yuanbo Liu commented on HDFS-10756:
---

Sure, I've updated the release note.

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10756:
--
Release Note: "getTrashRoot" returns a trash root for a path. Currently in 
DFS if the path "/foo" is a normal path, it returns "/user/$USER/.Trash" and if 
"/foo" is an encrypted zone, it returns "/foo/.Trash/$USER" for the child 
file/dir of "/foo". This patch is about to override the old "getTrashRoot" of 
httpfs and webhdfs so that the behavior of returning trash root in httpfs and 
webhdfs are consistent with DFS.

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638436#comment-15638436
 ] 

Hadoop QA commented on HDFS-11110:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 32 unchanged - 0 fixed = 35 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11110 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837330/HDFS-11110.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7b03ef08a55b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8bab3d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17438/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17438/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17438/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17438/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hardcoded BLOCK SIZE value of 4096 is not appropriate for 

[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_17.patch

Attaching a patch fixing whitespaces and cc, please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, HDFS-9337_17.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638283#comment-15638283
 ] 

Xiao Chen commented on HDFS-10756:
--

Committed this to trunk and branch-2.
Thanks Yuanbo for the work, Wei-Chiu and Andrew for the reviews.

Hi [~yuanbo], could you post a short release note? Thanks.

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638279#comment-15638279
 ] 

Hudson commented on HDFS-10756:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10777 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10777/])
HDFS-10756. Expose getTrashRoot to HTTPFS and WebHDFS. Contributed by Yuanbo 
Liu. (xiao: rev d8bab3dcb693b2773ede9a6e4f71ae85ee056f79)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/index.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java


> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10756:
-
Release Note:   (was: Committed this to trunk and branch-2.
Thanks Yuanbo for the work, Wei-Chiu and Andrew for the reviews.)

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10756:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
 Release Note: 
Committed this to trunk and branch-2.
Thanks Yuanbo for the work, Wei-Chiu and Andrew for the reviews.
   Status: Resolved  (was: Patch Available)

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs are not 
> allowed, when an EZ file is deleted via CLI, it calls in to [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or java users who call FileSystem 
> API. But for users via httpfs/webhdfs, currently there is no way to figure 
> out what the trash root would be. This jira is proposing we add such 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Description: Use the 
NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize() function 
rather than a hard-coded block size.

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>
> Use the NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize() 
> function rather than a hard-coded block size.
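
A minimal sketch of the proposed substitution, assuming the NativeIO API named 
above (illustrative; this is not the attached patch):
{code}
import org.apache.hadoop.io.nativeio.NativeIO;

// Illustrative: derive the page size at runtime instead of hard-coding 4096,
// which is wrong on PowerPC (typically 64 KiB pages).
public class PageSizeSketch {
  public static void main(String[] args) {
    long pageSize =
        NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize();
    System.out.println("OS page size: " + pageSize);
  }
}
{code}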



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Attachment: HDFS-11110.001.patch

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Status: Patch Available  (was: Open)

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638167#comment-15638167
 ] 

Hadoop QA commented on HDFS-11058:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 51s{color} 
| {color:red} root generated 3 new + 691 unchanged - 3 fixed = 694 total (was 
694) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} root: The patch generated 0 new + 220 unchanged - 4 
fixed = 220 total (was 224) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837295/HDFS-11058.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc5115b97c2f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6bb741f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17437/artifact/patchprocess/diff-compile-javac-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17437/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17437/testReport/ |
| modules | C: 

[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Assignee: (was: ramtin)

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin reassigned HDFS-11110:
-

Assignee: ramtin

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11085) Add unit test for NameNode failing to start when name dir is unwritable

2016-11-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637879#comment-15637879
 ] 

Hudson commented on HDFS-11085:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10774 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10774/])
HDFS-11085. Add unit test for NameNode failing to start when name dir is 
unwritable. (liuml07: rev 0c0ab102ab392ba07ed2aa8d8a67eef4c20cad9b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java


> Add unit test for NameNode failing to start when name dir is unwritable
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.
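
The shape of such a test, as a hedged sketch (the committed test lives in 
TestStartup; this is not its code, and the path is illustrative):
{code}
import java.io.File;
import org.junit.Assert;
import org.junit.Test;

// Illustrative test shape only.
public class NameDirUnwritableSketch {
  @Test
  public void nameNodeShouldFailToStartWithUnwritableNameDir() throws Exception {
    File nameDir = new File(System.getProperty("java.io.tmpdir"), "name");
    Assert.assertTrue(nameDir.isDirectory() || nameDir.mkdirs());
    try {
      Assert.assertTrue(nameDir.setWritable(false));
      // Start the NameNode here (e.g. via MiniDFSCluster pointed at nameDir)
      // and assert that startup throws, since fsimage cannot be written.
    } finally {
      nameDir.setWritable(true); // restore so later runs can reuse the dir
    }
  }
}
{code}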



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)
ramtin created HDFS-11110:
-

 Summary: Hardcoded BLOCK SIZE value of 4096 is not appropriate for 
PowerPC
 Key: HDFS-11110
 URL: https://issues.apache.org/jira/browse/HDFS-11110
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ramtin
Assignee: ramtin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-04 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11058:
--
Attachment: HDFS-11058.02.patch

Attached v02 patch to address previous review comments.
# Fixed a test failure
# Refactored ViewFsMountPoint to carry URIs instead of a FileSystem object, 
and removed the generics as there is no more need for them.
# Renamed ViewFsUtil to ViewFileSystemUtil
# Removed code pertaining to 'Unresponsive FileSystem' from DfUsage. Will 
propose this as part of a new task - HDFS-11109
[~andrew.wang], kindly take a look at the updated patch.

Sample output of the df command when run against a federated cluster of 2 
namespaces:

{noformat}

# hadoop fs -df -h /
Filesystem               Size   Used  Available  Use%  Mounted on
hdfs://127.0.0.1:51001/  1.4 T  48 K  942.3 G    0%    /nn1
hdfs://127.0.0.1:50001/  1.4 T  48 K  942.3 G    0%    /nn0

# hadoop fs -df -h /nn0
Filesystem               Size   Used  Available  Use%  Mounted on
hdfs://127.0.0.1:50001/  1.4 T  48 K  942.3 G    0%    /nn0
manoj@~/work/test/hadev-mg(master): hadoop fs -df -h /nn0/user.
df: `/nn0/user.': No such file or directory

# hadoop fs -df -h /nn0/user/
Filesystem               Size   Used  Available  Use%  Mounted on
hdfs://127.0.0.1:50001/  1.4 T  48 K  942.3 G    0%    /nn0

# hadoop fs -df -h /nn0/user/manoj
Filesystem               Size   Used  Available  Use%  Mounted on
hdfs://127.0.0.1:50001/  1.4 T  48 K  942.3 G    0%    /nn0

# hadoop fs -df -h /abc
df: `/abc': No such file or directory

# hadoop fs -df -h /nn0/abc
df: `/nn0/abc': No such file or directory

{noformat}
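
For reference, a hedged sketch of the per-mount-point idea behind the patch 
(names and the hard-coded mount table are illustrative, not the actual 
ViewFileSystemUtil code): report the backing file system's FsStatus for each 
mount.
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

// Illustrative only: print one df-style row per mount target.
public class PerMountDf {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // mount-point -> target URI pairs; the real patch reads these from the
    // viewfs mount table, here they are hard-coded for the sketch.
    String[][] mounts = {
        {"/nn0", "hdfs://127.0.0.1:50001/"},
        {"/nn1", "hdfs://127.0.0.1:51001/"},
    };
    System.out.println("Filesystem  Size  Used  Available  Use%  Mounted on");
    for (String[] m : mounts) {
      FileSystem fs = FileSystem.get(URI.create(m[1]), conf);
      FsStatus st = fs.getStatus();
      long cap = st.getCapacity();
      System.out.printf("%s  %d  %d  %d  %d%%  %s%n",
          m[1], cap, st.getUsed(), st.getRemaining(),
          cap == 0 ? 0 : 100 * st.getUsed() / cap, m[0]);
    }
  }
}
{code}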


> Implement 'hadoop fs -df' command for ViewFileSystem   
> ---
>
> Key: HDFS-11058
> URL: https://issues.apache.org/jira/browse/HDFS-11058
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: viewfs
> Attachments: HDFS-11058.01.patch, HDFS-11058.02.patch
>
>
> Df command doesn't seem to work well with ViewFileSystem. It always reports 
> used data as 0. Here is the client mount table configuration I am using 
> against a federated cluster of 2 NameNodes and 2 DataNodes. 
> {code}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ..
>   <property>
>     <name>fs.default.name</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ..
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn0</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn1</name>
>     <value>hdfs://127.0.0.1:51001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn2</name>
>     <value>hdfs://127.0.0.1:52001/nn2</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn3</name>
>     <value>hdfs://127.0.0.1:52001/nn3</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterY.linkMergeSlash</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
> </configuration>
> {code}
> {{Df}} command always reports Size/Available as 8.0E and the usage as 0 for 
> any federated cluster. 
> {noformat}
> # hadoop fs -fs viewfs://ClusterX/ -df /
> Filesystem          Size                 Used  Available            Use%
> viewfs://ClusterX/  9223372036854775807  0     9223372036854775807  0%
> # hadoop fs -fs viewfs://ClusterX/ -df -h /
> Filesystem          Size   Used  Available  Use%
> viewfs://ClusterX/  8.0 E  0     8.0 E      0%
> # hadoop fs -fs viewfs://ClusterY/ -df -h /
> Filesystem          Size   Used  Available  Use%
> viewfs://ClusterY/  8.0 E  0     8.0 E      0%
> {noformat}
> Whereas the {{Du}} command seems to work as expected even with ViewFileSystem.
> {noformat}
> # hadoop fs -fs viewfs://ClusterY/ -du -h /
> 10.6 K  31.8 K  /build.log.16y
> 0   0   /user
> # hadoop fs -fs viewfs://ClusterX/ -du -h /
> 10.6 K  31.8 K  /nn0
> 0   0   /nn1
> 20.2 K  35.8 K  /nn3
> 40.6 K  34.3 K  /nn4
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11083:
-
Description: 
{{hdfs dfsadmin -report}} has very useful information about the cluster. There 
are some existing customized tools that depend on this command functionality. 
We should add a unit test for it. Specifically,
# If one datanode is dead, the report should indicate this
# If one block is corrupt, the "Missing blocks:" field should report this
# TBD...

  was:
{{hdfs dfsadmin -report}} has very useful information about the cluster. There 
are some existing customized tools that depend on this command functionality. 
We should add a unit test for it. Specifically,
# If one datanode is dead, the report should indicate this
# If one block is corrupt, the "Missing blocks:" field should report this
# etc...


> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add a unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11109) ViewFileSystem Df command should work even when the backing NameServices are down

2016-11-04 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11109:
-

 Summary: ViewFileSystem Df command should work even when the 
backing NameServices are down
 Key: HDFS-11109
 URL: https://issues.apache.org/jira/browse/HDFS-11109
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


With HDFS-11058, the df command will work well with ViewFileSystem. A 
federated cluster can be backed by several NameServers, each managing its own 
namespace. Even when some of the NameServers are down, the federated cluster 
will continue to work well for the NameServers that are alive.

But the {{hadoop fs -df}} command, when run against the federated cluster, 
expects all the backing NameServers to be up and running. Otherwise, the 
command errors out with an exception.

It would be preferable to have the federated cluster commands highly 
available, matching the namespace partition availability.

{noformat}
#hadoop fs -df -h /
df: Call From manoj-mbp.local/172.16.3.66 to localhost:52001 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused
{noformat}
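
A hedged sketch of the desired behaviour (illustrative only, continuing the 
per-mount reporting idea from HDFS-11058): catch the connection failure per 
mount and keep reporting the mounts that are reachable.
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.fs.Path;

// Illustrative only: degrade gracefully when one NameService is down.
public class TolerantDf {
  static void printRow(FileSystem fs, Path mount) {
    try {
      FsStatus st = fs.getStatus(mount);
      System.out.printf("%s  size=%d used=%d avail=%d  %s%n",
          fs.getUri(), st.getCapacity(), st.getUsed(), st.getRemaining(), mount);
    } catch (IOException e) {
      // e.g. java.net.ConnectException when the backing NameNode is down
      System.out.printf("%s  (unavailable: %s)  %s%n",
          fs.getUri(), e.getMessage(), mount);
    }
  }
}
{code}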



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit test for NameNode failing to start when name dir is unwritable

2016-11-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11085:
-
Summary: Add unit test for NameNode failing to start when name dir is 
unwritable  (was: Add unit tests for NameNode failing to startup when name dir 
can not be written)

> Add unit test for NameNode failing to start when name dir is unwritable
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11085:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.8}} branches. I resolved trivial 
conflicts when committing. Thanks [~xiaobingo] for contributing.

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.






[jira] [Commented] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637797#comment-15637797
 ] 

Hadoop QA commented on HDFS-11103:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 76m 
49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837273/HDFS-11103-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 94abc682f1e2 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / eb8f2b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17436/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17436/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch, 
> 

[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637758#comment-15637758
 ] 

Wei-Chiu Chuang commented on HDFS-11056:


Hi [~kihwal] thanks for the review!

This fix re-computes the last chunk checksum when converting a 
finalized/temporary replica to an rbw replica. Do you think it would be more 
efficient to store the last chunk checksum in the finalized/temporary replica 
object, so that frequent open->append->close operations avoid the re-computation?
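
For context, a sketch of the caching idea being asked about; the field and class names are illustrative, not the actual FsDatasetImpl types:

{code}
/**
 * Illustrative sketch: keep the checksum of the last (partial) chunk on the
 * replica object so that converting finalized -> rbw for append does not
 * have to re-read it from the meta file every time.
 */
class FinalizedReplicaSketch {
  private byte[] lastChunkChecksum; // cached when the replica is finalized

  void setLastChunkChecksum(byte[] checksum) {
    this.lastChunkChecksum = checksum;
  }

  /** Used when converting finalized -> rbw for append. */
  byte[] getLastChunkChecksum() {
    return lastChunkChecksum; // avoids a disk read per open->append->close
  }
}
{code}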

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of which open-append-closes a file continuously 
> while the other open-read-closes the same file continuously, the reader 
> eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This was seen 
> with httpfs clients, but there's no reason to believe it doesn't happen to 
> any append client.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=trueugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=opensrc=/tmp/bar.txt
> dst=nullperm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 

[jira] [Updated] (HDFS-11108) Ozone: use containers with the state machine

2016-11-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11108:

Attachment: HDFS-11108-HDFS-7240.001.patch

Adding a patch for early code review. This patch depends on HDFS-11081 and 
HDFS-11103.

> Ozone: use containers with the state machine
> 
>
> Key: HDFS-11108
> URL: https://issues.apache.org/jira/browse/HDFS-11108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11108-HDFS-7240.001.patch
>
>
> Use containers via the newly added state machine.






[jira] [Created] (HDFS-11108) Ozone: use containers with the state machine

2016-11-04 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11108:
---

 Summary: Ozone: use containers with the state machine
 Key: HDFS-11108
 URL: https://issues.apache.org/jira/browse/HDFS-11108
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Use containers via the newly added state machine.






[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637670#comment-15637670
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 17 new + 266 unchanged - 6 fixed = 283 total (was 272) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
58s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837253/HDFS-9337_16.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4c11601b6e18 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de01327 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17435/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17435/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17435/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17435/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: 

[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-04 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637538#comment-15637538
 ] 

Chen Liang commented on HDFS-10941:
---

The failed tests seem unrelated. Local runs never had 
{{TestEncryptionZones.testStartFileRetry}} fail, and the other three tests 
fail randomly both with and without the patch, so the tests themselves appear 
to be flaky.

> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it has a trace 
> log for each block in the cluster being processed (blocks are handled in 
> batches per iteration, with a 10s sleep in between). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, this is not very useful: dumping every block in the cluster will 
> overwhelm the namenode log without adding much information, given that the 
> majority of blocks are not over/under replicated. This ticket is opened to 
> improve the log for easier troubleshooting of block replication related 
> issues by:
>  
> 1) adding a debug log for blocks that get an under/over replicated result 
> during {{processMisReplicatedBlock()}}, or 
> 2) changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}} 
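
A sketch of what option 2) could look like at the same call site, assuming {{MisReplicationResult.OK}} is the uninteresting result (illustrative, not a committed patch):

{code}
MisReplicationResult res = processMisReplicatedBlock(block);
if (res != MisReplicationResult.OK && LOG.isDebugEnabled()) {
  // Only mis-replicated blocks are worth logging; logging every OK block
  // would flood the namenode log.
  LOG.debug("block " + block + ": " + res);
}
{code}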






[jira] [Updated] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11103:

Attachment: HDFS-11103-HDFS-7240.002.patch

Updated to patch v2.

> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch, 
> HDFS-11103-HDFS-7240.002.patch
>
>
> Cleanup some unwanted dependencies.






[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637516#comment-15637516
 ] 

Hadoop QA commented on HDFS-11099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
46s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
34s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:78fc6b6 |
| JIRA Issue | HDFS-11099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837227/HDFS-11099.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9b26c3c35367 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 4f3696d |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_111 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17434/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17434/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch, 
> HDFS-11099.HDFS-8707.001.patch
>
>
> hdfsDNInfo is missing rack information.

[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-04 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637474#comment-15637474
 ] 

James Clampffer commented on HDFS-11099:


Thanks for adding that test [~xiaowei.zhu], I'll land this late tonight or 
tomorrow morning assuming the next CI run passes.

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch, 
> HDFS-11099.HDFS-8707.001.patch
>
>
> hdfsDNInfo is missing rack information.






[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_16.patch

Fixed CC and whitespace issues. Test failures are not related to the patch; please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
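
A minimal sketch of the kind of fail-fast validation this calls for; the helper below is illustrative, not the actual WebHDFS parameter classes:

{code}
/** Illustrative check only, not the actual WebHDFS handler code. */
final class RequiredParamSketch {
  static String requireParam(String name, String value) {
    if (value == null || value.isEmpty()) {
      // Fail fast with a clear message instead of letting the null reach
      // the snapshot handling and surface as a bare NullPointerException.
      throw new IllegalArgumentException(
          "Required parameter '" + name + "' is missing");
    }
    return value;
  }
}
{code}

With a guard like this, a RENAMESNAPSHOT request that omits one of its snapshot-name parameters would get a descriptive error rather than a bare NPE.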






[jira] [Comment Edited] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637425#comment-15637425
 ] 

Ravi Prakash edited comment on HDFS-11102 at 11/4/16 7:35 PM:
--

I thought the requirement to specify "-skipTrash" was the explicit consent 
required of users. This can be a rabbit hole. Once we get a "yes/no" 
confirmation, a few years later we'll add another "Are you sure?" confirmation 
and then another "Really, are you sure?" confirmation.

Also given that this will be an incompatible change, I'm a -0.5 on it.


was (Author: raviprak):
I thought the requirement to specify "-skipTrash" was the explicit consent 
required of users. This can be a rabbit hole. Once we get a "yes/no" 
confirmation, a few years later we'll add another "Are you sure?" confirmation 
and then another "Really, are you sure?" confirmation

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS, I have seen lots of cases where users deleted their data 
> by mistake. Most of them could be recovered from trash, but the rest were 
> not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop -fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop knows the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's good so far.
> But the purpose is to delete some data to save space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop -fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you type this command by copy-paste and it includes some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you also agree with this design, I will offer a patch.
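
A sketch of the guard being proposed — the method and message are illustrative, not the actual {{FsShell}} delete implementation:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.Path;

/** Illustrative guard only. */
class TrashGuard {
  static void checkTrashDeletion(Path target, boolean confirmed)
      throws IOException {
    if (!confirmed && target.toUri().getPath().contains(".Trash")) {
      // Any rm that touches .Trash deletes data permanently, so require an
      // explicit confirmation option instead of silently proceeding.
      throw new IOException("rm on " + target + " would permanently delete "
          + "trash contents; pass the confirmation option to proceed.");
    }
  }
}
{code}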






[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637425#comment-15637425
 ] 

Ravi Prakash commented on HDFS-11102:
-

I thought the requirement to specify "-skipTrash" was the explicit consent 
required of users. This can be a rabbit hole. Once we get a "yes/no" 
confirmation, a few years later we'll add another "Are you sure?" confirmation 
and then another "Really, are you sure?" confirmation

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS, I have seen lots of cases where users deleted their data 
> by mistake. Most of them could be recovered from trash, but the rest were 
> not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop -fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop knows the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's good so far.
> But the purpose is to delete some data to save space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop -fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you type this command by copy-paste and it includes some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you also agree with this design, I will offer a patch.






[jira] [Commented] (HDFS-9868) add reading source cluster with HA access mode feature for DistCp

2016-11-04 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637395#comment-15637395
 ] 

Yongjun Zhang commented on HDFS-9868:
-

Thanks for working on this, guys. It sounds like the issue we are trying to 
address here is not limited to HA; can we modify the summary to be more 
accurate? Thanks.



> add reading source cluster with HA access mode feature for DistCp
> -
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. Copying huge amounts of data with 
> distcp can take a long time, and if the source cluster switches its active 
> namenode in the meantime, the distcp job will fail. This patch lets DistCp 
> read source cluster files in HA access mode. A source cluster configuration 
> file needs to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
>   configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}
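
For illustration, a sketch of what -sourceClusterConf implies on the client side: load the HA settings from the given file and let the failover proxy provider handle active-namenode changes. This is not the actual DistCp option plumbing:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch only. */
class SourceClusterConfSketch {
  static FileSystem sourceFileSystem(String confFile) throws Exception {
    Configuration conf = new Configuration();
    conf.addResource(new Path(confFile)); // e.g. sourceCluster.xml above
    // fs.defaultFS in the file points at the logical HA URI
    // (hdfs://mycluster), so ConfiguredFailoverProxyProvider handles
    // namenode failover transparently during the copy.
    return FileSystem.get(conf);
  }
}
{code}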






[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-04 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Attachment: HDFS-11099.HDFS-8707.001.patch

HDFS-11099.HDFS-8707.001.patch modifies hdfs_ext_test.cc.

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch, 
> HDFS-11099.HDFS-8707.001.patch
>
>
> hdfsDNInfo is missing rack information.






[jira] [Commented] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-04 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637343#comment-15637343
 ] 

Manoj Govindassamy commented on HDFS-11058:
---

Thanks for the detailed review [~andrew.wang]. Much appreciated. 

{quote}
Our user API for referring to a FileSystem is also by URI, not by object 
reference. Yes, the user can always call getUri, but there is global state in a 
FileSystem like file handles and statistics, and it might be better to not 
share that by handing out a FileSystem object which they can poke at.
{quote}
Very valid point. Exposing the whole FS object is not a good thing. Will expose 
an array of NN URIs.

{quote}
what is the reason for using generics? getTargetFileSystem will always return a 
FileSystem right?
{quote}
The source for ViewFsMountPoint is T in the generic class {{InodeTree}}. So, to 
be in sync with the data source, I made the dependent class generic as well. 
But now that T is not going to be exposed in ViewFsMountPoint, I can remove 
the generics.

{quote}
Instead, we could have an isViewFileSystem API, and having getStatus take a 
FileSystem and throwing UnsupportedOperation if the passed FS is not a VFS...we 
should probably also name this ViewFileSystemUtil since ViewFs is the 
FileContext implementation.
{quote}
Sounds good. Will do as you suggested.

{quote}
Unresponsive FileSystem .. Let's tackle this in a separate JIRA though, and 
maybe put the behavior behind a flag.
{quote}
Sure, will track this as a new Jira.
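
A sketch of the utility shape discussed above — the names follow the review comments and are not a committed API:

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ViewFileSystem;

/** Illustrative sketch of the proposed ViewFileSystemUtil. */
final class ViewFileSystemUtilSketch {
  static boolean isViewFileSystem(FileSystem fs) {
    return fs instanceof ViewFileSystem;
  }

  static void checkViewFileSystem(FileSystem fs) {
    if (!isViewFileSystem(fs)) {
      // getStatus() and friends only make sense for a ViewFileSystem;
      // anything else is a caller error.
      throw new UnsupportedOperationException(
          "Expected ViewFileSystem, got " + fs.getClass().getName());
    }
  }
}
{code}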



> Implement 'hadoop fs -df' command for ViewFileSystem   
> ---
>
> Key: HDFS-11058
> URL: https://issues.apache.org/jira/browse/HDFS-11058
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: viewfs
> Attachments: HDFS-11058.01.patch
>
>
> The Df command doesn't seem to work well with ViewFileSystem. It always 
> reports used data as 0. Here is the client mount table configuration I am 
> using against a federated cluster of 2 NameNodes and 2 DataNodes. 
> {code}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ...
>   <property>
>     <name>fs.default.name</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ...
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn0</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn1</name>
>     <value>hdfs://127.0.0.1:51001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn2</name>
>     <value>hdfs://127.0.0.1:52001/nn2</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn3</name>
>     <value>hdfs://127.0.0.1:52001/nn3</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterY.linkMergeSlash</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
> </configuration>
> {code}
> {{Df}} command always reports Size/Available as 8.0E and the usage as 0 for 
> any federated cluster. 
> {noformat}
> # hadoop fs -fs viewfs://ClusterX/ -df  /
> Filesystem Size  UsedAvailable  Use%
> viewfs://ClusterX/  9223372036854775807 0  92233720368547758070%
> # hadoop fs -fs viewfs://ClusterX/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterX/  8.0 E 0  8.0 E0%
> # hadoop fs -fs viewfs://ClusterY/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterY/  8.0 E 0  8.0 E0%
> {noformat}
> Whereas {{Du}} command seems to work as expected even with ViewFileSystem.
> {noformat}
> # hadoop fs -fs viewfs://ClusterY/ -du -h /
> 10.6 K  31.8 K  /build.log.16y
> 0   0   /user
> # hadoop fs -fs viewfs://ClusterX/ -du -h /
> 10.6 K  31.8 K  /nn0
> 0   0   /nn1
> 20.2 K  35.8 K  /nn3
> 40.6 K  34.3 K  /nn4
> {noformat}






[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-04 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637336#comment-15637336
 ] 

Ming Ma commented on HDFS-10702:


Thanks [~zhz] for the ping. Thanks [~clouderajiayi] [~mackrorysd] for the great 
work.

Yes, it might be useful to leverage inotify, or at least to evaluate it. In this 
SbNN polling approach, I am interested in knowing more about how applications 
plan to use it, specifically when they will decide to call getSyncInfo. In a 
multi-tenant environment, an application might care about specific 
files/directories, not necessarily whether the namespace has changed at a 
global level.

Here are some comments specific to the patch.

* The standby namenode has its own checkpoint lock to reduce the checkpoint's 
impact on block reports. Thus there could be some assumption that the 
checkpointer is the only reader of the namespace on the standby. You might 
want to confirm whether there is any implication.
* In the case of multiple standbys, one is the checkpointer, so you could 
consider allowing clients to connect to the standbys that are not doing the 
checkpoint.
* If the server config "dfs.ha.allow.stale.reads" is set to false and the 
client side enables stale reads, it seems the client will still keep trying. 
I wonder if the client side should consider the server side config as well.
* Federation configuration support might need some more work. It could depend 
on how you want to enable it on the client side. The current patch is based on 
a run-time config per client instance. You could also allow defining a client 
side config like "dfs.client..ha.allow.stale.reads".
* After NN failover, does StaleReadProxyProvider#standbyProxies get refreshed? 
If not, a long-running client could keep using the old standby.
* The RPC layer is more general than HDFS, so it would be better if 
allowStandbyRead could be refactored out.


> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> scalability bottleneck. One way to solve this problem is to send read-only 
> operations to the Standby NameNode; the disadvantage is that reads might be 
> stale. Here, I'm thinking of adding a Client API to enable/disable stale 
> reads from the Standby, which gives the client the power to set the 
> staleness restriction.
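
For illustration, a sketch of how a client might opt in — the config key and the flow are placeholders based on the discussion, not a shipped API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch only; the config key below is hypothetical. */
class StaleReadSketch {
  static void example(Configuration conf) throws Exception {
    // Hypothetical client-side switch permitting reads from the standby.
    conf.setBoolean("dfs.client.ha.allow.stale.reads", true);
    try (FileSystem fs = FileSystem.get(conf)) {
      // Read-only metadata ops may now be served by a standby NN and can
      // therefore be slightly stale.
      fs.getFileStatus(new Path("/user"));
    }
  }
}
{code}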






[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637263#comment-15637263
 ] 

Hadoop QA commented on HDFS-10368:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 40 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 53s{color} | {color:orange} root: The patch generated 7 new + 1623 unchanged 
- 9 fixed = 1630 total (was 1632) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10368 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837172/HDFS-10368-00.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  xml  |
| uname | Linux c62ed277614b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0aafc12 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-04 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637256#comment-15637256
 ] 

Xiaobing Zhou commented on HDFS-11085:
--

[~liuml07] I filed a ticket HDFS-11107. This is a flaky failure, it's passed 
with and without my patch v001.

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.






[jira] [Created] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart flaky failure

2016-11-04 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-11107:


 Summary: TestStartup#testStorageBlockContentsStaleAfterNNRestart 
flaky failure
 Key: HDFS-11107
 URL: https://issues.apache.org/jira/browse/HDFS-11107
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Xiaobing Zhou
Priority: Minor


It was noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
not reproducible and passed both with and without the patch.

{noformat}
Error Message

expected:<0> but was:<2>
Stacktrace

java.lang.AssertionError: expected:<0> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
{noformat}







[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637188#comment-15637188
 ] 

Xiao Chen commented on HDFS-11045:
--

Thanks [~templedf] for the continued effort on this!! The patch is getting big. 
:)

A couple of comments:
- the modified {{DirectoryScanner#accumulateTimeRunning}} makes sense to me, 
but I found it a bit confusing at first glance. I suggest adding some comments 
to make it easier to read.
- testThrottling could extract the retries loop into a new function
- maybe +-10% itself is too restrictive? How do you feel about changing that 
to +-20% (i.e. min=0.8, max=1.2)? See the sketch after the snippet below.
- I didn't understand this comment in the test:
{code}
  // The scanner should sleep 500ms out of every 1sec. If we've slept at
  // least 200ms, we've run for at least 1600ms.
{code}
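
For concreteness, a hedged sketch of the looser bound suggested above; the class and variable names are illustrative and do not match the real test:

{code}
import static org.junit.Assert.assertTrue;

/** Illustrative tolerance check only. */
final class ThrottleToleranceSketch {
  static void assertWithinTolerance(long measuredMs, long expectedMs) {
    double ratio = (double) measuredMs / expectedMs;
    // +-20% as suggested: min=0.8, max=1.2.
    assertTrue("Throttle ratio out of bounds: " + ratio,
        ratio >= 0.8 && ratio <= 1.2);
  }
}
{code}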

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> Key: HDFS-11045
> URL: https://issues.apache.org/jira/browse/HDFS-11045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch, 
> HDFS-11045.003.patch, HDFS-11045.004.patch, HDFS-11045.005.patch, 
> HDFS-11045.006.patch, HDFS-11045.007.patch, HDFS-11045.008.patch
>
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt






[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-04 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637156#comment-15637156
 ] 

Mingliang Liu commented on HDFS-11085:
--

It's interesting {{hadoop.hdfs.server.namenode.TestStartup}} is failing. Would 
you have a look at this, [~xiaobingo]?

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.






[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637129#comment-15637129
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 106 new + 272 unchanged - 0 fixed = 378 total (was 272) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 10 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837173/HDFS-9337_15.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7fc5c94c0ae7 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0aafc12 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17432/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17432/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17432/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17432/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17432/testReport/ |

[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637096#comment-15637096
 ] 

Hadoop QA commented on HDFS-11045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 76 unchanged - 1 fixed = 79 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837166/HDFS-11045.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d05dfabbfdc9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0aafc12 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17431/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17431/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17431/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17431/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> 

[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_15.patch

Fixed the test failures, please review.
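
For context, a minimal sketch of the kind of up-front check such a fix needs (the helper and parameter names here are hypothetical, not the actual patch):

{code}
// Reject a missing required parameter early instead of letting it
// surface later as a NullPointerException deep in the handler.
private static void checkRequiredParam(String name, String value)
    throws IllegalArgumentException {
  if (value == null || value.isEmpty()) {
    throw new IllegalArgumentException(
        "Required parameter '" + name + "' is missing for this op");
  }
}

// e.g. op=RENAMESNAPSHOT needs both snapshot names to be present:
checkRequiredParam("oldsnapshotname", oldSnapshotName);
checkRequiredParam("snapshotname", snapshotName);
{code}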

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-04 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10368:

Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them (if 
> necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-04 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10368:

Attachment: HDFS-10368-00.patch

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them (if 
> necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636821#comment-15636821
 ] 

Xiao Chen commented on HDFS-10756:
--

+1 to patch 7, will commit by end of today if no objections.
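
For reference, a minimal sketch of the Java-side behaviour this jira mirrors over httpfs/webhdfs ({{conf}} and the paths are illustrative assumptions):

{code}
FileSystem fs = FileSystem.get(conf);

// Outside an encryption zone this resolves to the user's home trash,
// e.g. /user/alice/.Trash:
Path defaultTrash = fs.getTrashRoot(new Path("/user/alice/somefile"));

// For a file inside an EZ, the trash root stays inside the same zone,
// e.g. /ezdir/.Trash/alice (exact layout depends on the zone):
Path ezTrash = fs.getTrashRoot(new Path("/ezdir/somefile"));
{code}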

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, 
> HDFS-10756.003.patch, HDFS-10756.004.patch, HDFS-10756.005.patch, 
> HDFS-10756.006.patch, HDFS-10756.007.patch
>
>
> Currently, the hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine the trash directory at run time. The default trash dir is under 
> {{/user/$USER}}.
> For an encrypted file, since moving files between/into/out of EZs is not 
> allowed, when an EZ file is deleted via the CLI, it calls in to the [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or Java users who call the FileSystem 
> API. But for users going through httpfs/webhdfs, there is currently no way to 
> figure out what the trash root would be. This jira proposes we add such an 
> interface to httpfs and webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11106) libhdfs++: Some refactoring to better organize files

2016-11-04 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11106:
--

 Summary: libhdfs++: Some refactoring to better organize files
 Key: HDFS-11106
 URL: https://issues.apache.org/jira/browse/HDFS-11106
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


I propose splitting some of the files that have grown unwieldy over time into 
files that align with more specific functionality.  It's probably best to do 
this in a few pieces so it doesn't invalidate anyone's patches in progress.  
Here's what I have in mind; I'm looking for feedback on whether 1) it's not 
worth doing for some reason, or 2) it will break your patch and you'd like 
this to wait.  I'd also like to consolidate related functions, mostly protobuf 
helpers, that are spread around the library into dedicated files. 

Targets (each can be split into a separate patch):
* split hdfs.cc into hdfs.cc and hdfs_ext.cc.  We already have a separate 
hdfs_ext.h for C bindings for the libhdfs++ specific extensions, so the 
implementations of those that live in hdfs.cc would be moved out.  Just makes 
things a little cleaner.
* separate the implementation of operations from the async shim code in files 
like filesystem.cc (make a filesystem_shims.cc).  The shims are just 
boilerplate code that only needs to change if the signature of its async 
counterpart changes.
* split apart the various RPC code based on classes.  Things like Request and 
RpcConnection get defined in rpc_engine.h and then implemented in a handful of 
files, which gets confusing to navigate; e.g., why would one expect Request's 
implementation to be in rpc_connection.cc?
* move all of the protobuf<->C++ struct conversion helpers and protobuf wire 
serialization/deserialization functions into a single file.  This gives us 
fewer protobuf header includes and less accidental duplication of these sorts 
of functions.
* merge base64.cc into util.cc; base64.cc only contains a single utility 
function.
* rename hdfs_public_api.h/cc to hdfs_ioservice.h/cc.  Originally all of the 
implementation declarations of the public API classes like FileSystemImpl were 
going to live in there.  Currently only hdfs::IoServiceImpl lives in there and 
the other Impl classes have their own dedicated files.

Like any refactoring, some of it comes down to personal preference.  My hope is 
that by breaking these into smaller patches/commits, relatively fast forward 
progress can be made on the parts everyone agrees on, while the things people 
are concerned about can be worked out in a way that satisfies everyone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-04 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-11045:

Attachment: HDFS-11045.008.patch

I bumped up the number of retries.  What I'm seeing is that sometimes the 
record processing takes much longer (~200 ms) than expected, and it always 
correlates with one of these:

{noformat}
2016-11-04 15:23:43,002 [Thread-1060] WARN  impl.FsDatasetImpl 
(InstrumentedLock.java:logWarning(143)) - Lock held time above threshold: lock 
identifier: org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl 
lockHeldTimeMs=643 ms. Suppressed 0 lock warnings. The stack trace is: 
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1011)
org.apache.hadoop.util.InstrumentedLock.logWarning(InstrumentedLock.java:148)
org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:186)
org.apache.hadoop.util.InstrumentedLock.unlock(InstrumentedLock.java:133)
org.apache.hadoop.util.AutoCloseableLock.release(AutoCloseableLock.java:84)
org.apache.hadoop.util.AutoCloseableLock.close(AutoCloseableLock.java:96)
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:484)
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:384)
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.scan(TestDirectoryScanner.java:320)
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.scan(TestDirectoryScanner.java:314)
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.createBlocksForThrottleTest(TestDirectoryScanner.java:801)
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:586)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

I'm hoping that by upping the number of retries, we can maybe get to the other 
side of the hiccup and get a reasonable run.
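
Illustratively, the change amounts to wrapping the timing-sensitive check in a retry loop of this shape (the constants and helper are assumptions, not the actual patch):

{code}
static final int THROTTLE_RETRIES = 5;  // assumed value, bumped per this comment

boolean withinLimit = false;
for (int i = 0; i < THROTTLE_RETRIES && !withinLimit; i++) {
  // hypothetical helper: run one throttled scan and report the observed
  // ratio of scan time spent past the configured throttle limit
  float ratio = runThrottledScan();
  withinLimit = ratio <= MAX_ALLOWED_RATIO;  // assumed acceptance bound
}
assertTrue("Throttle is too permissive", withinLimit);
{code}

A single lock-contention hiccup then costs one retry instead of failing the whole test.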

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> Key: HDFS-11045
> URL: https://issues.apache.org/jira/browse/HDFS-11045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch, 
> HDFS-11045.003.patch, HDFS-11045.004.patch, HDFS-11045.005.patch, 
> HDFS-11045.006.patch, HDFS-11045.007.patch, HDFS-11045.008.patch
>
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636733#comment-15636733
 ] 

Hadoop QA commented on HDFS-11045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 76 unchanged - 1 fixed = 79 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837127/HDFS-11045.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f2de489ff81b 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19b3779 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17430/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17430/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17430/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17430/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636624#comment-15636624
 ] 

Hadoop QA commented on HDFS-9482:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project: The patch generated 23 new 
+ 517 unchanged - 18 fixed = 540 total (was 535) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
| JDK v1.8.0_111 Timed out junit tests | 

[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636597#comment-15636597
 ] 

Hadoop QA commented on HDFS-9482:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 20 new 
+ 827 unchanged - 22 fixed = 847 total (was 849) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
58s{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-9482 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-11105) TestRBWBlockInvalidation#testRWRInvalidation fails intermittently

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636580#comment-15636580
 ] 

Hadoop QA commented on HDFS-11105:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11105 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837124/HDFS-11105.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 302ab7ed1b98 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19b3779 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17429/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17429/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17429/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRBWBlockInvalidation#testRWRInvalidation fails intermittently
> -
>
> Key: HDFS-11105
> URL: https://issues.apache.org/jira/browse/HDFS-11105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>   

[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636557#comment-15636557
 ] 

Hadoop QA commented on HDFS-9482:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 20 new 
+ 827 unchanged - 22 fixed = 847 total (was 849) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | 

[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636427#comment-15636427
 ] 

Lantao Jin commented on HDFS-11102:
---

I think HADOOP-12358 is good, but deleting multiple directories that include a 
trash directory, or deleting files inside the trash with the "rm" command, is 
still not safe. Limiting the threshold alone is not enough.

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS, I have seen lots of cases where users deleted their data 
> by mistake. Most of it could be recovered from the trash, but the rest was 
> not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop knows the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's good so far.
> But the purpose is to delete some data to save more space. So the user 
> begins to delete it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by "copy-paste" and it included some space or 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will be deleted. *That means the directory 
> /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system to help people avoid this 
> mistake.*
> If you also agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-04 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-11045:

Attachment: HDFS-11045.007.patch

This patch adds a few minor improvements.

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> Key: HDFS-11045
> URL: https://issues.apache.org/jira/browse/HDFS-11045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch, 
> HDFS-11045.003.patch, HDFS-11045.004.patch, HDFS-11045.005.patch, 
> HDFS-11045.006.patch, HDFS-11045.007.patch
>
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636395#comment-15636395
 ] 

Lantao Jin commented on HDFS-11102:
---

Yes, [~yuanbo], if a user tries to delete something in the trash with the "rm" 
command, it's better to throw an exception and remind the user to add a 
"-trash" option to delete dirs in the trash safely.
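
Roughly, the proposed guard could sit in the shell's delete path like this (a sketch only; the {{-trash}} wiring, the {{isInsideTrash}} helper, and the message are assumptions, not a real patch):

{code}
// Illustrative check before an "rm" is executed: block paths under a
// .Trash directory unless the proposed -trash option was given.
if (isInsideTrash(item.path) && !cf.getOpt("trash")) {
  throw new IOException("rm: " + item.path
      + " is inside the trash; re-run with -trash to delete it permanently");
}

// hypothetical helper: is any path component named ".Trash"?
private static boolean isInsideTrash(Path p) {
  for (Path cur = p; cur != null; cur = cur.getParent()) {
    if (".Trash".equals(cur.getName())) {
      return true;
    }
  }
  return false;
}
{code}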

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS, I have seen lots of cases where users deleted their data 
> by mistake. Most of it could be recovered from the trash, but the rest was 
> not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop knows the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's good so far.
> But the purpose is to delete some data to save more space. So the user 
> begins to delete it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by "copy-paste" and it included some space or 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will be deleted. *That means the directory 
> /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system to help people avoid this 
> mistake.*
> If you also agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11105) TestRBWBlockInvalidation#testRWRInvalidation fails intermittently

2016-11-04 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11105:
-
Attachment: HDFS-11105.001.patch

> TestRBWBlockInvalidation#testRWRInvalidation fails intermittently
> -
>
> Key: HDFS-11105
> URL: https://issues.apache.org/jira/browse/HDFS-11105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11105.001.patch
>
>
> The test {{TestRBWBlockInvalidation#testRWRInvalidation}} fails 
> intermittently. The stack trace:
> {code}
> org.junit.ComparisonFailure: expected:<[old gs data
> new gs data
> ]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testRWRInvalidation(TestRBWBlockInvalidation.java:225)
> {code}
> The issue is caused by the block reports not having completed yet, similar to 
> HDFS-10499.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11105) TestRBWBlockInvalidation#testRWRInvalidation fails intermittently

2016-11-04 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11105:
-
Status: Patch Available  (was: Open)

Attached an initial patch; it uses {{GenericTestUtils.waitFor}} to make the test more robust.
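
For illustration, the shape of the fix (the predicate body and names like {{fs}} and {{testPath}} are assumptions; the {{Supplier}} is the Guava one this utility takes):

{code}
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    try {
      // hypothetical check: both generations of data are visible
      return "old gs data\nnew gs data\n"
          .equals(DFSTestUtil.readFile(fs, testPath));
    } catch (IOException e) {
      return false;  // not there yet; keep polling
    }
  }
}, 100, 10000);  // re-check every 100 ms, time out after 10 s
{code}

Polling for the expected state instead of asserting right after the restart removes the dependence on block report timing.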

> TestRBWBlockInvalidation#testRWRInvalidation fails intermittently
> -
>
> Key: HDFS-11105
> URL: https://issues.apache.org/jira/browse/HDFS-11105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> The test {{TestRBWBlockInvalidation#testRWRInvalidation}} fails 
> intermittently. The stack trace:
> {code}
> org.junit.ComparisonFailure: expected:<[old gs data
> new gs data
> ]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testRWRInvalidation(TestRBWBlockInvalidation.java:225)
> {code}
> The issue is caused by the block reports not having completed yet, similar to 
> HDFS-10499.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11105) TestRBWBlockInvalidation#testRWRInvalidation fails intermittently

2016-11-04 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11105:


 Summary: TestRBWBlockInvalidation#testRWRInvalidation fails 
intermittently
 Key: HDFS-11105
 URL: https://issues.apache.org/jira/browse/HDFS-11105
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
Reporter: Yiqun Lin
Assignee: Yiqun Lin


The test {{TestRBWBlockInvalidation#testRWRInvalidation}} fails 
intermittently. The stack trace:
{code}
org.junit.ComparisonFailure: expected:<[old gs data
new gs data
]> but was:<[]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testRWRInvalidation(TestRBWBlockInvalidation.java:225)
{code}
The issue is caused by the block reports not having completed yet, similar to HDFS-10499.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636202#comment-15636202
 ] 

Brahma Reddy Battula commented on HDFS-9482:


The test failure is unrelated and the {{checkstyle}} warnings can be ignored. 
Uploaded patches for branch-2 (just removed the EC-related classes) and 
branch-2.8 (as HDFS-9371 is not merged there, one constructor needs to stay 
public).
[~arpitagarwal], can you please review?
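
For reference, builder-style construction would look roughly like this (the builder and setter names are assumptions, not necessarily the patch's exact API):

{code}
// Instead of picking among several telescoping constructors:
DatanodeInfo dn = new DatanodeInfo.DatanodeInfoBuilder()  // assumed builder name
    .setNodeID(datanodeId)
    .setCapacity(capacity)
    .setDfsUsed(dfsUsed)
    .setRemaining(remaining)
    .build();
{code}

Callers then set only the fields they care about, and adding a field no longer multiplies constructor overloads.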

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9482:
---
Attachment: HDFS-9482-branch-2.8.patch

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9482:
---
Attachment: HDFS-9482-branch-2.patch

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635871#comment-15635871
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 204 unchanged - 0 fixed = 205 total (was 204) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837073/HDFS-9337_14.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8c999ed9134f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17425/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17425/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17425/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17425/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 

[jira] [Updated] (HDFS-11104) BlockPlacementPolicyDefault chooses favoredNodes in order, which may cause imbalance

2016-11-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11104:

Description: 
If a client passes favoredNodes when it writes files into HDFS, chooseTarget 
in BlockPlacementPolicyDefault picks the favored nodes strictly in order:
{code}
DatanodeStorageInfo[] chooseTarget(String src,
    int numOfReplicas,
    Node writer,
    Set<Node> excludedNodes,
    long blocksize,
    List<DatanodeDescriptor> favoredNodes,
    BlockStoragePolicy storagePolicy) {
  try {
    ...
    for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; i++) {
      DatanodeDescriptor favoredNode = favoredNodes.get(i);
      // Choose a single node which is local to favoredNode.
      // 'results' is updated within chooseLocalNode
      final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
          favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
          results, avoidStaleNodes, storageTypes, false);
      ...
{code}
Why not shuffle the favored nodes here? It would make block placement more 
balanced, save the cost the balancer would otherwise pay, and make the 
cluster more stable:
{code}
for (DatanodeDescriptor favoredNode : DFSUtil.shuffle(
    favoredNodes.toArray(new DatanodeDescriptor[favoredNodes.size()])))
{code}

  was:
If a client passes favoredNodes when it writes files into HDFS, chooseTarget 
in BlockPlacementPolicyDefault picks the favored nodes strictly in order:
{code}
DatanodeStorageInfo[] chooseTarget(String src,
    int numOfReplicas,
    Node writer,
    Set<Node> excludedNodes,
    long blocksize,
    List<DatanodeDescriptor> favoredNodes,
    BlockStoragePolicy storagePolicy) {
  try {
    ...
    for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; i++) {
      DatanodeDescriptor favoredNode = favoredNodes.get(i);
      // Choose a single node which is local to favoredNode.
      // 'results' is updated within chooseLocalNode
      final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
          favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
          results, avoidStaleNodes, storageTypes, false);
      ...
{code}
Why not shuffle it?
{code}
for (DatanodeDescriptor favoredNode : DFSUtil.shuffle(
    favoredNodes.toArray(new DatanodeDescriptor[favoredNodes.size()])))
{code}


> BlockPlacementPolicyDefault chooses favoredNodes in order, which may cause 
> imbalance
> -
>
> Key: HDFS-11104
> URL: https://issues.apache.org/jira/browse/HDFS-11104
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Doris Gu
>
> If a client passes favoredNodes when it writes files into HDFS, chooseTarget 
> in BlockPlacementPolicyDefault picks the favored nodes strictly in order:
> {code}
> DatanodeStorageInfo[] chooseTarget(String src,
>     int numOfReplicas,
>     Node writer,
>     Set<Node> excludedNodes,
>     long blocksize,
>     List<DatanodeDescriptor> favoredNodes,
>     BlockStoragePolicy storagePolicy) {
>   try {
>     ...
>     for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; i++) {
>       DatanodeDescriptor favoredNode = favoredNodes.get(i);
>       // Choose a single node which is local to favoredNode.
>       // 'results' is updated within chooseLocalNode
>       final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
>           favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
>           results, avoidStaleNodes, storageTypes, false);
>       ...
> {code}
> Why not shuffle the favored nodes here? It would make block placement more 
> balanced, save the cost the balancer would otherwise pay, and make the 
> cluster more stable:
> {code}
> for (DatanodeDescriptor favoredNode : DFSUtil.shuffle(
>     favoredNodes.toArray(new DatanodeDescriptor[favoredNodes.size()])))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11104) BlockPlacementPolicyDefault chooses favoredNodes in order, which may cause imbalance

2016-11-04 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11104:
---

 Summary: BlockPlacementPolicyDefault chooses favoredNodes in order, 
which may cause imbalance
 Key: HDFS-11104
 URL: https://issues.apache.org/jira/browse/HDFS-11104
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Doris Gu


If a client passes favoredNodes when it writes files into HDFS, chooseTarget 
in BlockPlacementPolicyDefault picks the favored nodes strictly in order:
{code}
DatanodeStorageInfo[] chooseTarget(String src,
    int numOfReplicas,
    Node writer,
    Set<Node> excludedNodes,
    long blocksize,
    List<DatanodeDescriptor> favoredNodes,
    BlockStoragePolicy storagePolicy) {
  try {
    ...
    for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; i++) {
      DatanodeDescriptor favoredNode = favoredNodes.get(i);
      // Choose a single node which is local to favoredNode.
      // 'results' is updated within chooseLocalNode
      final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
          favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
          results, avoidStaleNodes, storageTypes, false);
      ...
{code}
Why not shuffle it?
{code}
for (DatanodeDescriptor favoredNode : DFSUtil.shuffle(
    favoredNodes.toArray(new DatanodeDescriptor[favoredNodes.size()])))
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635795#comment-15635795
 ] 

Hadoop QA commented on HDFS-9482:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 20 new 
+ 855 unchanged - 22 fixed = 875 total (was 877) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9482 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837068/HDFS-9482-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 93ff5c6074a0 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17424/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17424/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17424/testReport/ |
| modules | C: 

[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_14.patch

Thanks [~vinayrpet] for the review. Updated the patch to cover all mandatory 
params for the different ops. Please review.
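
The general shape of the check could be something like this (a hypothetical 
helper for illustration; the attached patch defines the actual method and 
call sites):
{code}
import org.apache.hadoop.hdfs.web.resources.HttpOpParam;
import org.apache.hadoop.hdfs.web.resources.Param;

// Fail fast with a clear message instead of an NPE deeper in the
// handler when a mandatory query parameter was not supplied.
private static void validateRequiredParams(HttpOpParam.Op op,
    Param<?, ?>... required) {
  for (Param<?, ?> param : required) {
    if (param.getValue() == null) {
      throw new IllegalArgumentException("Required parameter '"
          + param.getName() + "' is missing for op " + op);
    }
  }
}
{code}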

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, HDFS-9337_14.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635649#comment-15635649
 ] 

Yuanbo Liu commented on HDFS-11102:
---

Prompting for confirmation before deleting could be annoying and incompatible.
I'd prefer implementing a command like "hadoop fs -clearTrash" to clear the 
trash safely.
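
Such a command could resolve the trash root itself, so a stray space could 
never widen the delete (a rough sketch; {{getTrashRoot}} is a real 
{{FileSystem}} API, but the command wiring is illustrative only):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// The only path this can ever delete is the caller's own trash root,
// so there is no path argument for a typo to corrupt.
void clearTrash(FileSystem fs) throws IOException {
  Path trashRoot = fs.getTrashRoot(fs.getHomeDirectory());
  if (fs.exists(trashRoot)) {
    fs.delete(trashRoot, true);
  }
}
{code}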

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635632#comment-15635632
 ] 

Hadoop QA commented on HDFS-11103:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 4077 ASF License warnings. {color} 
|
| {color:black}{color} | {color:black} {color} | {color:black} 90m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837059/HDFS-11103-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 579e483b94fb 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / eb8f2b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17423/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17423/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17423/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17423/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Cleanup some dependencies
> 

[jira] [Commented] (HDFS-10994) Support an XOR policy XOR-2-1-64k in HDFS

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635594#comment-15635594
 ] 

Hadoop QA commented on HDFS-10994:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 38s{color} | {color:orange} root: The patch generated 27 new + 200 unchanged 
- 6 fixed = 227 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10994 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837051/HDFS-10994-v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 21fee02a188b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17422/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Senthilkumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635565#comment-15635565
 ] 

Senthilkumar commented on HDFS-11102:
-

Seems like a good improvement to the delete API. +1.

It would be good to get a confirmation before removing, as below:

"
Below directories will be deleted permanently:
/user/someone/.Trash with 423 files and 24MB

Are you sure you want to do that? [Y/N]
"


> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635543#comment-15635543
 ] 

Brahma Reddy Battula commented on HDFS-9482:


Thanks a lot, [~arpitagarwal]. Uploaded the patch to address all the above 
comments.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11100) Recursively deleting directory containing file protected by sticky bit should fail

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11100:
--
Summary: Recursively deleting directory containing file protected by sticky 
bit should fail  (was: Recursively deleting directory with file protected by 
sticky bit should fail)

> Recursively deleting directory containing file protected by sticky bit should 
> fail
> --
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> Recursively deleting a directory that contains files or directories protected 
> by sticky bit should fail but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11100) Recursively deleting directory with file protected by sticky bit should fail

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11100:
--
Hadoop Flags: Incompatible change

> Recursively deleting directory with file protected by sticky bit should fail
> 
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> Recursively deleting a directory that contains files or directories protected 
> by sticky bit should fail but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11100) Recursively deleting directory with file protected by sticky bit should fail

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635538#comment-15635538
 ] 

John Zhuge commented on HDFS-11100:
---

{{checkStickyBit}} is only called on the current inode during a recursive 
delete; that is {{/tmp/test/sbit}} in the example. Maybe we should check the 
sticky bit recursively in the subtree? It would be an incompatible change.
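
Roughly, the subtree check could look like the following (a hypothetical 
sketch inside {{FSPermissionChecker}}; {{isSuperUser}}/{{isOwner}} stand in 
for the ownership checks it already performs):
{code}
// Apply the same sticky-bit rule that checkStickyBit applies to the
// top-level parent today, but to every directory in the subtree.
private void checkStickyBitRecursively(INode inode)
    throws AccessControlException {
  if (!inode.isDirectory()) {
    return;
  }
  final INodeDirectory dir = inode.asDirectory();
  final boolean sticky = dir.getFsPermission().getStickyBit();
  for (INode child : dir.getChildrenList(Snapshot.CURRENT_STATE_ID)) {
    if (sticky && !isSuperUser() && !isOwner(dir) && !isOwner(child)) {
      throw new AccessControlException(
          "Permission denied by sticky bit: " + child.getFullPathName());
    }
    checkStickyBitRecursively(child);
  }
}
{code}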



> Recursively deleting directory with file protected by sticky bit should fail
> 
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> Recursively deleting a directory that contains files or directories protected 
> by sticky bit should fail but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by sticky bit, thus recursive 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9482:
---
Attachment: HDFS-9482-003.patch

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635520#comment-15635520
 ] 

Xiaoyu Yao commented on HDFS-11102:
---

Thanks [~cltlfcjin] for reporting the issue. Please check the -safely delete 
option added by HADOOP-12358 to see if that helps your use cases.
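
For reference, the HADOOP-12358 option is used like below; the prompt only 
kicks in when the directory holds more files than the configured limit (the 
key should be {{hadoop.shell.safely.delete.limit.num.files}}):
{code}
hadoop fs -rm -r -safely /user/someone/.Trash
{code}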

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] Lantao Jin shared "HDFS-11102: Deleting .Trash without -skipTrash should be confirmed" with you

2016-11-04 Thread Lantao Jin (JIRA)
Lantao Jin shared an issue with you
---



> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.

 Also shared with
  u...@hadoop.apache.org
  hdfs-...@hadoop.apache.org



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635449#comment-15635449
 ] 

Lantao Jin edited comment on HDFS-11102 at 11/4/16 6:45 AM:


The simplest double check is to display the file count and used capacity of 
the directory being deleted; for the example mentioned above, that is:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 172483232 files and 9.4PB

Are you sure you want to do that? [Y/N]
{code}
If the user uses the right command, it will display:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 423 files and 24MB

Are you sure you want to do that? [Y/N]
{code}

Or just forbid deleting multiple directories when one of them is a trash 
directory:
{code}
Cannot delete multiple directories including a trash directory! The command 
will not be executed.
{code}

If users really want to delete with a one-line command, they can use 
"-skipTrash".


was (Author: cltlfcjin):
The simplest double check is to display the file count and used capacity of 
the directory being deleted; for the example mentioned above, that is:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 172483232 files and 9.4PB

Are you sure you want to do that? [Y/N]
{code}
If the user uses the right command, it will display:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 423 files and 24MB

Are you sure you want to do that? [Y/N]
{code}

Or just forbid deleting multiple directories when one of them is a trash 
directory:
{code}
Cannot delete multiple directories including a trash directory! The command 
will not be executed.
{code}

All the cases I mentioned are unrelated to -skipTrash.

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11103:

Status: Patch Available  (was: Open)

> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch
>
>
> Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635449#comment-15635449
 ] 

Lantao Jin commented on HDFS-11102:
---

The simplest double check is to display the file count and used capacity of 
the directory being deleted; for the example mentioned above, that is:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 172483232 files and 9.4PB

Are you sure you want to do that? [Y/N]
{code}
If the user uses the right command, it will display:
{code}
Below directories will be deleted permanently:
/user/someone/.Trash with 423 files and 24MB

Are you sure you want to do that? [Y/N]
{code}

Or just forbid deleting multiple directories when one of them is a trash 
directory:
{code}
Cannot delete multiple directories including a trash directory! The command 
will not be executed.
{code}

All the cases I mentioned are unrelated to -skipTrash.

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DEVOPS engineer, I have seen lots of cases where users deleted 
> their data by mistake. Most of it could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess the user's purpose, but a good system should help users 
> avoid their mistakes.
> There is a very common case like this:
> If a user wants to delete some dir from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the option "-skipTrash" can be 
> attached. That's the design, and Hadoop understands the user's purpose well.
> Usually, users don't use "-skipTrash", for safety. That's all good so far.
> But the purpose is to delete some data to free up space, so the user then 
> deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason for not just deleting 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that by now 
> the user only knows that pathToBeDelete is somewhere in the trash directory.
> The trash, including pathToBeDelete, will be deleted permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command by copy-paste, including some space or an 
> invisible char, the whole /user/someone directory and the whole 
> /user/someone/.Trash will unfortunately be deleted. *Jesus, that means the 
> directory /user/someone is deleted permanently and unexpectedly!*
> So I think *any ".Trash" appearing in an "rm" command without "-skipTrash" 
> should trigger a double check by the system, to help people avoid this 
> mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11103:

Attachment: HDFS-11103-HDFS-7240.001.patch

> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch
>
>
> Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11103:
---

 Summary: Ozone: Cleanup some dependencies
 Key: HDFS-11103
 URL: https://issues.apache.org/jira/browse/HDFS-11103
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Priority: Trivial
 Fix For: HDFS-7240


Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-11103:
---

Assignee: Anu Engineer

> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
>
> Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11102) Deleting .Trash without -skipTrash should be confirmed

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11102:
--
Summary: Deleting .Trash without -skipTrash should be confirmed  (was: 
Delete .Trash using command without -skipTrash should be confirmed by re-typing 
like Y/N)

> Deleting .Trash without -skipTrash should be confirmed
> --
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DevOps engineer, I have seen many cases where users deleted their 
> data by mistake. Most of the data could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> Here is a very common case:
> If a user wants to delete some directory from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the "-skipTrash" option can be 
> added. That is the design, and Hadoop understands the user's purpose well.
> Usually, users do not use "-skipTrash", for safety reasons. So far so good.
> But suppose the purpose is to delete some data to free up space. The user 
> then deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason they do not simply delete 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that the user 
> knows pathToBeDelete is the only thing in the trash directory right now, so 
> deleting the trash, which includes pathToBeDelete, removes it permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command via copy-paste and it picked up a space or an 
> invisible character, the whole /user/someone directory, along with 
> /user/someone/.Trash, will be deleted. *Jesus, that means the directory 
> /user/someone is deleted permanently and unexpectedly!*
> So I think *any "rm" command whose arguments mention ".Trash" without 
> "-skipTrash" should trigger a double check by the system, to help people 
> avoid this mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11102) Delete .Trash using command without -skipTrash should be confirmed by re-typing like Y/N

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11102:
--
Summary: Delete .Trash using command without -skipTrash should be confirmed 
by re-typing like Y/N  (was: Delete .Trash using command without -shipTrash 
should be confirmed by re-typing like Y/N)

> Delete .Trash using command without -skipTrash should be confirmed by 
> re-typing like Y/N
> 
>
> Key: HDFS-11102
> URL: https://issues.apache.org/jira/browse/HDFS-11102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As a Hadoop DevOps engineer, I have seen many cases where users deleted their 
> data by mistake. Most of the data could be recovered from the trash, but the 
> rest was not so lucky.
> A system can't guess a user's purpose, but a good system should help users 
> avoid their mistakes.
> Here is a very common case:
> If a user wants to delete some directory from HDFS, they may use:
> {code}
> hadoop fs -rm -r /user/someone/pathToBeDelete
> {code}
> The directory /user/someone/pathToBeDelete will move into 
> {code}
> /user/someone/.Trash/current/user/someone/pathToBeDelete
> {code}
> If the user wants to delete it permanently, the "-skipTrash" option can be 
> added. That is the design, and Hadoop understands the user's purpose well.
> Usually, users do not use "-skipTrash", for safety reasons. So far so good.
> But suppose the purpose is to delete some data to free up space. The user 
> then deletes it from the trash with the command below:
> {code}
> hadoop fs -rm -r /user/someone/ .Trash
> {code}
> The reason they do not simply delete 
> "/user/someone/.Trash/current/user/someone/pathToBeDelete" is that the user 
> knows pathToBeDelete is the only thing in the trash directory right now, so 
> deleting the trash, which includes pathToBeDelete, removes it permanently.
> *But wait! Do you see the blank space before the dot?*
> If you typed this command via copy-paste and it picked up a space or an 
> invisible character, the whole /user/someone directory, along with 
> /user/someone/.Trash, will be deleted. *Jesus, that means the directory 
> /user/someone is deleted permanently and unexpectedly!*
> So I think *any "rm" command whose arguments mention ".Trash" without 
> "-skipTrash" should trigger a double check by the system, to help people 
> avoid this mistake.*
> If you agree with this design, I will offer a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11102) Delete .Trash using command without -shipTrash should be confirmed by re-typing like Y/N

2016-11-04 Thread Lantao Jin (JIRA)
Lantao Jin created HDFS-11102:
-

 Summary: Delete .Trash using command without -shipTrash should be 
confirmed by re-typing like Y/N
 Key: HDFS-11102
 URL: https://issues.apache.org/jira/browse/HDFS-11102
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Lantao Jin


As a Hadoop DevOps engineer, I have seen many cases where users deleted their 
data by mistake. Most of the data could be recovered from the trash, but the 
rest was not so lucky.

A system can't guess a user's purpose, but a good system should help users 
avoid their mistakes.
Here is a very common case:
If a user wants to delete some directory from HDFS, they may use:
{code}
hadoop fs -rm -r /user/someone/pathToBeDelete
{code}
The directory /user/someone/pathToBeDelete will move into 
{code}
/user/someone/.Trash/current/user/someone/pathToBeDelete
{code}
If the user wants to delete it permanently, the "-skipTrash" option can be 
added. That is the design, and Hadoop understands the user's purpose well.
Usually, users do not use "-skipTrash", for safety reasons. So far so good.

But suppose the purpose is to delete some data to free up space. The user then 
deletes it from the trash with the command below:
{code}
hadoop fs -rm -r /user/someone/ .Trash
{code}
The reason they do not simply delete 
"/user/someone/.Trash/current/user/someone/pathToBeDelete" is that the user 
knows pathToBeDelete is the only thing in the trash directory right now, so 
deleting the trash, which includes pathToBeDelete, removes it permanently.

*But wait! Do you see the blank space before the dot?*
If you typed this command via copy-paste and it picked up a space or an 
invisible character, the whole /user/someone directory, along with 
/user/someone/.Trash, will be deleted. *Jesus, that means the directory 
/user/someone is deleted permanently and unexpectedly!*

So I think *any "rm" command whose arguments mention ".Trash" without 
"-skipTrash" should trigger a double check by the system, to help people avoid 
this mistake.*

If you agree with this design, I will offer a patch.
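As a rough illustration of the proposed double check, here is a minimal 
pre-flight sketch (TrashGuard and touchesTrash are hypothetical names, not 
part of FsShell; only the detection step is shown, not the confirmation 
itself):
{code}
// Hypothetical pre-flight check: scan every target path of an rm command
// and report whether any of them contains a ".Trash" path component, so
// the shell can demand confirmation (or refuse) before deleting.
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.fs.Path;

class TrashGuard {
  static boolean touchesTrash(List<Path> targets) {
    for (Path p : targets) {
      // Split the path into components and look for a ".Trash" segment, so
      // /user/someone/.Trash and /user/someone/.Trash/current both match.
      for (String component : p.toUri().getPath().split("/")) {
        if (".Trash".equals(component)) {
          return true;
        }
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // e.g. the accidental two-argument form from the description above:
    List<Path> targets = Arrays.asList(
        new Path("/user/someone/"), new Path(".Trash"));
    System.out.println(touchesTrash(targets)); // prints: true
  }
}
{code}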



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-11101:
---

Assignee: Brahma Reddy Battula

> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-04 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635376#comment-15635376
 ] 

Vinayakumar B commented on HDFS-9337:
-

According to the 
[doc|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File],
 there are many other commands for which various params are mandatory, even 
though they do not currently throw an NPE.
In the doc, check the examples given: params without square brackets are 
mandatory. They can all be validated.

Please update the patch to cover all those mandatory params for the different 
ops.
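A minimal sketch of the kind of fail-fast validation being asked for, with 
hypothetical names (validateRequired is illustrative, not the helper the patch 
actually introduces):
{code}
// Hypothetical fail-fast check: reject the request when a mandatory
// parameter is absent, instead of letting the null value surface later
// as a NullPointerException deep inside the handler.
final class RequiredParamCheck {
  static void validateRequired(String op, String paramName, String value) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required param '" + paramName + "' is missing for op=" + op);
    }
  }

  public static void main(String[] args) {
    // e.g. the request in the description below is missing oldsnapshotname:
    try {
      validateRequired("RENAMESNAPSHOT", "oldsnapshotname", null);
    } catch (IllegalArgumentException e) {
      // prints: Required param 'oldsnapshotname' is missing for op=RENAMESNAPSHOT
      System.out.println(e.getMessage());
    }
  }
}
{code}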

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org