[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=475135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475135
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 27/Aug/20 04:11
Start Date: 27/Aug/20 04:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-681369503


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  0s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  8s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 22s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 22s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 20s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   1m 48s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-hdfs-client in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   3m 18s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-hdfs-client in trunk failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-common in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javadoc  |   0m 21s |  hadoop-hdfs in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javadoc  |   0m 21s |  hadoop-hdfs-client in trunk failed with 
JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +0 :ok: |  spotbugs  |   6m 37s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 21s |  hadoop-common in trunk failed.  |
   | -1 :x: |  findbugs  |   0m 23s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  findbugs  |   0m 22s |  hadoop-hdfs-client in trunk failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 22s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 22s |  hadoop-hdfs-client in the patch 
failed.  |
   | -1 :x: |  compile  |   0m 21s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  cc  |   0m 21s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 21s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 22s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  cc  |   0m 22s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 22s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 20s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-hdfs-client in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |   0m 22s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-common in the patch failed with 
JDK 

[jira] [Updated] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15510:

Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~hemanthboyina].

Thanks for your review and discussion, [~elgoiri], [~brahmareddy], [~ayushtkn].

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Fix For: 3.4.0
>
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, 
> HDFS-15510.003.patch, HDFS-15510.004.patch, HDFS-15510.005.patch, 
> HDFS-15510.006.patch
>
>
> Steps:
> *) Create a mount entry with multiple destinations (say, 2).
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin command; 
> the Content Summary on the mount entry then shows the NS quota as 20.
> *) Create 10 files through the router; on creating the 11th file, an NS Quota 
> Exceeded exception is thrown.
> Although the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files.
>  
> The problem is that the router stores the mount entry's NS quota as 10, but 
> applies an NS quota of 10 to each of the name services. The content summary 
> on the mount entry then aggregates the content summaries of both name 
> services, reporting the NS quota as 20.
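The mismatch described above can be sketched in a few lines (hypothetical function names, not the actual Router code): the router pushes the same NS quota of 10 to each of the two destination name services, and the mount entry's content summary then sums the per-nameservice values, reporting 20 even though either name service enforces 10.

```python
# Illustrative sketch of the quota aggregation bug described above.
# Hypothetical function names; this is not the actual Router code.

def apply_mount_quota(ns_quota, destinations):
    """The router sets the same NS quota on every destination name service."""
    return {dest: ns_quota for dest in destinations}

def content_summary_quota(per_ns_quota):
    """Content Summary on the mount entry sums the per-nameservice quotas."""
    return sum(per_ns_quota.values())

quotas = apply_mount_quota(10, ["ns0", "ns1"])
print(content_summary_quota(quotas))  # reports 20
print(max(quotas.values()))           # but each name service enforces 10
```

The fix in the patch would have to either store and report the mount entry's own quota instead of the aggregate, or divide the quota across destinations; the sketch only shows why naive aggregation doubles the reported value.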



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185570#comment-17185570
 ] 

Lisheng Sun edited comment on HDFS-15267 at 8/27/20, 2:55 AM:
--

[~ayushtkn] 

From the code's point of view, HttpFSFileSystem acts as a proxy service that 
forwards each request to the DistributedFileSystem. Every request to 
HttpFSFileSystem therefore also issues a request to DistributedFileSystem, so 
the counts you see are doubled.


was (Author: leosun08):
From the code's point of view, HttpFSFileSystem acts as a proxy service that 
forwards each request to the DistributedFileSystem. Every request to 
HttpFSFileSystem therefore also issues a request to DistributedFileSystem, so 
the counts you see are doubled.

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch, HDFS-15267.002.patch
>
>
> As of now, there is no count of ops maintained for HttpFSFileSystem, unlike 
> DistributedFileSystem & WebHdfsFileSystem.






[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185570#comment-17185570
 ] 

Lisheng Sun commented on HDFS-15267:


From the code's point of view, HttpFSFileSystem acts as a proxy service that 
forwards each request to the DistributedFileSystem. Every request to 
HttpFSFileSystem therefore also issues a request to DistributedFileSystem, so 
the counts you see are doubled.







[jira] [Commented] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185532#comment-17185532
 ] 

Fei Hui commented on HDFS-15540:


[~sodonnell] Good catch! It looks good!

> Directories protected from delete can still be moved to the trash
> -
>
> Key: HDFS-15540
> URL: https://issues.apache.org/jira/browse/HDFS-15540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15540.001.patch
>
>
> With HDFS-8983, HDFS-14802 and HDFS-15243 we are able to list protected 
> directories which cannot be deleted or renamed, provided the following is set:
> fs.protected.directories: 
> dfs.protected.subdirectories.enable: true
> Testing this feature out, I can see it mostly works fine, but protected 
> non-empty folders can still be moved to the trash. In this example 
> /dir/protected is set in fs.protected.directories, and 
> dfs.protected.subdirectories.enable is true.
> {code}
> hadoop fs -ls -R /dir
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir1
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir1/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir2
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir2/file1
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected/subdir1
> rm: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected/subdir1 
> /dir/protected/subdir1-moved
> mv: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> ** ALL GOOD SO FAR **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected/subdir1
> 2020-08-26 16:54:32,404 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected/subdir1' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected/subdir1
> ** It moved the protected sub-dir to the trash, where it will be deleted **
> ** Checking the top level dir, it is the same **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected 
> rm: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected /dir/protected-new
> mv: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected 
> 2020-08-26 16:55:32,402 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected1598460932388
> {code}
> The reason for this seems to be that "move to trash" uses a different rename 
> method in FSNamesystem and FSDirRenameOp, which avoids the 
> DFSUtil.checkProtectedDescendants(...) check added in the earlier Jiras.
> I believe that "move to trash" should be protected in the same way as a 
> -skipTrash delete.
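The guard implied above can be sketched as a single path-prefix check shared by every rename path, so that a trash move hits the same refusal as a -skipTrash delete (a hypothetical Python helper, not the actual DFSUtil/FSDirRenameOp code):

```python
# Minimal sketch of a protected-descendant check applied to every rename,
# including the move into .Trash. Hypothetical helper; the real logic is
# in DFSUtil.checkProtectedDescendants(...).

PROTECTED_DIRS = {"/dir/protected"}      # fs.protected.directories
SUBDIR_PROTECTION_ENABLED = True         # dfs.protected.subdirectories.enable

def is_protected(path):
    """A path is protected if it is a configured directory or, when
    subdirectory protection is enabled, lies beneath one."""
    for protected in PROTECTED_DIRS:
        if path == protected:
            return True
        if SUBDIR_PROTECTION_ENABLED and path.startswith(protected + "/"):
            return True
    return False

def rename(src, dst):
    """Refuse to rename protected sources; a trash move is just a rename,
    so it must pass through the same check as a -skipTrash delete."""
    if is_protected(src):
        raise PermissionError(
            "Cannot delete/rename subdirectory under protected "
            "subdirectory " + src)
    return dst

try:
    rename("/dir/protected/subdir1",
           "/user/hdfs/.Trash/Current/dir/protected/subdir1")
except PermissionError as exc:
    print(exc)
```

The bug report amounts to this check being wired into one rename entry point but not the one TrashPolicyDefault uses.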






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185502#comment-17185502
 ] 

Takanobu Asanuma commented on HDFS-15510:
-

+1.







[jira] [Commented] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185498#comment-17185498
 ] 

Hadoop QA commented on HDFS-15540:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185402#comment-17185402
 ] 

Íñigo Goiri commented on HDFS-15510:


+1 on  [^HDFS-15510.006.patch].







[jira] [Commented] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185390#comment-17185390
 ] 

Stephen O'Donnell commented on HDFS-15540:
--

[~ferhui] you worked on HDFS-14802  - would you like to check this one too?







[jira] [Updated] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15540:
-
Attachment: HDFS-15540.001.patch







[jira] [Updated] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15540:
-
Status: Patch Available  (was: Open)







[jira] [Updated] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2020-08-26 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14096:
---
Fix Version/s: 3.2.2

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14096-01.patch, HDFS-14096-02.patch, 
> HDFS-14096-03.patch, HDFS-14096-04.patch
>
>
> Add support for SPS in ViewFileSystem.






[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185387#comment-17185387
 ] 

Ayush Saxena commented on HDFS-15267:
-

Thanx [~leosun08]
Can you introduce a test as well?
I don't remember the details exactly, but this had issues that I discovered 
during testing. I think the calls get delegated to {{DistributedFileSystem}}, 
which already has statistics implemented, so the counts were actually there; 
with this change they were getting doubled, which is why I didn't follow it up. 
Please give it a check.

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch, HDFS-15267.002.patch
>
>
> As of now, there is no count of ops maintained for HttpFSFileSystem, unlike 
> DistributedFileSystem & WebHdfsFileSystem






[jira] [Commented] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185386#comment-17185386
 ] 

Stephen O'Donnell commented on HDFS-15540:
--

I will post a patch for this shortly. Here is the output with my patch in place:


{code:java}
// SAME AS BEFORE
[hdfs@09d47d85a75a /]$ hadoop fs -rm -r -f -skipTrash /dir/protected/subdir1
rm: Cannot delete/rename subdirectory under protected subdirectory 
/dir/protected

// MOVE TO TRASH NOW FAILS - which is what we want
[hdfs@09d47d85a75a /]$ hadoop fs -rm -r -f  /dir/protected/subdir1
rm: Failed to move to trash: hdfs://nn1/dir/protected/subdir1: Cannot 
delete/rename subdirectory under protected subdirectory /dir/protected

[hdfs@09d47d85a75a /]$ hadoop fs -rm -r -f  /dir/protected
rm: Failed to move to trash: hdfs://nn1/dir/protected: Cannot delete/rename 
non-empty protected directory /dir/protected
 {code}
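The guard that both code paths need can be sketched in isolation (plain Java with hypothetical names, not the actual NameNode code; the real logic lives in DFSUtil.checkProtectedDescendants(...) and the emptiness check on the protected directory itself, which are simplified away here):

```java
import java.util.Set;

/**
 * Simplified sketch of the "protected directory" rename/delete check.
 * A trash move is just a rename to /user/&lt;user&gt;/.Trash/..., so the same
 * check has to run on that path too for the protection to hold.
 */
public class ProtectedDirCheck {
    /** True if src is a protected directory itself or sits under one. */
    public static boolean isBlocked(String src, Set<String> protectedDirs,
                                    boolean protectSubdirs) {
        for (String p : protectedDirs) {
            if (src.equals(p)) {
                return true;              // the protected dir itself
            }
            if (protectSubdirs && src.startsWith(p + "/")) {
                return true;              // a subdirectory under a protected dir
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> prot = Set.of("/dir/protected");
        // Applying the same check to the trash rename rejects it as well:
        System.out.println(isBlocked("/dir/protected/subdir1", prot, true));
    }
}
```

The sketch deliberately applies one predicate to both entry points; the bug arises precisely because the trash path used a rename method that skipped it.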

> Directories protected from delete can still be moved to the trash
> -
>
> Key: HDFS-15540
> URL: https://issues.apache.org/jira/browse/HDFS-15540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> With HDFS-8983, HDFS-14802 and HDFS-15243 we are able to list protected 
> directories which cannot be deleted or renamed, provided the following is set:
> fs.protected.directories: 
> dfs.protected.subdirectories.enable: true
> Testing this feature out, I can see it mostly works fine, but protected 
> non-empty folders can still be moved to the trash. In this example 
> /dir/protected is set in fs.protected.directories, and 
> dfs.protected.subdirectories.enable is true.
> {code}
> hadoop fs -ls -R /dir
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir1
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir1/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir2
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir2/file1
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected/subdir1
> rm: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected/subdir1 
> /dir/protected/subdir1-moved
> mv: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> ** ALL GOOD SO FAR **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected/subdir1
> 2020-08-26 16:54:32,404 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected/subdir1' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected/subdir1
> ** It moved the protected sub-dir to the trash, where it will be deleted **
> ** Checking the top level dir, it is the same **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected 
> rm: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected /dir/protected-new
> mv: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected 
> 2020-08-26 16:55:32,402 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected1598460932388
> {code}
> The reason for this seems to be that "move to trash" uses a different rename 
> method in FSNamesystem and FSDirRenameOp, which avoids the 
> DFSUtil.checkProtectedDescendants(...) check added in the earlier Jiras.
> I believe that "move to trash" should be protected in the same way as a 
> -skipTrash delete.






[jira] [Commented] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185380#comment-17185380
 ] 

Ayush Saxena commented on HDFS-15540:
-

Thanx [~sodonnell] for the report. It makes sense; we shouldn't allow moving to 
trash either.

> Directories protected from delete can still be moved to the trash
> -
>
> Key: HDFS-15540
> URL: https://issues.apache.org/jira/browse/HDFS-15540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> With HDFS-8983, HDFS-14802 and HDFS-15243 we are able to list protected 
> directories which cannot be deleted or renamed, provided the following is set:
> fs.protected.directories: 
> dfs.protected.subdirectories.enable: true
> Testing this feature out, I can see it mostly works fine, but protected 
> non-empty folders can still be moved to the trash. In this example 
> /dir/protected is set in fs.protected.directories, and 
> dfs.protected.subdirectories.enable is true.
> {code}
> hadoop fs -ls -R /dir
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir1
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir1/file1
> drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir2
> -rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir2/file1
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected/subdir1
> rm: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected/subdir1 
> /dir/protected/subdir1-moved
> mv: Cannot delete/rename subdirectory under protected subdirectory 
> /dir/protected
> ** ALL GOOD SO FAR **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected/subdir1
> 2020-08-26 16:54:32,404 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected/subdir1' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected/subdir1
> ** It moved the protected sub-dir to the trash, where it will be deleted **
> ** Checking the top level dir, it is the same **
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected 
> rm: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected /dir/protected-new
> mv: Cannot delete/rename non-empty protected directory /dir/protected
> [hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected 
> 2020-08-26 16:55:32,402 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nn1/dir/protected' to trash at: 
> hdfs://nn1/user/hdfs/.Trash/Current/dir/protected1598460932388
> {code}
> The reason for this seems to be that "move to trash" uses a different rename 
> method in FSNamesystem and FSDirRenameOp, which avoids the 
> DFSUtil.checkProtectedDescendants(...) check added in the earlier Jiras.
> I believe that "move to trash" should be protected in the same way as a 
> -skipTrash delete.






[jira] [Created] (HDFS-15540) Directories protected from delete can still be moved to the trash

2020-08-26 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDFS-15540:


 Summary: Directories protected from delete can still be moved to 
the trash
 Key: HDFS-15540
 URL: https://issues.apache.org/jira/browse/HDFS-15540
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.4.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


With HDFS-8983, HDFS-14802 and HDFS-15243 we are able to list protected 
directories which cannot be deleted or renamed, provided the following is set:

fs.protected.directories: 
dfs.protected.subdirectories.enable: true

Testing this feature out, I can see it mostly works fine, but protected 
non-empty folders can still be moved to the trash. In this example 
/dir/protected is set in fs.protected.directories, and 
dfs.protected.subdirectories.enable is true.


{code}
hadoop fs -ls -R /dir

drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected
-rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/file1
drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir1
-rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir1/file1
drwxr-xr-x - hdfs supergroup 0 2020-08-26 16:52 /dir/protected/subdir2
-rw-r--r-- 3 hdfs supergroup 174 2020-08-26 16:52 /dir/protected/subdir2/file1

[hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected/subdir1
rm: Cannot delete/rename subdirectory under protected subdirectory 
/dir/protected

[hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected/subdir1 
/dir/protected/subdir1-moved
mv: Cannot delete/rename subdirectory under protected subdirectory 
/dir/protected

** ALL GOOD SO FAR **

[hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected/subdir1
2020-08-26 16:54:32,404 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://nn1/dir/protected/subdir1' to trash at: 
hdfs://nn1/user/hdfs/.Trash/Current/dir/protected/subdir1

** It moved the protected sub-dir to the trash, where it will be deleted **

** Checking the top level dir, it is the same **

[hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f -skipTrash /dir/protected 
rm: Cannot delete/rename non-empty protected directory /dir/protected

[hdfs@7d67ed1af9b0 /]$ hadoop fs -mv /dir/protected /dir/protected-new
mv: Cannot delete/rename non-empty protected directory /dir/protected

[hdfs@7d67ed1af9b0 /]$ hadoop fs -rm -r -f /dir/protected 
2020-08-26 16:55:32,402 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://nn1/dir/protected' to trash at: 
hdfs://nn1/user/hdfs/.Trash/Current/dir/protected1598460932388
{code}

The reason for this seems to be that "move to trash" uses a different rename 
method in FSNamesystem and FSDirRenameOp, which avoids the 
DFSUtil.checkProtectedDescendants(...) check added in the earlier Jiras.

I believe that "move to trash" should be protected in the same way as a 
-skipTrash delete.
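For reference, the configuration used in the reproduction above would look roughly like this in hdfs-site.xml (the protected path value is the one from this example; your paths will differ):

{code}
<!-- hdfs-site.xml (illustrative values) -->
<property>
  <name>fs.protected.directories</name>
  <value>/dir/protected</value>
</property>
<property>
  <name>dfs.protected.subdirectories.enable</name>
  <value>true</value>
</property>
{code}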






[jira] [Updated] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-26 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15539:
--
Parent: HDFS-15477
Issue Type: Sub-task  (was: Improvement)

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> When snapshot is disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root in that dir anymore (if any). The risk is the trash root will be 
> left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.






[jira] [Created] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its snapshot root is not empty

2020-08-26 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-15539:
-

 Summary: When disallowing snapshot on a dir, throw exception if 
its snapshot root is not empty
 Key: HDFS-15539
 URL: https://issues.apache.org/jira/browse/HDFS-15539
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Siyao Meng
Assignee: Siyao Meng


When snapshot is disallowed on a dir, {{getTrashRoots()}} won't return the 
trash root in that dir anymore (if any). The risk is the trash root will be 
left there forever.
We need to throw an exception there and prompt the user to clean up or rename 
the trash root if it is not empty.
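A minimal sketch of the proposed guard (plain Java with hypothetical names; the real check would live in the NameNode's disallowSnapshot path):

```java
import java.util.List;

public class DisallowSnapshotGuard {
    /**
     * Sketch: before disallowing snapshots on a dir, refuse if its trash root
     * still contains entries, since getTrashRoots() would no longer report it
     * and its contents would be left there forever.
     */
    public static void checkTrashRootEmpty(String dir, List<String> trashEntries) {
        if (!trashEntries.isEmpty()) {
            throw new IllegalStateException(
                "Cannot disallow snapshot on " + dir + ": trash root is not "
                + "empty; please clean up or rename it first");
        }
    }

    public static void main(String[] args) {
        checkTrashRootEmpty("/snap/dir", List.of());  // empty trash root: allowed
        try {
            checkTrashRootEmpty("/snap/dir", List.of(".Trash/Current/file1"));
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```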






[jira] [Updated] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-26 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15539:
--
Summary: When disallowing snapshot on a dir, throw exception if its trash 
root is not empty  (was: When disallowing snapshot on a dir, throw exception if 
its snapshot root is not empty)

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> When snapshot is disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root in that dir anymore (if any). The risk is the trash root will be 
> left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.






[jira] [Commented] (HDFS-15117) EC: Add getECTopologyResultForPolicies to DistributedFileSystem

2020-08-26 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185278#comment-17185278
 ] 

Ayush Saxena commented on HDFS-15117:
-

Thanx [~umamaheswararao] for checking. Yes, it seems we can't directly 
cherry-pick this, due to a proto compatibility issue.

But I think, if we require this, changing it to:
{code:java}
List<String> policies = req.getPoliciesList();
{code}

should work, without any difference?
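The compatibility point can be shown in isolation (self-contained sketch; the {{ProtocolStringList}} below is a stand-in interface mirroring the protobuf-java type of the same name, and {{getPoliciesList()}} stands in for the generated accessor): because ProtocolStringList implements List&lt;String&gt;, code that declares the variable as List&lt;String&gt; compiles against either return type of the generated accessor.

```java
import java.util.ArrayList;
import java.util.List;

public class ProtoCompatSketch {
    // Stand-in for com.google.protobuf.ProtocolStringList, which extends List<String>.
    interface ProtocolStringList extends List<String> {}

    static class StringListImpl extends ArrayList<String> implements ProtocolStringList {}

    // Stand-in for the generated request message's repeated-string accessor.
    public static ProtocolStringList getPoliciesList() {
        StringListImpl l = new StringListImpl();
        l.add("RS-6-3-1024k");  // hypothetical EC policy name
        return l;
    }

    public static void main(String[] args) {
        // Declaring against the supertype works whether the generated accessor
        // returns List<String> (older protobuf) or ProtocolStringList (newer):
        List<String> policies = getPoliciesList();
        System.out.println(policies.size());
    }
}
```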

> EC: Add getECTopologyResultForPolicies to DistributedFileSystem
> ---
>
> Key: HDFS-15117
> URL: https://issues.apache.org/jira/browse/HDFS-15117
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: ec
> Fix For: 3.3.0
>
> Attachments: HDFS-15117-01.patch, HDFS-15117-02.patch, 
> HDFS-15117-03.patch, HDFS-15117-04.patch, HDFS-15117-05.patch, 
> HDFS-15117-06.patch, HDFS-15117-07.patch, HDFS-15117-08.patch
>
>
> Add getECTopologyResultForPolicies API to distributed filesystem.
> It is as of now only present as part of ECAdmin.






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185243#comment-17185243
 ] 

Hadoop QA commented on HDFS-15510:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 37m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
50s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
23s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185217#comment-17185217
 ] 

Hemanth Boyina commented on HDFS-15510:
---

Thanks for the review, [~tasanuma]. Updated the patch with 
[^HDFS-15510.006.patch], which fixes the test; waiting for Jenkins to check the 
result.

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, 
> HDFS-15510.003.patch, HDFS-15510.004.patch, HDFS-15510.005.patch, 
> HDFS-15510.006.patch
>
>
> steps:
> *) create a mount entry with multiple destinations (say, 2)
> *) set an NS quota of 10 on the mount entry via the dfsrouteradmin command; 
> Content Summary on the mount entry then shows the NS quota as 20
> *) create 10 files through the router; on creating the 11th file, an NS Quota 
> Exceeded Exception is thrown
> so although the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files
>  
> the problem here is that the router stores the mount entry's NS quota as 10 
> but applies an NS quota of 10 on each of the name services, so the content 
> summary on the mount entry aggregates the content summaries of both name 
> services, reporting the NS quota as 20
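The arithmetic of the bug can be sketched in isolation (plain Java with hypothetical names; the aggregation below mimics what the router's content-summary path appears to do per this report):

```java
import java.util.Arrays;

public class RouterQuotaSketch {
    /** NS quota the admin sets on the mount entry. */
    static final long MOUNT_NS_QUOTA = 10;

    /** The router applies the full mount quota to each destination nameservice. */
    public static long[] perNameserviceQuotas(int destinations) {
        long[] quotas = new long[destinations];
        Arrays.fill(quotas, MOUNT_NS_QUOTA);
        return quotas;
    }

    /** Content summary on the mount entry sums the per-nameservice quotas. */
    public static long aggregatedQuota(long[] quotas) {
        long sum = 0;
        for (long q : quotas) {
            sum += q;
        }
        return sum;
    }

    public static void main(String[] args) {
        // For a 2-destination mount with quota 10, the summary reports 20,
        // even though file creation is rejected past 10.
        System.out.println(aggregatedQuota(perNameserviceQuotas(2)));
    }
}
```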






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185210#comment-17185210
 ] 

Takanobu Asanuma commented on HDFS-15510:
-

[~hemanthboyina] Looks good to me, except for the test. Could you fix it?

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, 
> HDFS-15510.003.patch, HDFS-15510.004.patch, HDFS-15510.005.patch, 
> HDFS-15510.006.patch
>
>
> steps:
> *) create a mount entry with multiple destinations (say, 2)
> *) set an NS quota of 10 on the mount entry via the dfsrouteradmin command; 
> Content Summary on the mount entry then shows the NS quota as 20
> *) create 10 files through the router; on creating the 11th file, an NS Quota 
> Exceeded Exception is thrown
> so although the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files
>  
> the problem here is that the router stores the mount entry's NS quota as 10 
> but applies an NS quota of 10 on each of the name services, so the content 
> summary on the mount entry aggregates the content summaries of both name 
> services, reporting the NS quota as 20






[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185183#comment-17185183
 ] 

Hadoop QA commented on HDFS-15267:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
29s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
5s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 

[jira] [Updated] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HDFS-15510:
--
Attachment: HDFS-15510.006.patch

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, 
> HDFS-15510.003.patch, HDFS-15510.004.patch, HDFS-15510.005.patch, 
> HDFS-15510.006.patch
>
>
> Steps:
> *) Create a mount entry with multiple destinations (for example, 2)
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin 
> command; Content Summary on the mount entry shows the NS quota as 20
> *) Create 10 files through the router; on creating the 11th file, an NS 
> Quota Exceeded Exception is thrown
> Though the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files.
>  
> The problem here is that the router stores the mount entry's NS quota as 10 
> but sets an NS quota of 10 on both name services, so the content summary on 
> the mount entry aggregates the content summaries of both name services and 
> reports the NS quota as 20.
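To make the double-counting concrete, here is a minimal, self-contained sketch of the aggregation described above (the class and method names are illustrative, not the actual RBF Router code):

```java
import java.util.List;

public class QuotaAggregationSketch {

    // Mimics how the mount entry's content summary is built: the NS quota
    // that the Router applied to each destination nameservice is summed.
    static long aggregatedNsQuota(List<Long> perNameServiceQuota) {
        return perNameServiceQuota.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        // The mount entry's NS quota is 10, but it is set on both destinations.
        List<Long> destinations = List.of(10L, 10L);
        System.out.println(aggregatedNsQuota(destinations)); // prints 20
        // Each nameservice still enforces its own limit of 10, so the 11th
        // file is rejected even though the summary advertises 20.
    }
}
```

Any fix would have to either split the mount entry's quota across the destinations or avoid summing it during aggregation; which approach the attached patch takes is not shown here.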



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-08-26 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185104#comment-17185104
 ] 

Vinayakumar B commented on HDFS-15098:
--

Hi [~seanlau], please check the comments below.

1. Rename {{JceCtrcryptoCodec}} to {{JceCtrCryptoCodec}}.

2. There is no need for the following redundant constants; use the original 
constants directly instead. For example, instead of {{AES_BLOCK_SIZE}}, use 
{{CipherSuite.AES_CTR_NOPADDING.getAlgorithmBlockSize()}}:
{code:java}
  protected static final CipherSuite AES_SUITE = CipherSuite.AES_CTR_NOPADDING;
  protected static final CipherSuite SM4_SUITE = CipherSuite.SM4_CTR_NOPADDING;

  protected static final int AES_BLOCK_SIZE = AES_SUITE.getAlgorithmBlockSize();
  protected static final int SM4_BLOCK_SIZE = SM4_SUITE.getAlgorithmBlockSize();
{code}
Example in JceAesCtrCryptoCodec
{code:java}
  @Override
  public CipherSuite  getCipherSuite() {
return CipherSuite.AES_CTR_NOPADDING;
  }

  @Override
  public void calculateIV(byte[] initIV, long counter, byte[] iv) {
super.calculateIV(initIV, counter, iv,
getCipherSuite().getAlgorithmBlockSize());
  }

  @Override
  public Encryptor createEncryptor() throws GeneralSecurityException {
return new JceCtrCipher(Cipher.ENCRYPT_MODE, getProvider(),
getCipherSuite(), "AES");
  }

  @Override
  public Decryptor createDecryptor() throws GeneralSecurityException {
return new JceCtrCipher(Cipher.DECRYPT_MODE, getProvider(),
getCipherSuite(), "AES");
  }
{code}
3. In JceCtrCryptoCodec, instead of the following method,
{code:java}
public void log(GeneralSecurityException e) {
   }
   {code}
declare an abstract method that returns the Logger, and use that logger to 
log the message in JceCtrCryptoCodec.java:
{code:java}
 protected abstract Logger getLogger();
   {code}
and
{code:java}
   } catch(GeneralSecurityException e) {
  getLogger().warn(e.getMessage());
  random = new SecureRandom();
}
   {code}
In JceAesCtrCryptoCodec and JceSm4CtrCryptoCodec
{code:java}
@Override
public Logger getLogger() {
return LOG;
}
   {code}
4. Similar to #2 and #3, make the corresponding changes in 
OpensslCtrCryptoCodec, OpensslAesCtrCryptoCodec and OpensslSm4CtrCryptoCodec. 
Instead of {{useLog}}, use a {{getLogger()}} method to obtain the 
corresponding logger; move the following loggers into their respective 
classes, make them private, and return them from {{getLogger()}}.
{code:java}
   protected static final Logger SM4_LOG =
LoggerFactory.getLogger(OpensslSm4CtrCryptoCodec.class.getName());
  protected static final Logger AES_LOG =
LoggerFactory.getLogger(OpensslAesCtrCryptoCodec.class.getName());

 {code}
5. In OpensslCtrCryptoCodec, {{engineId}} is initialized only in the SM4 
case, so this code can be moved to {{OpensslSm4CtrCryptoCodec#setConf()}} as 
below:
{code:java}
   @Override
  public void setConf(Configuration conf) {
super.setConf(conf);
setEngineId(conf.get(HADOOP_SECURITY_OPENSSL_ENGINE_ID_KEY));
  }
 {code}
6. {{OpensslCtrCryptoCodec#close()}} can be changed as below
{code:java}
   public void close() throws IOException {
if (this.random instanceof Closeable) {
  Closeable r = (Closeable) this.random;
  IOUtils.cleanupWithLogger(getLogger(), r);
}
  }
 {code}
7. Revert the unnecessary changes in HdfsKMSUtil.java.
 8. In the KeyProvider constructor, move the addition of BouncyCastleProvider 
out of the if block, as follows:
{code:java}
   public KeyProvider(Configuration conf) {
this.conf = new Configuration(conf);
// Added for HADOOP-15473. Configured serialFilter property fixes
// java.security.UnrecoverableKeyException in JDK 8u171.
if(System.getProperty(JCEKS_KEY_SERIAL_FILTER) == null) {
  String serialFilter =
  conf.get(HADOOP_SECURITY_CRYPTO_JCEKS_KEY_SERIALFILTER,
  JCEKS_KEY_SERIALFILTER_DEFAULT);
  System.setProperty(JCEKS_KEY_SERIAL_FILTER, serialFilter);
}
String jceProvider = conf.get(HADOOP_SECURITY_CRYPTO_JCE_PROVIDER_KEY);
if (BouncyCastleProvider.PROVIDER_NAME.equals(jceProvider)) {
  Security.addProvider(new BouncyCastleProvider());
}
  }
 {code}
9. Right now, all tests in {{TestCryptoStreamsWithJceSm4CtrCryptoCodec}} are 
being skipped. Make the following changes to fix this.
 Remove the code below from {{TestCryptoStreamsWithJceSm4CtrCryptoCodec#init()}}:
{code:java}
 try {
  KeyGenerator keyGenerator = KeyGenerator.getInstance("SM4");
} catch (Exception e) {
  Assume.assumeTrue(false);
}
 {code}
Add the following config in {{TestCryptoStreamsWithJceSm4CtrCryptoCodec#init()}}:
{code:java}
 conf.set(HADOOP_SECURITY_CRYPTO_JCE_PROVIDER_KEY, 
BouncyCastleProvider.PROVIDER_NAME);
 {code}
10. To avoid javac warnings in 
{{TestCryptoStreamsWithOpensslSm4CtrCryptoCodec}} and 
{{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} 

[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185075#comment-17185075
 ] 

Lisheng Sun commented on HDFS-15267:


Attached the v002 patch to trigger a new check.

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch, HDFS-15267.002.patch
>
>
> As of now, no count of ops is maintained for HttpFSFileSystem as is done 
> for DistributedFileSystem & WebHdfsFileSystem






[jira] [Updated] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15267:
---
Attachment: HDFS-15267.002.patch

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch, HDFS-15267.002.patch
>
>
> As of now, no count of ops is maintained for HttpFSFileSystem as is done 
> for DistributedFileSystem & WebHdfsFileSystem






[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185050#comment-17185050
 ] 

Hadoop QA commented on HDFS-15267:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs-project in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs-project in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-hdfs-project {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs-httpfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-httpfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-hdfs-httpfs in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
28s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-httpfs in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-project in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 12s{color} 
| {color:red} 

[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185040#comment-17185040
 ] 

Hadoop QA commented on HDFS-15510:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Assigned] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-08-26 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-15516:
--

Assignee: jianghua zhu  (was: Shashikant Banerjee)

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>
> Currently, if a file create happens with flags like overwrite, the audit 
> logs don't contain any info about those flags. It would be useful to add 
> info about the create options to the audit logs, similar to rename ops.






[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-08-26 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15516:
---
Reporter: Shashikant Banerjee  (was: jianghua zhu)

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> Currently, if a file create happens with flags like overwrite, the audit 
> logs don't contain any info about those flags. It would be useful to add 
> info about the create options to the audit logs, similar to rename ops.






[jira] [Commented] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185016#comment-17185016
 ] 

Lisheng Sun commented on HDFS-15267:


Hi [~ayushtkn], are you still working on this jira?

If not, may I take it over?

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch
>
>
> As of now, no count of ops is maintained for HttpFSFileSystem as is done 
> for DistributedFileSystem & WebHdfsFileSystem






[jira] [Updated] (HDFS-15267) Implement Statistics Count for HttpFSFileSystem

2020-08-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15267:
---
Attachment: HDFS-15267.001.patch
Status: Patch Available  (was: Open)

> Implement Statistics Count for HttpFSFileSystem
> ---
>
> Key: HDFS-15267
> URL: https://issues.apache.org/jira/browse/HDFS-15267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15267.001.patch
>
>
> As of now, no count of ops is maintained for HttpFSFileSystem as is done 
> for DistributedFileSystem & WebHdfsFileSystem






[jira] [Updated] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-26 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HDFS-15510:
--
Attachment: HDFS-15510.005.patch

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, 
> HDFS-15510.003.patch, HDFS-15510.004.patch, HDFS-15510.005.patch
>
>
> Steps:
> *) Create a mount entry with multiple destinations (for example, 2)
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin 
> command; Content Summary on the mount entry shows the NS quota as 20
> *) Create 10 files through the router; on creating the 11th file, an NS 
> Quota Exceeded Exception is thrown
> Though the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files.
>  
> The problem here is that the router stores the mount entry's NS quota as 10 
> but sets an NS quota of 10 on both name services, so the content summary on 
> the mount entry aggregates the content summaries of both name services and 
> reports the NS quota as 20.






[jira] [Updated] (HDFS-15536) RBF: Clear Quota in Router was not consistent

2020-08-26 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HDFS-15536:
--
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> RBF:  Clear Quota in Router was not consistent 
> ---
>
> Key: HDFS-15536
> URL: https://issues.apache.org/jira/browse/HDFS-15536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15536.001.patch, HDFS-15536.002.patch, 
> HDFS-15536.003.patch, HDFS-15536.testrepro.patch
>
>
> *) Create a mount point
> *) Set a quota for the mount point through dfsrouteradmin
> *) Clear the quota for the same mount point through dfsrouteradmin
> Check the content summary of the mount point: the quota was not cleared, 
> even though the mount table store shows the quota as cleared






[jira] [Commented] (HDFS-15536) RBF: Clear Quota in Router was not consistent

2020-08-26 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184990#comment-17184990
 ] 

Hemanth Boyina commented on HDFS-15536:
---

thanks for the review [~elgoiri]

committed to trunk and branch 3.3

> RBF:  Clear Quota in Router was not consistent 
> ---
>
> Key: HDFS-15536
> URL: https://issues.apache.org/jira/browse/HDFS-15536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
> Attachments: HDFS-15536.001.patch, HDFS-15536.002.patch, 
> HDFS-15536.003.patch, HDFS-15536.testrepro.patch
>
>
> *) Create a mount point
> *) Set a quota for the mount point through dfsrouteradmin
> *) Clear the quota for the same mount point through dfsrouteradmin
> Check the content summary of the mount point: the quota was not cleared, 
> even though the mount table store shows the quota as cleared






[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2020-08-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184986#comment-17184986
 ] 

Lisheng Sun commented on HDFS-14694:


The failed UT passes in my local environment.
{quote}TestHDFSContractMultipartUploader>AbstractContractMultipartUploaderTest.testConcurrentUploads:815
 ? IllegalArgument
{quote}
This failed UT is mentioned in HDFS-15471, which will fix it.

So all the failed UTs are unrelated to this patch.

[~ayushtkn], could you help review this patch? Thank you.

> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, 
> HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch, 
> HDFS-14694.006.patch
>
>
> HDFS uses file leases to manage opened files; when a file is not closed 
> normally, the NN recovers the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the hdfs-client 
> never dies, so the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were 
> not closed normally), but in our experience most services (in our company) 
> don't handle this condition properly, which leaves many files in an 
> abnormal state or even causes data loss.
> This Jira proposes adding a feature that calls the recoverLease operation 
> automatically when DFSOutputStream#close encounters an exception. It should 
> be disabled by default, but anyone building a long-running service on HDFS 
> can enable this option.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years, and it's quite useful in our experience.
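The proposed behaviour can be sketched as a small, self-contained pattern. Note that {{LeaseRecoverer}} and {{closeQuietlyWithRecovery}} are hypothetical stand-ins for the real {{DistributedFileSystem#recoverLease(Path)}} call; this is not the actual DFSOutputStream code:

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseWithRecoverySketch {

    // Hypothetical stand-in for DistributedFileSystem#recoverLease(Path).
    interface LeaseRecoverer {
        void recoverLease(String path);
    }

    // On a close() failure, trigger lease recovery (when the feature is
    // enabled) instead of leaving the file's lease held forever.
    static boolean closeQuietlyWithRecovery(Closeable out, String path,
                                            LeaseRecoverer recoverer,
                                            boolean recoveryEnabled) {
        try {
            out.close();
            return true;                       // closed normally
        } catch (IOException e) {
            if (recoveryEnabled) {
                recoverer.recoverLease(path);  // give the NN a chance to free the lease
            }
            return false;
        }
    }

    public static void main(String[] args) {
        // A stream whose close() fails, e.g. due to a pipeline error.
        Closeable failing = () -> { throw new IOException("pipeline error"); };
        boolean closed = closeQuietlyWithRecovery(
            failing, "/tmp/file",
            p -> System.out.println("recoverLease " + p), true);
        System.out.println(closed);
    }
}
```

Keeping the flag off by default, as the description suggests, avoids surprising short-lived clients whose process exit already lets the NN reclaim the lease after the hard limit.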






[jira] [Assigned] (HDFS-15471) TestHDFSContractMultipartUploader fails on trunk

2020-08-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-15471:
--

Assignee: Lisheng Sun

> TestHDFSContractMultipartUploader fails on trunk
> 
>
> Key: HDFS-15471
> URL: https://issues.apache.org/jira/browse/HDFS-15471
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Lisheng Sun
>Priority: Major
>  Labels: test
>
> {{TestHDFSContractMultipartUploader}} fails on trunk with 
> {{IllegalArgumentException}}
> {code:bash}
> [ERROR] 
> testConcurrentUploads(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader)
>   Time elapsed: 0.127 s  <<< ERROR!
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.(LambdaTestUtils.java:907)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
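The {{IllegalArgumentException}} above originates from a Guava {{Preconditions.checkArgument}} call in the {{ProportionalRetryInterval}} constructor. A minimal, self-contained model of that pattern follows; the specific interval/timeout constraint shown is illustrative, not the exact check in LambdaTestUtils:

```java
public class CheckArgumentSketch {

    // Minimal stand-in for com.google.common.base.Preconditions.checkArgument:
    // throws IllegalArgumentException when the condition is false.
    static void checkArgument(boolean expression) {
        if (!expression) {
            throw new IllegalArgumentException();
        }
    }

    // Illustrative constructor-style validation: an interval must be positive
    // and must not exceed the overall timeout.
    static void validateInterval(int intervalMillis, int timeoutMillis) {
        checkArgument(intervalMillis > 0);
        checkArgument(intervalMillis <= timeoutMillis);
    }

    public static void main(String[] args) {
        validateInterval(100, 1000);       // fine
        try {
            validateInterval(0, 1000);     // violates the precondition
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException, as in the stack trace");
        }
    }
}
```

Because checkArgument throws with no message here, the resulting stack trace shows a bare {{java.lang.IllegalArgumentException}}, matching the report above.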






[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2020-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184973#comment-17184973
 ] 

Hadoop QA commented on HDFS-14694:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 32s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 49s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 46s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 26s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| 

[jira] [Commented] (HDFS-15117) EC: Add getECTopologyResultForPolicies to DistributedFileSystem

2020-08-26 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184957#comment-17184957
 ] 

Uma Maheswara Rao G commented on HDFS-15117:


[~ayushtkn], Thanks for checking.
I think we can't merge this patch to branch-3.2 because of protobuf version
compatibility issues.
{code:java}
@Override
public GetECTopologyResultForPoliciesResponseProto getECTopologyResultForPolicies(
    RpcController controller, GetECTopologyResultForPoliciesRequestProto req)
    throws ServiceException {
  try {
    ProtocolStringList policies = req.getPoliciesList();
{code}
Looks like we used the ProtocolStringList class, which was added in
protobuf-java 2.6.
Since branch-3.2 still depends on 2.5, this can't be compiled there.
[https://abi-laboratory.pro/?view=compat_report=java=protobuf-java=2.5.0=2.6.0=be7b9=src]
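To illustrate the compatibility point: in protobuf-java 2.6+ the generated getter for a repeated string field returns ProtocolStringList, a subtype of List<String>; in 2.5 it returned plain List<String>. Code that declares the variable as List<String> compiles against either version. The sketch below models that with a hypothetical stand-in class (FakeProtocolStringList, ProtoCompatSketch are illustrative names, not Hadoop code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for protobuf 2.6's ProtocolStringList: the only
// property we rely on is that it is a subtype of List<String>.
class FakeProtocolStringList extends ArrayList<String> {
    FakeProtocolStringList(List<String> src) {
        super(src);
    }
}

public class ProtoCompatSketch {
    // Simulates a generated getter whose declared return type narrowed
    // from List<String> (protobuf 2.5) to ProtocolStringList (2.6+).
    static FakeProtocolStringList getPoliciesList() {
        return new FakeProtocolStringList(
            Arrays.asList("RS-6-3-1024k", "XOR-2-1-1024k"));
    }

    public static void main(String[] args) {
        // Declaring the local as List<String> (the 2.5-era supertype)
        // compiles regardless of which concrete type the getter returns.
        List<String> policies = getPoliciesList();
        System.out.println(policies.size()); // prints 2
    }
}
```

This only sidesteps the source-level dependency on the ProtocolStringList symbol; whether that alone makes the patch buildable on branch-3.2 would still need verification against the 2.5-generated stubs.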

> EC: Add getECTopologyResultForPolicies to DistributedFileSystem
> ---
>
> Key: HDFS-15117
> URL: https://issues.apache.org/jira/browse/HDFS-15117
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: ec
> Fix For: 3.3.0
>
> Attachments: HDFS-15117-01.patch, HDFS-15117-02.patch, 
> HDFS-15117-03.patch, HDFS-15117-04.patch, HDFS-15117-05.patch, 
> HDFS-15117-06.patch, HDFS-15117-07.patch, HDFS-15117-08.patch
>
>
> Add getECTopologyResultForPolicies API to distributed filesystem.
> It is as of now only present as part of ECAdmin.


