[jira] [Commented] (HDFS-15755) Import powermock to test of hadoop-hdfs-project

2020-12-29 Thread Yang Yun (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256320#comment-17256320 ]

Yang Yun commented on HDFS-15755:
-

Thanks [~ayushtkn] for the review.

I'm writing a test case that checks mounted-volume failure and uses PowerMock 
to mock DiskChecker. I'll update this Jira with the code changes later.

For the version, I just copied it from the YARN project; I'll try the default 
version or change it in hadoop-project/pom.xml.
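
A test along those lines might look roughly like the following — a minimal sketch only, assuming PowerMock's mockStatic API; the test class name and the choice of stubbing DiskChecker.checkDir are illustrative and not taken from the actual patch:

```java
import java.io.File;

import org.apache.hadoop.util.DiskChecker;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito2.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Illustrative sketch: not from the HDFS-15755 patch.
@RunWith(PowerMockRunner.class)
@PrepareForTest(DiskChecker.class)
public class TestMountedVolumeFailure {

  @Test
  public void testVolumeFailureDetected() throws Exception {
    // Make every static DiskChecker.checkDir call fail,
    // simulating a bad mounted volume.
    PowerMockito.mockStatic(DiskChecker.class);
    PowerMockito.doThrow(
        new DiskChecker.DiskErrorException("simulated mount failure"))
        .when(DiskChecker.class);
    DiskChecker.checkDir(new File("/mnt/data1")); // registers the stub

    // ... then exercise the DataNode code path that performs the disk
    // check and assert that the volume is marked as failed ...
  }
}
```

This is exactly the case plain Mockito 2.8.x cannot cover on its own, since it cannot stub static methods.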

> Import powermock to test of hadoop-hdfs-project
> ---
>
> Key: HDFS-15755
> URL: https://issues.apache.org/jira/browse/HDFS-15755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, test
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15755.001.patch
>
>
> For mocking static methods in unit tests, import PowerMock into 
> hadoop-hdfs-project.
> The same version of PowerMock has already been imported in parts of hadoop-yarn-project.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15755) Import powermock to test of hadoop-hdfs-project

2020-12-29 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256304#comment-17256304 ]

Hadoop QA commented on HDFS-15755:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
29s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 34s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green}{color} | {color:green} The patch has no ill-formed 
XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 46s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/384/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | 

[jira] [Commented] (HDFS-15755) Import powermock to test of hadoop-hdfs-project

2020-12-29 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256284#comment-17256284 ]

Ayush Saxena commented on HDFS-15755:
-

Do you intend to use PowerMock somewhere in HDFS? If yes, can you link that 
Jira? If not, we can probably wait until we have a requirement to do so.

If yes:
* Why this?

{code:xml}
   <artifactId>mockito-core</artifactId>
+  <version>2.8.9</version>
{code}
The version should be changed in hadoop-project/pom.xml only.
* Here too the version should be picked up from hadoop-project/pom.xml; you can 
use the powermock.version property

{code:xml}
+<dependency>
+  <groupId>org.powermock</groupId>
+  <artifactId>powermock-api-mockito2</artifactId>
+  <version>1.7.1</version>
{code}

And usually the dependency itself should be declared there as well.
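
Concretely, the suggestion amounts to something like the following — a sketch only; the powermock.version property name and the 1.7.1 value are carried over from the patch, and the actual layout of hadoop-project/pom.xml may differ:

```xml
<!-- In hadoop-project/pom.xml: define the version once... -->
<properties>
  <powermock.version>1.7.1</powermock.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.powermock</groupId>
      <artifactId>powermock-api-mockito2</artifactId>
      <version>${powermock.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- ...then hadoop-hdfs/pom.xml references it without a version: -->
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-api-mockito2</artifactId>
  <scope>test</scope>
</dependency>
```

This keeps every module on the same PowerMock version and makes future upgrades a one-line change.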









[jira] [Work logged] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15754?focusedWorklogId=529363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-529363 ]

ASF GitHub Bot logged work on HDFS-15754:
-

Author: ASF GitHub Bot
Created on: 30/Dec/20 03:11
Start Date: 30/Dec/20 03:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#issuecomment-752311951


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 40s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 38s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 46s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 new + 124 unchanged 
- 0 fixed = 132 total (was 124)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 148m 13s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | hadoop.tracing.TestTracing |
   |   | hadoop.hdfs.TestFileAppend4 |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2578 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0f0a973ee3b8 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk 

[jira] [Work logged] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15754?focusedWorklogId=529362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-529362 ]

ASF GitHub Bot logged work on HDFS-15754:
-

Author: ASF GitHub Bot
Created on: 30/Dec/20 03:10
Start Date: 30/Dec/20 03:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#issuecomment-752311686


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 36s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 34s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 49s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 new + 124 unchanged 
- 0 fixed = 132 total (was 124)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 36s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 146m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 249m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.TestHAAuxiliaryPort |
   |   | hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.ha.TestNNHealthCheck |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestDFSClientFailover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[jira] [Updated] (HDFS-15755) Import powermock to test of hadoop-hdfs-project

2020-12-29 Thread Yang Yun (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Yun updated HDFS-15755:

Attachment: HDFS-15755.001.patch
Status: Patch Available  (was: Open)







[jira] [Created] (HDFS-15755) Import powermock to test of hadoop-hdfs-project

2020-12-29 Thread Yang Yun (Jira)
Yang Yun created HDFS-15755:
---

 Summary: Import powermock to test of hadoop-hdfs-project
 Key: HDFS-15755
 URL: https://issues.apache.org/jira/browse/HDFS-15755
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, test
Reporter: Yang Yun
Assignee: Yang Yun


For mocking static methods in unit tests, import PowerMock into 
hadoop-hdfs-project.

The same version of PowerMock has already been imported in parts of hadoop-yarn-project.






[jira] [Work logged] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15754?focusedWorklogId=529308&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-529308 ]

ASF GitHub Bot logged work on HDFS-15754:
-

Author: ASF GitHub Bot
Created on: 29/Dec/20 23:12
Start Date: 29/Dec/20 23:12
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#discussion_r549883080



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
##
@@ -183,6 +183,11 @@
   @Metric private MutableRate checkAndUpdateOp;
   @Metric private MutableRate updateReplicaUnderRecoveryOp;
 
+  @Metric MutableCounterLong totalPacketsReceived;

Review comment:
   We'll need to add these new metrics to 
[here](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Metrics.html#datanode)
 right?
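
If so, the addition would presumably be a few new rows in the datanode table of the Metrics documentation, along these lines — the metric names come from the patch's test, but the descriptions below are illustrative guesses, not wording from the patch:

```
| `TotalPacketsReceived` | Total number of packets received by the DataNode |
| `TotalPacketsSlowWriteToMirror` | Total number of packets whose write to the downstream mirror was slow |
| `TotalPacketsSlowWriteToDisk` | Total number of packets whose write to disk was slow |
| `TotalPacketsSlowWriteOsCache` | Total number of packets whose write to the OS cache was slow |
```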

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
##
@@ -161,6 +163,65 @@ public void testReceivePacketMetrics() throws Exception {
 }
   }
 
+  @Test
+  public void testReceivePacketSlowMetrics() throws Exception {
+Configuration conf = new HdfsConfiguration();
+final int interval = 1;
+conf.set(DFSConfigKeys.DFS_METRICS_PERCENTILES_INTERVALS_KEY, "" + 
interval);
+MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(3).build();
+try {
+  cluster.waitActive();
+  DistributedFileSystem fs = cluster.getFileSystem();
+  final DataNodeFaultInjector injector =
+  Mockito.mock(DataNodeFaultInjector.class);
+  Mockito.doAnswer(new Answer<Object>() {
+@Override
+public Object answer(InvocationOnMock invocationOnMock)
+throws Throwable {
+  // make the op taking longer time
+  Thread.sleep(1000);
+  return null;
+}
+  }).when(injector).stopSendingPacketDownstream(Mockito.anyString());
+  Mockito.doAnswer(new Answer<Object>() {
+@Override
+public Object answer(InvocationOnMock invocationOnMock)
+throws Throwable {
+  // make the op taking longer time
+  Thread.sleep(1000);
+  return null;
+}
+  }).when(injector).delayWriteToOsCache();
+  Mockito.doAnswer(new Answer<Object>() {
+@Override
+public Object answer(InvocationOnMock invocationOnMock)
+throws Throwable {
+  // make the op taking longer time
+  Thread.sleep(1000);
+  return null;
+}
+  }).when(injector).delayWriteToDisk();
+  DataNodeFaultInjector.set(injector);
+  Path testFile = new Path("/testFlushNanosMetric.txt");
+  FSDataOutputStream fout = fs.create(testFile);
+  fout.write(new byte[1]);
+  fout.hsync();
+  fout.close();
+  List<DataNode> datanodes = cluster.getDataNodes();
+  DataNode datanode = datanodes.get(0);
+  MetricsRecordBuilder dnMetrics = 
getMetrics(datanode.getMetrics().name());
+  assertTrue("More than 1 packet received",
+  getLongCounter("TotalPacketsReceived", dnMetrics) > 1L);
+  assertTrue("More than 1 slow packet to mirror",
+  getLongCounter("TotalPacketsSlowWriteToMirror", dnMetrics) > 1L);
+  assertCounter("TotalPacketsSlowWriteToDisk", 1L, dnMetrics);
+  assertCounter("TotalPacketsSlowWriteOsCache", 0L, dnMetrics);
+} finally {
+  if (cluster != null) {cluster.shutdown();}

Review comment:
   nit: code style





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 529308)
Time Spent: 20m  (was: 10m)

> Create packet metrics for DataNode
> --
>
> Key: HDFS-15754
> URL: https://issues.apache.org/jira/browse/HDFS-15754
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In BlockReceiver, slowness in writeToMirror, writeToDisk, and writeToOsCache 
> is currently only dumped in the debug log. In practice we have found these are 
> quite useful signals for detecting issues in a DataNode, so it would be great 
> if these metrics could be exposed via JMX.
> Also we introduced totalPacketsReceived so a percentage can be used as a 
> signal to detect a potentially underperforming DataNode, since DataNodes 
> across one HDFS cluster may receive different total numbers of packets.

[jira] [Work logged] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15754?focusedWorklogId=529303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-529303 ]

ASF GitHub Bot logged work on HDFS-15754:
-

Author: ASF GitHub Bot
Created on: 29/Dec/20 22:59
Start Date: 29/Dec/20 22:59
Worklog Time Spent: 10m 
  Work Description: fengnanli opened a new pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 529303)
Remaining Estimate: 0h
Time Spent: 10m







[jira] [Updated] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-15754:
--
Labels: pull-request-available  (was: )







[jira] [Created] (HDFS-15754) Create packet metrics for DataNode

2020-12-29 Thread Fengnan Li (Jira)
Fengnan Li created HDFS-15754:
-

 Summary: Create packet metrics for DataNode
 Key: HDFS-15754
 URL: https://issues.apache.org/jira/browse/HDFS-15754
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Fengnan Li
Assignee: Fengnan Li


In BlockReceiver, slowness in writeToMirror, writeToDisk, and writeToOsCache is 
currently only dumped in the debug log. In practice we have found these are 
quite useful signals for detecting issues in a DataNode, so it would be great 
if these metrics could be exposed via JMX.
Also we introduced totalPacketsReceived so a percentage can be used as a signal 
to detect a potentially underperforming DataNode, since DataNodes across one 
HDFS cluster may receive different total numbers of packets.






[jira] [Updated] (HDFS-15690) Add lz4-java as hadoop-hdfs test dependency

2020-12-29 Thread Chao Sun (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-15690:

Fix Version/s: 3.3.1

> Add lz4-java as hadoop-hdfs test dependency
> ---
>
> Key: HDFS-15690
> URL: https://issues.apache.org/jira/browse/HDFS-15690
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: L. C. Hsieh
>Assignee: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> TestFSImage.testNativeCompression fails with "java.lang.NoClassDefFoundError: 
> net/jpountz/lz4/LZ4Factory":
> https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/305/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFSImage/testNativeCompression/
> We need to add lz4-java to hadoop-hdfs test dependency.






[jira] [Commented] (HDFS-15748) RBF: Move the router related part from hadoop-federation-balance module to hadoop-hdfs-rbf.

2020-12-29 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256148#comment-17256148 ]

Hadoop QA commented on HDFS-15748:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  5s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  1s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 52s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 12s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 41s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 40s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  5s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 54s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  4s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  4s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  2s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 32s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 36s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 26m 36s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/383/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color} | {color:red} root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 139 new + 1902 unchanged - 139 fixed = 2041 total (was 2041) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 12s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 23m 12s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/383/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt{color} | {color:red} root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 122 new + 1813 unchanged - 122 fixed = 1935 total (was 1935) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  3m 24s{color} | 

[jira] [Commented] (HDFS-15748) RBF: Move the router related part from hadoop-federation-balance module to hadoop-hdfs-rbf.

2020-12-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256078#comment-17256078
 ] 

Íñigo Goiri commented on HDFS-15748:


Just to be clear, I was fine with doing the move; I was just highlighting that 
this would usually have implications, but given that we haven't released yet, it 
is fine. I like the "rbfbalance" naming, though.

> RBF: Move the router related part from hadoop-federation-balance module to 
> hadoop-hdfs-rbf.
> ---
>
> Key: HDFS-15748
> URL: https://issues.apache.org/jira/browse/HDFS-15748
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15748.001.patch, HDFS-15748.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15748) RBF: Move the router related part from hadoop-federation-balance module to hadoop-hdfs-rbf.

2020-12-29 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256031#comment-17256031
 ] 

Jinglun commented on HDFS-15748:


Hi [~elgoiri], thanks for your comments! I didn't do this directly because it 
would be a large change and hard to review, so I planned to do it in several 
small steps.

Submitted v002: it makes hadoop-hdfs-rbf depend on hadoop-federation-balance 
(provided scope), and hadoop-federation-balance depend on hadoop-distcp 
(provided scope).
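The provided-scope arrangement described above can be sketched as a pom.xml fragment. This is an illustration only, not the actual patch; the groupId and version property follow Hadoop's usual Maven conventions but are assumptions here:

```xml
<!-- Sketch for hadoop-hdfs-rbf/pom.xml: compile against the balance
     module without bundling it; hadoop-federation-balance would in turn
     declare hadoop-distcp with the same provided scope. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-federation-balance</artifactId>
  <version>${project.version}</version>
  <scope>provided</scope>
</dependency>
```

With provided scope the classes are available at compile time but the jar is expected on the runtime classpath, which avoids a hard packaging dependency between the modules.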




[jira] [Updated] (HDFS-15748) RBF: Move the router related part from hadoop-federation-balance module to hadoop-hdfs-rbf.

2020-12-29 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15748:
---
Attachment: HDFS-15748.002.patch




[jira] [Reopened] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie reopened HDFS-15752:
--

Leaving this open until the release version is given.

> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> Attaching my reproduction steps to show why we need to prevent this.
> {quote}reproduce step
>  # Log in as one user, in our case the superuser.
>  # hadoop fs -mkdir /private
>  # hadoop fs -chmod 700 /private
>  # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
>  # hadoop fs -chmod 700 /private/file_name_sensitive.txt   #(the names of 
> files in /private can be a company name, bank name, customer's name, or other 
> sensitive information, so we need to chmod /private and the files in it to 
> 700)
>  # Log in as a non-admin user, named user1.
>  # hdfs fsck -blockId $blockID   # $blockID belongs to 
> file_name_sensitive.txt; user1 can infer the block ID from his/her own 
> block IDs, or find a suitable one by brute-force search.
>  # Check the output:
>                Block Id: blk_1073741825
>                Block belongs to: 
> {color:#ff}/private/file_name_sensitive.txt{color}
>                No. of Expected Replica: 3
>                No. of live Replica: 2
>                No. of excess Replica: 0
>                No. of stale Replica: 0
>                No. of decommissioned Replica: 0
>                No. of decommissioning Replica: 0
>                No. of corrupted Replica: 0
>                Block replica on datanode/rack: hadoop13/default-rack is 
> HEALTHY
>                Block replica on datanode/rack: hadoop12/default-rack is 
> HEALTHY
>            9. We can see that user1 can read the file name in /private, but 
> in the correct case, for example when user1 runs "ls /private", the output is:
>                Permission denied: user=user1, access=READ_EXECUTE, 
> inode="/private":hdfs:hdfs:drwx--{quote}
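Step 7 above works because HDFS block IDs are allocated from a sequential counter, so IDs adjacent to one of user1's own blocks are likely to be valid IDs of other users' files. A minimal illustrative sketch (the function name and the radius parameter are hypothetical, not part of HDFS):

```python
def candidate_block_ids(known_id: int, radius: int = 5) -> list[str]:
    """Enumerate block IDs adjacent to one the user already owns.

    Because block IDs are handed out sequentially, neighbours of a
    known ID (e.g. blk_1073741825) often belong to other users' files.
    """
    # Skip the user's own ID; keep the blk_ prefix used by HDFS.
    return [f"blk_{known_id + d}" for d in range(-radius, radius + 1) if d != 0]
```

Each candidate could then be passed to `hdfs fsck -blockId` as in step 7 to learn the owning file's path.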






[jira] [Comment Edited] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255954#comment-17255954
 ] 

lujie edited comment on HDFS-15752 at 12/29/20, 12:19 PM:
--

This issue has been fixed by the earlier patch 
https://issues.apache.org/jira/browse/HDFS-15717. 

The fix can also be found at [https://github.com/apache/hadoop/pull/2529].

[~ayushtkn], thanks for the clear explanation!

 


was (Author: xiaoheipangzi):
[~ayushtkn]

I think we need to fix it in the other versions as soon as possible.




[jira] [Updated] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Description: 
Attaching my reproduction steps to show why we need to prevent this.
{quote}reproduce step
 # Log in as one user, in our case the superuser.
 # hadoop fs -mkdir /private
 # hadoop fs -chmod 700 /private
 # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
 # hadoop fs -chmod 700 /private/file_name_sensitive.txt   #(the names of 
files in /private can be a company name, bank name, customer's name, or other 
sensitive information, so we need to chmod /private and the files in it to 700)
 # Log in as a non-admin user, named user1.
 # hdfs fsck -blockId $blockID   # $blockID belongs to 
file_name_sensitive.txt; user1 can infer the block ID from his/her own 
block IDs, or find a suitable one by brute-force search.
 # Check the output:
               Block Id: blk_1073741825
               Block belongs to: 
{color:#ff}/private/file_name_sensitive.txt{color}
               No. of Expected Replica: 3
               No. of live Replica: 2
               No. of excess Replica: 0
               No. of stale Replica: 0
               No. of decommissioned Replica: 0
               No. of decommissioning Replica: 0
               No. of corrupted Replica: 0
               Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
               Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
           9. We can see that user1 can read the file name in /private, but in 
the correct case, for example when user1 runs "ls /private", the output is:
               Permission denied: user=user1, access=READ_EXECUTE, 
inode="/private":hdfs:hdfs:drwx--{quote}



[jira] [Issue Comment Deleted] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Comment: was deleted

(was: [~ayushtkn]

It is something different. I sent an email to Hadoop security but have not 
received any response. Can I forward the email to you? And is 
[ayushsax...@apache.org|mailto:ayushsax...@apache.org] your email address?)




[jira] [Issue Comment Deleted] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Comment: was deleted

(was: [~ayushtkn]

Could you please mark it as a duplicate.)




[jira] [Commented] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255954#comment-17255954
 ] 

lujie commented on HDFS-15752:
--

[~ayushtkn]

I think we need to fix it in the other versions as soon as possible.




[jira] [Resolved] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie resolved HDFS-15752.
--
Resolution: Duplicate




[jira] [Updated] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Description: 
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717.

Reattaching my reproduction steps to show why we need to prevent this.
{quote}reproduce step
 # Log in as one user, in our case the superuser.
 # hadoop fs -mkdir /private
 # hadoop fs -chmod 700 /private
 # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
 # hadoop fs -chmod 700 /private/file_name_sensitive.txt   #(the names of 
files in /private can be a company name, bank name, customer's name, or other 
sensitive information, so we need to chmod /private and the files in it to 700)
 # Log in as a non-admin user, named user1.
 # hdfs fsck -blockId $blockID   # $blockID belongs to 
file_name_sensitive.txt; user1 can infer the block ID from his/her own 
block IDs, or find a suitable one by brute-force search.
 # Check the output:
               Block Id: blk_1073741825
               Block belongs to: 
{color:#ff}/private/file_name_sensitive.txt{color}
               No. of Expected Replica: 3
               No. of live Replica: 2
               No. of excess Replica: 0
               No. of stale Replica: 0
               No. of decommissioned Replica: 0
               No. of decommissioning Replica: 0
               No. of corrupted Replica: 0
               Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
               Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
           9. We can see that user1 can read the file name in /private, but in 
the correct case, for example when user1 runs "ls /private", the output is:
               Permission denied: user=user1, access=READ_EXECUTE, 
inode="/private":hdfs:hdfs:drwx--{quote}



[jira] [Updated] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Description: 
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 

Reattaching my reproduction steps.
{quote}reproduce step
 # Log in as one user, in our case the super user.
 # hadoop fs -mkdir /private
 # hadoop fs -chmod 700 /private
 # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
 # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
files in /private can be company names, bank names, customer names, or other 
sensitive information, so we need to chmod /private and the files in it to 700)
 # Log in as a non-admin user, named user1
 # hdfs fsck -blockId $blockID   # $blockID belongs to file_name_sensitive.txt; 
user1 can infer the block ID from his/her own block IDs. We can also find a 
suitable one by brute-force search.
 # check the output
               Block Id: blk_1073741825
               Block belongs to: 
{color:#ff}/private/file_name_sensitive.txt{color}
               No. of Expected Replica: 3
               No. of live Replica: 2
               No. of excess Replica: 0
               No. of stale Replica: 0
               No. of decommissioned Replica: 0
               No. of decommissioning Replica: 0
               No. of corrupted Replica: 0
               Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
               Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
           9. We can see that user1 can see the file name in /private. But in 
the correct case, for example, when user1 does "ls /private", the output is
               Permission denied: user=user1, access=READ_EXECUTE, 
inode="/private":hdfs:hdfs:drwx--{quote}
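Step 7's inference works because HDFS allocates block IDs sequentially. A minimal Python sketch of the candidate enumeration (the actual probing via `hdfs fsck -blockId` is left out, and the function name and IDs here are purely illustrative):

```python
# Hypothetical sketch of step 7's brute-force search: since block IDs are
# handed out sequentially, a user who knows one of their OWN block IDs can
# probe nearby IDs with `hdfs fsck -blockId blk_<id>` (probing not shown).

def candidate_block_ids(own_block_id: int, radius: int = 5):
    """Yield block IDs near a known one, closest offsets first."""
    for offset in range(1, radius + 1):
        yield own_block_id - offset
        yield own_block_id + offset

# e.g. a user whose own block is blk_1073741830 would try, in order:
probes = [f"blk_{bid}" for bid in candidate_block_ids(1073741830, radius=2)]
print(probes)
```

Each probe that hits an allocated block leaks the owning file's full path, which is what this issue (fixed via HDFS-15717) closes off.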

  was:
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 

Reattaching my reproduction steps.
{quote}reproduce step
 # Log in as one user, in our case the super user.
 # hadoop fs -mkdir /private
 # hadoop fs -chmod 700 /private
 # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
 # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
files in /private can be company names, bank names, customer names, or other 
sensitive information, so we need to chmod /private and the files in it to 700)
 # Log in as a non-admin user, named user1
 # hdfs fsck -blockId $blockID   # $blockID belongs to file_name_sensitive.txt; 
user1 can infer the block ID from his/her own block IDs. We can also find a 
suitable one by brute-force search.
 # check the output
              Block Id: blk_1073741825
              Block belongs to: 
{color:#ff}/private/file_name_sensitive.txt{color}
              No. of Expected Replica: 3
              No. of live Replica: 2
              No. of excess Replica: 0
              No. of stale Replica: 0
              No. of decommissioned Replica: 0
              No. of decommissioning Replica: 0
              No. of corrupted Replica: 0
              Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
              Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
          9. We can see that user1 can see the file name in /private. But in 
the correct case, for example, when user1 does "ls /private", the output is
              Permission denied: user=user1, access=READ_EXECUTE, 
inode="/private":hdfs:hdfs:drwx--{quote}


> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 
> Reattaching my reproduction steps.
> {quote}reproduce step
>  # Log in as one user, in our case the super user.
>  # hadoop fs -mkdir /private
>  # hadoop fs -chmod 700 /private
>  # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
>  # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
> files in /private can be company names, bank names, customer names, or other 
> sensitive information, so we need to chmod /private and the files in it to 700)
>  # Log in as a non-admin user, named user1
>  # hdfs fsck -blockId $blockID   # $blockID belongs to 
> file_name_sensitive.txt; user1 can infer the block ID from his/her own block 
> IDs. We can also find a suitable one by brute-force search.
>  # check the output
>                Block Id: blk_1073741825
>                Block belongs to: 
> {color:#ff}/private/file_name_sensitive.txt{color}
>                No. of Expected Replica: 3
>                No. of live 

[jira] [Commented] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255951#comment-17255951
 ] 

lujie commented on HDFS-15752:
--

[~ayushtkn]

could you please mark it as a duplicate?

> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 
> Reattaching my reproduction steps.
> {quote}reproduce step
>  # Log in as one user, in our case the super user.
>  # hadoop fs -mkdir /private
>  # hadoop fs -chmod 700 /private
>  # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
>  # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
> files in /private can be company names, bank names, customer names, or other 
> sensitive information, so we need to chmod /private and the files in it to 700)
>  # Log in as a non-admin user, named user1
>  # hdfs fsck -blockId $blockID   # $blockID belongs to 
> file_name_sensitive.txt; user1 can infer the block ID from his/her own block 
> IDs. We can also find a suitable one by brute-force search.
>  # check the output
>               Block Id: blk_1073741825
>               Block belongs to: 
> {color:#ff}/private/file_name_sensitive.txt{color}
>               No. of Expected Replica: 3
>               No. of live Replica: 2
>               No. of excess Replica: 0
>               No. of stale Replica: 0
>               No. of decommissioned Replica: 0
>               No. of decommissioning Replica: 0
>               No. of corrupted Replica: 0
>               Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
>               Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
>           9. We can see that user1 can see the file name in /private. But in 
> the correct case, for example, when user1 does "ls /private", the output is
>               Permission denied: user=user1, access=READ_EXECUTE, 
> inode="/private":hdfs:hdfs:drwx--{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Description: 
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 

Reattaching my reproduction steps.
{quote}reproduce step
 # Log in as one user, in our case the super user.
 # hadoop fs -mkdir /private
 # hadoop fs -chmod 700 /private
 # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
 # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
files in /private can be company names, bank names, customer names, or other 
sensitive information, so we need to chmod /private and the files in it to 700)
 # Log in as a non-admin user, named user1
 # hdfs fsck -blockId $blockID   # $blockID belongs to file_name_sensitive.txt; 
user1 can infer the block ID from his/her own block IDs. We can also find a 
suitable one by brute-force search.
 # check the output
              Block Id: blk_1073741825
              Block belongs to: 
{color:#ff}/private/file_name_sensitive.txt{color}
              No. of Expected Replica: 3
              No. of live Replica: 2
              No. of excess Replica: 0
              No. of stale Replica: 0
              No. of decommissioned Replica: 0
              No. of decommissioning Replica: 0
              No. of corrupted Replica: 0
              Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
              Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
          9. We can see that user1 can see the file name in /private. But in 
the correct case, for example, when user1 does "ls /private", the output is
              Permission denied: user=user1, access=READ_EXECUTE, 
inode="/private":hdfs:hdfs:drwx--{quote}

  was:
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 

For the record, I re


> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 
> Reattaching my reproduction steps.
> {quote}reproduce step
>  # Log in as one user, in our case the super user.
>  # hadoop fs -mkdir /private
>  # hadoop fs -chmod 700 /private
>  # echo "data" | hadoop fs -put - /private/file_name_sensitive.txt
>  # hadoop fs -chmod 700 /private/file_name_sensitive.txt   # (the names of 
> files in /private can be company names, bank names, customer names, or other 
> sensitive information, so we need to chmod /private and the files in it to 700)
>  # Log in as a non-admin user, named user1
>  # hdfs fsck -blockId $blockID   # $blockID belongs to 
> file_name_sensitive.txt; user1 can infer the block ID from his/her own block 
> IDs. We can also find a suitable one by brute-force search.
>  # check the output
>               Block Id: blk_1073741825
>               Block belongs to: 
> {color:#ff}/private/file_name_sensitive.txt{color}
>               No. of Expected Replica: 3
>               No. of live Replica: 2
>               No. of excess Replica: 0
>               No. of stale Replica: 0
>               No. of decommissioned Replica: 0
>               No. of decommissioning Replica: 0
>               No. of corrupted Replica: 0
>               Block replica on datanode/rack: hadoop13/default-rack is HEALTHY
>               Block replica on datanode/rack: hadoop12/default-rack is HEALTHY
>           9. We can see that user1 can see the file name in /private. But in 
> the correct case, for example, when user1 does "ls /private", the output is
>               Permission denied: user=user1, access=READ_EXECUTE, 
> inode="/private":hdfs:hdfs:drwx--{quote}






[jira] [Updated] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HDFS-15752:
-
Description: 
It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 

For the record, I re

  was:keep it private now.


> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> It has been fixed as part of https://issues.apache.org/jira/browse/HDFS-15717. 
> For the record, I re






[jira] [Commented] (HDFS-15752) A user can obtain the information of blocks belonging to other users

2020-12-29 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255877#comment-17255877
 ] 

lujie commented on HDFS-15752:
--

[~ayushtkn]

It is something different. I sent an email to Hadoop security, but have not 
received any response. Can I forward the email to you? And is 
[ayushsax...@apache.org|mailto:ayushsax...@apache.org] your email address?

> A user can obtain the information of blocks belonging to other users
> 
>
> Key: HDFS-15752
> URL: https://issues.apache.org/jira/browse/HDFS-15752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Priority: Blocker
>  Labels: fsck
>
> keep it private now.


