[jira] [Work logged] (HDFS-16017) RBF: The getListing method should not overwrite the Listings returned by the NameNode

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16017?focusedWorklogId=594383=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594383
 ]

ASF GitHub Bot logged work on HDFS-16017:
-

Author: ASF GitHub Bot
Created on: 11/May/21 04:22
Start Date: 11/May/21 04:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2993:
URL: https://github.com/apache/hadoop/pull/2993#issuecomment-83353


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
   |||| _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  7s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m  0s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ ||
   | -1 :x: |  unit  |  22m 20s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2993/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 102m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterMountTable |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2993/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2993 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux bc8933dbe030 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 716b53f6009ca2012fedc2deb11c66ed72809b36 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Updated] (HDFS-16007) Deserialization of ReplicaState should avoid throwing ArrayIndexOutOfBoundsException

2021-05-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16007:
-
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.3.

> Deserialization of ReplicaState should avoid throwing 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal to perform serialization and 
> deserialization, which makes it sensitive to the declaration order and can 
> cause issues similar to HDFS-15624.
> To avoid this, either add comments telling later developers not to reorder 
> this enum, or add index checking in the read and getState functions to avoid 
> an index-out-of-bounds error. 
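
To make the suggestion concrete, here is a minimal sketch of such a bounds check. It 
uses a simplified stand-in enum, not the actual HdfsServerConstants.ReplicaState class, 
and is not the change that was committed for this jira:

{code:java}
import java.io.DataInput;
import java.io.IOException;

// Simplified stand-in for ReplicaState, only to illustrate the bounds check
// suggested in the description above.
enum ReplicaState {
  FINALIZED, RBW, RWR, RUR, TEMPORARY;

  private static final ReplicaState[] CACHED_VALUES = values();

  // Look up a state by ordinal, rejecting out-of-range values instead of
  // letting an array access throw ArrayIndexOutOfBoundsException.
  static ReplicaState getState(int v) throws IOException {
    if (v < 0 || v >= CACHED_VALUES.length) {
      throw new IOException("Invalid replica state ordinal: " + v);
    }
    return CACHED_VALUES[v];
  }

  // Deserialize a state from the wire through the checked lookup.
  static ReplicaState read(DataInput in) throws IOException {
    return getState(in.readUnsignedByte());
  }
}
{code}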



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16004) BackupNode and QJournal lack Permission check.

2021-05-10 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342276#comment-17342276
 ] 

Xiaoqiao He commented on HDFS-16004:


Thanks [~shv] for digging up the historical issue and for the detailed comments. 
+1 for not improving BackupNode any further. However, I think we should enhance 
the permission check for requests to the JournalNode, which is widely deployed 
and is at risk without any permission check on requests. What do you think?

> BackupNode and QJournal lack Permission check.
> --
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service 
> Level Authorization for protocols like NamenodeProtocol, DatanodeProtocol and 
> so on.
> But I do not find such authorization for JournalProtocol after reading the 
> code in HDFSPolicyProvider. If we do have it, how can I configure such 
> authorization?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its 
> methods still perform a permission check. Take startCheckpoint in 
> NameNodeRpcServer, which implements NamenodeProtocol, for example:
>  
>     public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>         throws IOException {
>       String operationName = "startCheckpoint";
>       checkNNStartup();
>       namesystem.checkSuperuserPrivilege(operationName);
>       ...
>  
> I found that the methods in BackupNodeRpcServer, which implements 
> JournalProtocol, lack such a permission check. See below:
>  
>     public void startLogSegment(JournalInfo journalInfo, long epoch,
>         long txid) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().namenodeStartedLogSegment(txid);
>     }
>  
>     @Override
>     public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>         int numTxns, byte[] records) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().journal(firstTxId, numTxns, records);
>     }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or missing something. 
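
For illustration only, here is a hedged sketch of what adding the same kind of check to 
one of the quoted BackupNodeRpcServer methods could look like, reusing the 
checkSuperuserPrivilege(operationName) call shown in the startCheckpoint example above. 
This is a sketch against the quoted code, not a committed or proposed patch:

{code:java}
@Override
public void startLogSegment(JournalInfo journalInfo, long epoch,
    long txid) throws IOException {
  namesystem.checkOperation(OperationCategory.JOURNAL);
  // Hypothetical addition mirroring startCheckpoint(): reject callers
  // that lack superuser privilege before the journal is touched.
  namesystem.checkSuperuserPrivilege("startLogSegment");
  verifyJournalRequest(journalInfo);
  getBNImage().namenodeStartedLogSegment(txid);
}
{code}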



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16007) Vulnerabilities found when serializing enum value

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?focusedWorklogId=594379=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594379
 ]

ASF GitHub Bot logged work on HDFS-16007:
-

Author: ASF GitHub Bot
Created on: 11/May/21 03:38
Start Date: 11/May/21 03:38
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2982:
URL: https://github.com/apache/hadoop/pull/2982#issuecomment-837735622


   Merged. Thank you @virajjasani 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594379)
Time Spent: 1.5h  (was: 1h 20m)

> Vulnerabilities found when serializing enum value
> -
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal to perform serialization and 
> deserialization, which makes it sensitive to the declaration order and can 
> cause issues similar to HDFS-15624.
> To avoid this, either add comments telling later developers not to reorder 
> this enum, or add index checking in the read and getState functions to avoid 
> an index-out-of-bounds error. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16007) Vulnerabilities found when serializing enum value

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?focusedWorklogId=594378=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594378
 ]

ASF GitHub Bot logged work on HDFS-16007:
-

Author: ASF GitHub Bot
Created on: 11/May/21 03:38
Start Date: 11/May/21 03:38
Worklog Time Spent: 10m 
  Work Description: aajisaka merged pull request #2982:
URL: https://github.com/apache/hadoop/pull/2982


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594378)
Time Spent: 1h 20m  (was: 1h 10m)

> Vulnerabilities found when serializing enum value
> -
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal to perform serialization and 
> deserialization, which makes it sensitive to the declaration order and can 
> cause issues similar to HDFS-15624.
> To avoid this, either add comments telling later developers not to reorder 
> this enum, or add index checking in the read and getState functions to avoid 
> an index-out-of-bounds error. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16007) Deserialization of ReplicaState should avoid throwing ArrayIndexOutOfBoundsException

2021-05-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16007:
-
Summary: Deserialization of ReplicaState should avoid throwing 
ArrayIndexOutOfBoundsException  (was: Vulnerabilities found when serializing 
enum value)

> Deserialization of ReplicaState should avoid throwing 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal to perform serialization and 
> deserialization, which makes it sensitive to the declaration order and can 
> cause issues similar to HDFS-15624.
> To avoid this, either add comments telling later developers not to reorder 
> this enum, or add index checking in the read and getState functions to avoid 
> an index-out-of-bounds error. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15973) RBF: Add permission check before doing router federation rename.

2021-05-10 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342263#comment-17342263
 ] 

Jinglun commented on HDFS-15973:


Hi [~zhengzhuobinzzb] [~elgoiri], do you have time to help review v05? 
Thanks very much!

> RBF: Add permission check before doing router federation rename.
> -
>
> Key: HDFS-15973
> URL: https://issues.apache.org/jira/browse/HDFS-15973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15973.001.patch, HDFS-15973.002.patch, 
> HDFS-15973.003.patch, HDFS-15973.004.patch, HDFS-15973.005.patch
>
>
> The router federation rename lacks a permission check. It is a security 
> issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16004) BackupNode and QJournal lack Permission check.

2021-05-10 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342159#comment-17342159
 ] 

lujie edited comment on HDFS-16004 at 5/11/21, 1:12 AM:


Hey guys, closing it as won't fix is better, as it is really a problem of 
BackupNode. But I am still concerned whether somebody still uses BackupNode, and 
should we let them know about this bug through this issue?

But I still think we need to check {{QJournalProtocol}}!


was (Author: xiaoheipangzi):
Hey guys, closing it as won't fix is better, as it is really a problem of 
BackupNode. But I am still concerned whether somebody still uses BackupNode, and 
should we let them know about this bug through this issue?

So don't {{QJournalProtocol}} and InterQJournal need to be checked?

> BackupNode and QJournal lack Permission check.
> --
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service 
> Level Authorization for protocols like NamenodeProtocol, DatanodeProtocol and 
> so on.
> But I do not find such authorization for JournalProtocol after reading the 
> code in HDFSPolicyProvider. If we do have it, how can I configure such 
> authorization?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its 
> methods still perform a permission check. Take startCheckpoint in 
> NameNodeRpcServer, which implements NamenodeProtocol, for example:
>  
>     public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>         throws IOException {
>       String operationName = "startCheckpoint";
>       checkNNStartup();
>       namesystem.checkSuperuserPrivilege(operationName);
>       ...
>  
> I found that the methods in BackupNodeRpcServer, which implements 
> JournalProtocol, lack such a permission check. See below:
>  
>     public void startLogSegment(JournalInfo journalInfo, long epoch,
>         long txid) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().namenodeStartedLogSegment(txid);
>     }
>  
>     @Override
>     public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>         int numTxns, byte[] records) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().journal(firstTxId, numTxns, records);
>     }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or missing something. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16004) BackupNode and QJournal lack Permission check.

2021-05-10 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342159#comment-17342159
 ] 

lujie edited comment on HDFS-16004 at 5/11/21, 1:10 AM:


Hey guys, closing it as won't fix is better, as it is really a problem of 
BackupNode. But I am still concerned whether somebody still uses BackupNode, and 
should we let them know about this bug through this issue?

So don't {{QJournalProtocol}} and InterQJournal need to be checked?


was (Author: xiaoheipangzi):
Hey guys, closing it as won't fix is better, as it is really a problem of 
BackupNode.

So don't {{QJournalProtocol}} and InterQJournal need to be checked?

> BackupNode and QJournal lack Permission check.
> --
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service 
> Level Authorization for protocols like NamenodeProtocol, DatanodeProtocol and 
> so on.
> But I do not find such authorization for JournalProtocol after reading the 
> code in HDFSPolicyProvider. If we do have it, how can I configure such 
> authorization?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its 
> methods still perform a permission check. Take startCheckpoint in 
> NameNodeRpcServer, which implements NamenodeProtocol, for example:
>  
>     public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>         throws IOException {
>       String operationName = "startCheckpoint";
>       checkNNStartup();
>       namesystem.checkSuperuserPrivilege(operationName);
>       ...
>  
> I found that the methods in BackupNodeRpcServer, which implements 
> JournalProtocol, lack such a permission check. See below:
>  
>     public void startLogSegment(JournalInfo journalInfo, long epoch,
>         long txid) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().namenodeStartedLogSegment(txid);
>     }
>  
>     @Override
>     public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>         int numTxns, byte[] records) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().journal(firstTxId, numTxns, records);
>     }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or missing something. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16004) BackupNode and QJournal lack Permission check.

2021-05-10 Thread lujie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342159#comment-17342159
 ] 

lujie commented on HDFS-16004:
--

Hey guys, closing it as won't fix is better, as it is really a problem of 
BackupNode.

So don't {{QJournalProtocol}} and InterQJournal need to be checked?

> BackupNode and QJournal lack Permission check.
> --
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service 
> Level Authorization for protocols like NamenodeProtocol, DatanodeProtocol and 
> so on.
> But I do not find such authorization for JournalProtocol after reading the 
> code in HDFSPolicyProvider. If we do have it, how can I configure such 
> authorization?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its 
> methods still perform a permission check. Take startCheckpoint in 
> NameNodeRpcServer, which implements NamenodeProtocol, for example:
>  
>     public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>         throws IOException {
>       String operationName = "startCheckpoint";
>       checkNNStartup();
>       namesystem.checkSuperuserPrivilege(operationName);
>       ...
>  
> I found that the methods in BackupNodeRpcServer, which implements 
> JournalProtocol, lack such a permission check. See below:
>  
>     public void startLogSegment(JournalInfo journalInfo, long epoch,
>         long txid) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().namenodeStartedLogSegment(txid);
>     }
>  
>     @Override
>     public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>         int numTxns, byte[] records) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().journal(firstTxId, numTxns, records);
>     }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or missing something. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-05-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15997:
--
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently dfsadmin -provisionSnapshotTrash only supports creating trash roots 
> one by one.
> This jira adds an -all argument to create the trash root on ALL snapshottable dirs.
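
For example, the intended usage would presumably be something like the following 
(illustrative only; the exact syntax is defined by the patch for this jira):

    [hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -provisionSnapshotTrash -all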



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk

2021-05-10 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342112#comment-17342112
 ] 

Hadoop QA commented on HDFS-4114:
-

| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| 0 | reexec | 0m 0s | | Docker mode activated. |
| -1 | patch | 0m 12s | | HDFS-4114 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-4114 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12697947/h4114_20150210.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/600/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Remove the BackupNode and CheckpointNode from trunk
> ---
>
> Key: HDFS-4114
> URL: https://issues.apache.org/jira/browse/HDFS-4114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, 
> HDFS-4114.patch, h4114_20150210.patch
>
>
> Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
> BackupNode and CheckpointNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16004) BackupNode and QJournal lack Permission check.

2021-05-10 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342108#comment-17342108
 ] 

Konstantin Shvachko commented on HDFS-16004:


Hey guys. I wouldn't worry about {{BackupNode}}. It was supposed to be removed 
as redundant in HDFS-4114.
Same with {{JournalProtocol}}, as it is used exclusively by {{BackupNode}}.
This is old code that is not supposed to be used. There were some 
controversial issues about removing {{BackupNode}}, but I don't think they 
still stand.
{{QJournalProtocol}} is the one to be used with QJM.
If that is fine, then we can close this issue as won't fix or not a problem.

> BackupNode and QJournal lack Permission check.
> --
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service 
> Level Authorization for protocols like NamenodeProtocol, DatanodeProtocol and 
> so on.
> But I do not find such authorization for JournalProtocol after reading the 
> code in HDFSPolicyProvider. If we do have it, how can I configure such 
> authorization?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its 
> methods still perform a permission check. Take startCheckpoint in 
> NameNodeRpcServer, which implements NamenodeProtocol, for example:
>  
>     public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>         throws IOException {
>       String operationName = "startCheckpoint";
>       checkNNStartup();
>       namesystem.checkSuperuserPrivilege(operationName);
>       ...
>  
> I found that the methods in BackupNodeRpcServer, which implements 
> JournalProtocol, lack such a permission check. See below:
>  
>     public void startLogSegment(JournalInfo journalInfo, long epoch,
>         long txid) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().namenodeStartedLogSegment(txid);
>     }
>  
>     @Override
>     public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>         int numTxns, byte[] records) throws IOException {
>       namesystem.checkOperation(OperationCategory.JOURNAL);
>       verifyJournalRequest(journalInfo);
>       getBNImage().journal(firstTxId, numTxns, records);
>     }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or missing something. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=594201=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594201
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 10/May/21 19:41
Start Date: 10/May/21 19:41
Worklog Time Spent: 10m 
  Work Description: smengcl merged pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594201)
Time Spent: 1h  (was: 50m)

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently dfsadmin -provisionSnapshotTrash only supports creating trash roots 
> one by one.
> This jira adds an -all argument to create the trash root on ALL snapshottable dirs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=594200=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594200
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 10/May/21 19:40
Start Date: 10/May/21 19:40
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958#issuecomment-837215795


   The failing UTs are unrelated to this change.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594200)
Time Spent: 50m  (was: 40m)

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently dfsadmin -provisionSnapshotTrash only supports creating trash roots 
> one by one.
> This jira adds an -all argument to create the trash root on ALL snapshottable dirs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2021-05-10 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341089#comment-17341089
 ] 

Konstantin Shvachko edited comment on HDFS-14703 at 5/10/21, 7:30 PM:
--

Updated the POC patches to current trunk. There were indeed some missing parts 
in the first patch.
 See [^003-partitioned-inodeMap-POC.tar.gz].

Also created a remote branch called {{fgl}} in hadoop repo with both patches 
applied to current trunk. [~xinglin] is working on adding {{create()}} call to 
FGL. Right now only {{mkdirs()}} is supported.


was (Author: shv):
Updated the POC patches to current trunk. There were indeed some missing parts 
in the first patch.
 See [^003-partitioned-inodeMap-POC.tar.gz].

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, 
> 002-partitioned-inodeMap-POC.tar.gz, 003-partitioned-inodeMap-POC.tar.gz, 
> NameNode Fine-Grained Locking.pdf, NameNode Fine-Grained Locking.pdf
>
>
> We aim to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions, each having a separate lock. This is intended to 
> improve the performance of NameNode write operations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16008) RBF: Tool to initialize ViewFS Mapping to Router

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16008?focusedWorklogId=594119=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594119
 ]

ASF GitHub Bot logged work on HDFS-16008:
-

Author: ASF GitHub Bot
Created on: 10/May/21 17:09
Start Date: 10/May/21 17:09
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2981:
URL: https://github.com/apache/hadoop/pull/2981#discussion_r629530054



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
##
@@ -241,6 +241,22 @@ Mount table permission can be set by following command:
 
 The option mode is UNIX-style permissions for the mount table. Permissions are 
specified in octal, e.g. 0755. By default, this is set to 0755.
 
+ Init ViewFs To Router
+Router supports initializing the ViewFS mount point to the Router. The mapping 
directory protocol of ViewFS must be HDFS, and the initializer only supports 
one-to-one mapping.
+
+For example, use the following viewfs to configure the initial mount table to 
the router.
+
+
+  <property>
+    <name>fs.viewfs.mounttable.ClusterX.link./data</name>
+    <value>hdfs://nn1-clusterx.example.com:8020/data</value>
+  </property>
+
+
+The ViewFS mount table can be initialized to the Router by using the following 
command:
+
+[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -initViewFsToMountTable ClusterX

Review comment:
   Correct.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594119)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF: Tool to initialize ViewFS Mapping to Router
> 
>
> Key: HDFS-16008
> URL: https://issues.apache.org/jira/browse/HDFS-16008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This is a tool for initializing the ViewFS mapping to the Router.
> Some companies are currently migrating from ViewFS to the Router; I think they 
> need this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16017) RBF: The getListing method should not overwrite the Listings returned by the NameNode

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16017?focusedWorklogId=594094=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-594094
 ]

ASF GitHub Bot logged work on HDFS-16017:
-

Author: ASF GitHub Bot
Created on: 10/May/21 16:31
Start Date: 10/May/21 16:31
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2993:
URL: https://github.com/apache/hadoop/pull/2993#discussion_r629504887



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -513,6 +513,44 @@ public void testProxyListFilesLargeDir() throws 
IOException {
 }
   }
 
+  @Test
+  public void testProxyListFilesWithOverwrite() throws IOException {
+    // Create a parent point as well as a subfolder mount
+    // /parent
+    //    ns0 -> /parent
+    // /parent/sub1
+    //    ns0 -> /parent/sub1
+    String parent = "/parent";
+    String sub1 = parent + "/sub1";
+    Path parentPath = new Path(parent);
+    // Add mount point
+    for (RouterContext rc : cluster.getRouters()) {
+      MockResolver resolver =
+          (MockResolver) rc.getRouter().getSubclusterResolver();
+      resolver.addLocation(parent, ns, parent);
+      resolver.addLocation(sub1, ns, sub1);
+    }
+    // The sub1 folder created is the same as the sub1 mount point directory
+    nnFS.mkdirs(new Path(sub1));
+
+    FileStatus[] routerFsResult = routerFS.listStatus(parentPath);
+    FileStatus[] nnFsResult = nnFS.listStatus(parentPath);
+    // Checking
+    assertEquals(1, routerFsResult.length);
+    assertEquals(1, nnFsResult.length);
+
+    FileStatus nnFileStatus = nnFsResult[0];
+    FileStatus routerFileStatus = routerFsResult[0];
+
+    assert nnFileStatus != null;

Review comment:
   Let's do a unit test assert instead.
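
   For reference, the suggestion maps to JUnit-style assertions such as the following 
   (a sketch, assuming the assertion methods already imported by this test class):

       // Fails the test regardless of the -ea JVM flag, unlike a bare Java assert.
       assertNotNull(nnFileStatus);
       assertNotNull(routerFileStatus);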




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 594094)
Time Spent: 0.5h  (was: 20m)

> RBF: The getListing method should not overwrite the Listings returned by the 
> NameNode
> -
>
> Key: HDFS-16017
> URL: https://issues.apache.org/jira/browse/HDFS-16017
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.1
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Regarding the getListing method, when a directory returned by the NameNode is also a 
> mount point in the MountTable, the listing returned by the NameNode should prevail.
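
A hedged sketch of the merge rule described above; the helper buildMountPointStatus() 
and the surrounding variable names are illustrative and are not the actual 
RouterClientProtocol code:

{code:java}
// Start from the NameNode listing so its entries are never overwritten.
Map<String, HdfsFileStatus> merged = new TreeMap<>();
for (HdfsFileStatus status : nnListing) {
  merged.put(status.getLocalName(), status);
}
// Add a synthetic mount-point child only when the NameNode did not
// already return a real entry with the same name.
for (String child : mountPointChildren) {
  merged.putIfAbsent(child, buildMountPointStatus(child));
}
{code}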



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16006) RBF: TestRouterFederationRename is flaky

2021-05-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-16006:
---
Summary: RBF: TestRouterFederationRename is flaky  (was: 
TestRouterFederationRename is flaky)

> RBF: TestRouterFederationRename is flaky
> 
>
> Key: HDFS-16006
> URL: https://issues.apache.org/jira/browse/HDFS-16006
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
>
>
> {quote}
> [ERROR] Errors: 
> [ERROR]   
> TestRouterFederationRename.testCounter:440->Object.wait:502->Object.wait:-2 ? 
> TestTimedOut
> [ERROR]   TestRouterFederationRename.testSetup:145 ? Remote The directory 
> /src cannot be...
> [ERROR]   TestRouterFederationRename.testSetup:145 ? Remote The directory 
> /src cannot be...
> {quote}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2970/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16021) heap-use-after-free in hdfsThreadDestructor

2021-05-10 Thread Jeremy Coulon (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341967#comment-17341967
 ] 

Jeremy Coulon commented on HDFS-16021:
--

Side-note:

I believe that HDFS-15270 was originally opened on the OpenJ9 bug tracker by 
someone from my organisation. The proposed fix doesn't work!

 

Both *env* and **env* are invalid but not NULL.

> heap-use-after-free in hdfsThreadDestructor
> ---
>
> Key: HDFS-16021
> URL: https://issues.apache.org/jira/browse/HDFS-16021
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 2.9.2, 3.3.0
>Reporter: Jeremy Coulon
>Priority: Major
> Attachments: fix-hdfsThreadDestructor.patch, hdfs-asan.log
>
>
> Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 
>  
> We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
> long time. Crash is almost systematic with OpenJ9 and more sporadic with 
> Hotspot JVM.
>  
> I finally got to the root cause of this bug thanks to AddressSanitizer. This 
> is quite difficult to set up because you need to rebuild the test case, 
> hadoop, and openjdk-hotspot >= 13 with specific compiler options.
>  
> See hdfs-asan.log for details.
>  
> *Analysis:*
> In hdfsThreadDestructor(), you are making several JNI calls in order to 
> detach the thread from the JVM:
>  
> {code:java}
> /* Detach the current thread from the JVM */
> if (env) {
>   ret = (*env)->GetJavaVM(env, );
>   /*
>*  More code here...
>*/
> }{code}
> This is fine if the thread was created in the C/C++ world.
>  
> However if the thread was created in the Java world, this is absolutely 
> wrong. When a Java thread terminates, the JVM deallocates some memory which 
> contains (among other things) the thread specific JNIEnv. Then 
> hdfsThreadDestructor() is called. The *env* variable is not NULL but points 
> to memory which was just released. This is heap-use-after-free detected by 
> ASan.
>  
> I have been working on a patch that fixes the issue (see attachment).
>  
> Here is the idea:
>  * In hdfsThreadDestructor(), we need to know if the thread was created by 
> Java or C/C++. If it was created by C/C++, we should make JNI calls in order 
> to detach the current thread. If it was created by Java, we don't need to 
> make any JNI call: thread is already detached.
>  * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
> C/C++. It can be done by calling *vm->GetEnv()*. Then we store this 
> information inside ThreadLocalState.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16021) heap-use-after-free in hdfsThreadDestructor

2021-05-10 Thread Jeremy Coulon (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Coulon updated HDFS-16021:
-
Description: 
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally got to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, 
hadoop, and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, );
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.

  was:
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally got to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, 
hadoop, and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, );
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.


> heap-use-after-free in hdfsThreadDestructor
> ---
>
> Key: HDFS-16021
> URL: https://issues.apache.org/jira/browse/HDFS-16021
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 2.9.2, 3.3.0
>Reporter: Jeremy Coulon
>Priority: Major
> Attachments: fix-hdfsThreadDestructor.patch, hdfs-asan.log
>
>
> Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 
>  
> We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
> long time. Crash is almost systematic with OpenJ9 and more sporadic with 
> Hotspot JVM.
>  
> I finally got to the root cause of this bug thanks to AddressSanitizer. This 
> is quite difficult to set up because you need to rebuild the test case, 
> hadoop, and openjdk-hotspot >= 13 with specific compiler options.
>  
> See hdfs-asan.log for details.
>  
> *Analysis:*
> In hdfsThreadDestructor(), you are making several JNI calls in order to 
> detach the thread from the JVM:
>  
> {code:java}
> /* Detach the current thread from the JVM */
> if (env) {
>   ret = (*env)->GetJavaVM(env, );
>   /*
>*  More code here...
>*/
> }{code}
> This is fine if the thread was created in the 

[jira] [Updated] (HDFS-16021) heap-use-after-free in hdfsThreadDestructor

2021-05-10 Thread Jeremy Coulon (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Coulon updated HDFS-16021:
-
Description: 
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally got to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, 
hadoop, and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, );
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.

  was:
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally got to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, 
hadoop, and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, );
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.


> heap-use-after-free in hdfsThreadDestructor
> ---
>
> Key: HDFS-16021
> URL: https://issues.apache.org/jira/browse/HDFS-16021
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 2.9.2, 3.3.0
>Reporter: Jeremy Coulon
>Priority: Major
> Attachments: fix-hdfsThreadDestructor.patch, hdfs-asan.log
>
>
> Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 
>  
> We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
> long time. Crash is almost systematic with OpenJ9 and more sporadic with 
> Hotspot JVM.
>  
> I finally got to the root cause of this bug thanks to AddressSanitizer. This 
> is quite difficult to set up because you need to rebuild the test case, 
> hadoop, and openjdk-hotspot >= 13 with specific compiler options.
>  
> See hdfs-asan.log for details.
>  
> *Analysis:*
> In hdfsThreadDestructor(), you are making several JNI calls in order to 
> detach the thread from the JVM:
>  
> {code:java}
> /* Detach the current thread from the JVM */
> if (env) {
>   ret = (*env)->GetJavaVM(env, );
>   /*
>*  More code here...
>*/
> }{code}
> This is fine if the thread was created in the 

[jira] [Updated] (HDFS-16021) heap-use-after-free in hdfsThreadDestructor

2021-05-10 Thread Jeremy Coulon (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Coulon updated HDFS-16021:
-
Description: 
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally got to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, 
hadoop, and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, );
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.

  was:
Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. Crash is almost systematic with OpenJ9 and more sporadic with 
Hotspot JVM.

 

I finally went to the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to setup because you need to rebuild both the test-case, 
hadoop and openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, &vm);
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was create by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.


> heap-use-after-free in hdfsThreadDestructor
> ---
>
> Key: HDFS-16021
> URL: https://issues.apache.org/jira/browse/HDFS-16021
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 2.9.2, 3.3.0
>Reporter: Jeremy Coulon
>Priority: Major
> Attachments: fix-hdfsThreadDestructor.patch, hdfs-asan.log
>
>
> Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 
>  
> We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
> long time. Crash is almost systematic with OpenJ9 and more sporadic with 
> Hotspot JVM.
>  
> I finally went to the root cause of this bug thanks to AddressSanitizer. This 
> is quite difficult to setup because you need to rebuild both the test-case, 
> hadoop and openjdk-hotspot >= 13 with specific compiler options.
>  
> See hdfs-asan.log for details.
>  
> *Analysis:*
> In hdfsThreadDestructor(), you are making several JNI calls in order to 
> detach the thread from the JVM:
>  
> {code:java}
> /* Detach the current thread from the JVM */
> if (env) {
>   ret = (*env)->GetJavaVM(env, &vm);
>   /*
>*  More code here...
>*/
> }{code}
> This is fine if the thread was created in the 

[jira] [Created] (HDFS-16021) heap-use-after-free in hdfsThreadDestructor

2021-05-10 Thread Jeremy Coulon (Jira)
Jeremy Coulon created HDFS-16021:


 Summary: heap-use-after-free in hdfsThreadDestructor
 Key: HDFS-16021
 URL: https://issues.apache.org/jira/browse/HDFS-16021
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0, 2.9.2, 2.9.1
Reporter: Jeremy Coulon
 Attachments: fix-hdfsThreadDestructor.patch, hdfs-asan.log

Related to HDFS-12628 HDFS-13585 HDFS-14488 HDFS-15270 

 

We have experienced crashes located in libhdfs hdfsThreadDestructor() for a 
long time. The crash is almost systematic with OpenJ9 and more sporadic with the 
HotSpot JVM.

 

I finally tracked down the root cause of this bug thanks to AddressSanitizer. This 
is quite difficult to set up because you need to rebuild the test case, Hadoop and 
openjdk-hotspot >= 13 with specific compiler options.

 

See hdfs-asan.log for details.

 

*Analysis:*

In hdfsThreadDestructor(), you are making several JNI calls in order to detach 
the thread from the JVM:

 
{code:java}
/* Detach the current thread from the JVM */
if (env) {
  ret = (*env)->GetJavaVM(env, &vm);
  /*
   *  More code here...
   */
}{code}
This is fine if the thread was created in the C/C++ world.

 

However if the thread was created in the Java world, this is absolutely wrong. 
When a Java thread terminates, the JVM deallocates some memory which contains 
(among other things) the thread specific JNIEnv. Then hdfsThreadDestructor() is 
called. The *env* variable is not NULL but points to memory which was just 
released. This is heap-use-after-free detected by ASan.

 

I have been working on a patch that fixes the issue (see attachment).

 

Here is the idea:
 * In hdfsThreadDestructor(), we need to know if the thread was created by Java 
or C/C++. If it was created by C/C++, we should make JNI calls in order to 
detach the current thread. If it was created by Java, we don't need to make any 
JNI call: the thread is already detached.
 * In getGlobalJNIEnv(), we can detect if the thread was created by Java or 
C/C++. It can be done by calling *vm->GetEnv()*. Then we store this information 
inside ThreadLocalState.
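
For illustration, a minimal sketch of that idea is shown below. This is not the 
attached patch; the jvmOwnsThread field and the exact ThreadLocalState layout are 
assumptions made only for the example.
{code}
#include <jni.h>
#include <stdlib.h>

/* Thread-local state kept by libhdfs (sketch; field names are illustrative). */
struct ThreadLocalState {
    JNIEnv *env;
    int jvmOwnsThread; /* 1 if the JVM created this thread, 0 if attached from C/C++ */
};

/* In getGlobalJNIEnv() (sketch): if vm->GetEnv() returns JNI_OK the thread is
 * already known to the JVM (created by Java), so jvmOwnsThread is set to 1;
 * if it returns JNI_EDETACHED, libhdfs attaches the thread itself and sets 0. */

static void hdfsThreadDestructor(void *v)
{
    struct ThreadLocalState *state = (struct ThreadLocalState *)v;
    JavaVM *vm;

    /* Only threads attached from C/C++ still have a valid JNIEnv here and need
     * to be detached; for Java-created threads the JVM has already released the
     * JNIEnv, so no JNI call must be made. */
    if (state && !state->jvmOwnsThread && state->env) {
        if ((*state->env)->GetJavaVM(state->env, &vm) == JNI_OK) {
            (*vm)->DetachCurrentThread(vm);
        }
    }
    free(state);
}
{code}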



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16020) DatanodeReportType should add LIVE_NOT_DECOMMISSIONING type

2021-05-10 Thread lei w (Jira)
lei w created HDFS-16020:


 Summary: DatanodeReportType should add LIVE_NOT_DECOMMISSIONING 
type
 Key: HDFS-16020
 URL: https://issues.apache.org/jira/browse/HDFS-16020
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer  mover, namanode
Reporter: lei w


The Balancer builds its list of cluster nodes with the 
getDatanodeStorageReport(DatanodeReportType.LIVE) method. If the user does not 
specify an exclude node list, the Balancer may migrate data to DataNodes that are 
in the decommissioning state. Should we filter out nodes in the decommissioning 
state with a new DatanodeReportType (LIVE_NOT_DECOMMISSIONING), regardless of 
whether the user specifies an exclude node list?
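
Until such a report type exists, the intended semantics can be approximated on the 
caller side. A rough sketch (not the actual Balancer code; the class and method 
names are only illustrative):
{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;

public final class LiveNotDecommissioningFilter {
  private LiveNotDecommissioningFilter() {
  }

  /**
   * Keep only live DataNodes that are neither decommissioned nor currently
   * decommissioning, i.e. what a LIVE_NOT_DECOMMISSIONING report type would
   * be expected to return.
   */
  public static List<DatanodeStorageReport> filter(DatanodeStorageReport[] liveReports) {
    List<DatanodeStorageReport> result = new ArrayList<>();
    for (DatanodeStorageReport report : liveReports) {
      DatanodeInfo dn = report.getDatanodeInfo();
      if (!dn.isDecommissioned() && !dn.isDecommissionInProgress()) {
        result.add(report);
      }
    }
    return result;
  }
}
{code}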



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16015) RBF: Read and write data through router select dn according to real user ip

2021-05-10 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341723#comment-17341723
 ] 

zhu edited comment on HDFS-16015 at 5/10/21, 2:13 PM:
--

[~ayushtkn], [~elgoiri], [~hexiaoqiao] Looking forward to your comments.


was (Author: zhuxiangyi):
[~elgoiri] ,[~elgoiri] ,[~hexiaoqiao] Looking forward your comments.

> RBF: Read and write data through router select dn according to real user ip
> ---
>
> Key: HDFS-16015
> URL: https://issues.apache.org/jira/browse/HDFS-16015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Read and write data through router select dn according to real user ip.
> relates to HDFS-13248
> *General train of thought:*
> 1. RouterRpcClient set the real user ip to CallerContext.
> 2. The Server processes the Rpc to check whether the CallerContext contains 
> the real user ip field, if it contains the real user IP and verify the 
> legitimacy of the ip, if the verification passes, set the real IP to the Call.
> 3. Modify the getClientMachine method, if there is a real user ip in the 
> Call, return the real user ip, if it does not return the server to monitor 
> the IP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15714) HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-05-10 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341922#comment-17341922
 ] 

Bhavik Patel commented on HDFS-15714:
-

[~PhiloHe] Thank you for putting it all together in a single PR, including the 
design document.

It looks clean and each operation is explained very well in detail :)

I am going through the design doc and checking the corresponding code along the 
way. In the *Read Mount* section, it is mentioned that
{code:java}
During mounting a
storage in readOnly mode, metadata will be pulled from the remote by S3 client. 
And they will
be persisted to HDFS namespace after the conversion
{code}
Can you please help me figure out the code section where the *metadata* is 
actually pulled from the S3 client, and which information the metadata includes?

> HDFS Provided Storage Read/Write Mount Support On-the-fly
> -
>
> Key: HDFS-15714
> URL: https://issues.apache.org/jira/browse/HDFS-15714
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 3.4.0
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15714-01.patch, 
> HDFS_Provided_Storage_Design-V1.pdf, HDFS_Provided_Storage_Performance-V1.pdf
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> HDFS Provided Storage (PS) is a feature to tier HDFS over other file systems. 
> In HDFS-9806, PROVIDED storage type was introduced to HDFS. Through 
> configuring external storage with PROVIDED tag for DataNode, user can enable 
> application to access data stored externally from HDFS side. However, there 
> are two issues that need to be addressed. Firstly, mounting external storage 
> on-the-fly, namely dynamic mount, is lacking. It is necessary to get it 
> supported to flexibly combine HDFS with an external storage at runtime. 
> Secondly, PS write is not supported by current HDFS. But in real 
> applications, it is common to transfer data bi-directionally for read/write 
> between HDFS and external storage.
> Through this JIRA, we are presenting our work for PS write support and 
> dynamic mount support for both read & write. Please note in the community 
> several JIRAs have been filed for these topics. Our work is based on these 
> previous community work, with new design & implementation to support called 
> writeBack mount and enable admin to add any mount on-the-fly. We appreciate 
> those folks in the community for their great contribution! See their pending 
> JIRAs: HDFS-14805 & HDFS-12090.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16019) HDFS: Inode CheckPoint

2021-05-10 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341865#comment-17341865
 ] 

zhu commented on HDFS-16019:


[~weichiu] Thanks for your comments. 

This improvement is different from NameNode Analytics. The goal is to omit the 
"image processing" part of the OIV tool and use the checkpoint to generate a 
text file that can be used directly for analysis. It does not provide complex 
image analysis functions. In addition, it does not require an additional 
server; we just enable it on the standby node.

> HDFS: Inode CheckPoint 
> ---
>
> Key: HDFS-16019
> URL: https://issues.apache.org/jira/browse/HDFS-16019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>
> *background*
> The OIV IMAGE analysis tool has brought us many benefits, such as file size 
> distribution, cold and hot data, abnormal growth directory analysis. But in 
> my opinion he is too slow, especially the big IMAGE.
> After Hadoop 2.3, the format of IMAGE has changed. For OIV tools, it is 
> necessary to load the entire IMAGE into the memory to output the inode 
> information into a text format. For large IMAGE, this process takes a long 
> time and consumes more resources and requires a large memory machine to 
> analyze.
> Although, HDFS provides the dfs.namenode.legacy-oiv-image.dir parameter to 
> get the old version of IMAGE through CheckPoint. The old IMAGE parsing does 
> not require too many resources, but we need to parse the IMAGE again through 
> the hdfs oiv_legacy command to get the text information of the Inode, which 
> is relatively time-consuming.
> **
> *Solution*
> We can ask the standby node to periodically check the Inode and serialize the 
> Inode in text mode. For OutPut, different FileSystems can be used according 
> to the configuration, such as the local file system or the HDFS file system.
> The advantage of providing HDFS file system is that we can analyze Inode 
> directly through spark/hive. I think the block information corresponding to 
> the Inode may not be of much use. The size of the file and the number of 
> copies are more useful to us.
> In addition, the sequential output of the Inode is not necessary. We can 
> speed up the CheckPoint for the Inode, and use the partition for the 
> serialized Inode to output different files. Use a production thread to put 
> Inode in the Queue, and use multi-threaded consumption Queue to write to 
> different partition files. For output files, compression can also be used to 
> reduce disk IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341854#comment-17341854
 ] 

Ayush Saxena commented on HDFS-16003:
-

Committed to trunk and branch-3.3

Thanx [~lei w] for the contribution, [~hexiaoqiao] and [~zhuqi] for the 
reviews!!!

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.
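
The shape of the proposed change is simply a debug-level guard around the 
traversal. A minimal, self-contained sketch (strings stand in for Block objects; 
this is not the actual BlockManager code):
{code:java}
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugGuardExample {
  private static final Logger LOG = LoggerFactory.getLogger(DebugGuardExample.class);

  /** Only traverse the (possibly large) list when debug logging is enabled. */
  static void logInvalidatedBlocks(List<String> invalidatedBlocks, String node) {
    if (LOG.isDebugEnabled()) {
      for (String block : invalidatedBlocks) {
        LOG.debug("BLOCK* processReport: {} on node {} does not belong to any file",
            block, node);
      }
    }
  }
}
{code}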



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-16003:

Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341852#comment-17341852
 ] 

Ayush Saxena commented on HDFS-16003:
-

The test failures aren't related.

+1

 

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341839#comment-17341839
 ] 

Hadoop QA commented on HDFS-16003:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
5s{color} | {color:blue}{color} | {color:blue} The patch file was not named 
according to hadoop's naming conventions. Please see 
https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 20m 
45s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m  
1s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green}{color} | {color:green} patch has no errors when 

[jira] [Commented] (HDFS-16019) HDFS: Inode CheckPoint

2021-05-10 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341836#comment-17341836
 ] 

Wei-Chiu Chuang commented on HDFS-16019:


Sounds very similar to what NameNode Analytics provides? [~zero45]  HDFS-15763

> HDFS: Inode CheckPoint 
> ---
>
> Key: HDFS-16019
> URL: https://issues.apache.org/jira/browse/HDFS-16019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>
> *background*
> The OIV IMAGE analysis tool has brought us many benefits, such as file size 
> distribution, cold and hot data, abnormal growth directory analysis. But in 
> my opinion he is too slow, especially the big IMAGE.
> After Hadoop 2.3, the format of IMAGE has changed. For OIV tools, it is 
> necessary to load the entire IMAGE into the memory to output the inode 
> information into a text format. For large IMAGE, this process takes a long 
> time and consumes more resources and requires a large memory machine to 
> analyze.
> Although, HDFS provides the dfs.namenode.legacy-oiv-image.dir parameter to 
> get the old version of IMAGE through CheckPoint. The old IMAGE parsing does 
> not require too many resources, but we need to parse the IMAGE again through 
> the hdfs oiv_legacy command to get the text information of the Inode, which 
> is relatively time-consuming.
> **
> *Solution*
> We can ask the standby node to periodically check the Inode and serialize the 
> Inode in text mode. For OutPut, different FileSystems can be used according 
> to the configuration, such as the local file system or the HDFS file system.
> The advantage of providing HDFS file system is that we can analyze Inode 
> directly through spark/hive. I think the block information corresponding to 
> the Inode may not be of much use. The size of the file and the number of 
> copies are more useful to us.
> In addition, the sequential output of the Inode is not necessary. We can 
> speed up the CheckPoint for the Inode, and use the partition for the 
> serialized Inode to output different files. Use a production thread to put 
> Inode in the Queue, and use multi-threaded consumption Queue to write to 
> different partition files. For output files, compression can also be used to 
> reduce disk IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16019) HDFS: Inode CheckPoint

2021-05-10 Thread zhu (Jira)
zhu created HDFS-16019:
--

 Summary: HDFS: Inode CheckPoint 
 Key: HDFS-16019
 URL: https://issues.apache.org/jira/browse/HDFS-16019
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namanode
Affects Versions: 3.3.1
Reporter: zhu
Assignee: zhu


*Background*
The OIV IMAGE analysis tool has brought us many benefits, such as file size 
distribution, cold and hot data, and abnormal growth directory analysis. But in my 
opinion it is too slow, especially for a big IMAGE.
After Hadoop 2.3, the format of the IMAGE changed. The OIV tool has to load the 
entire IMAGE into memory to output the inode information in text format. For a 
large IMAGE, this process takes a long time, consumes more resources, and requires 
a machine with a large amount of memory for the analysis.
HDFS does provide the dfs.namenode.legacy-oiv-image.dir parameter to get the old 
version of the IMAGE through the checkpoint. Parsing the old IMAGE does not require 
too many resources, but we need to parse the IMAGE again through the hdfs 
oiv_legacy command to get the text information of the inodes, which is relatively 
time-consuming.

*Solution*
We can let the standby node periodically checkpoint the inodes and serialize them 
in text mode. For the output, different FileSystems can be used according to the 
configuration, such as the local file system or the HDFS file system.
The advantage of writing to HDFS is that we can analyze the inodes directly through 
Spark/Hive. I think the block information corresponding to an inode may not be of 
much use; the size of the file and the number of replicas are more useful to us.
In addition, sequential output of the inodes is not necessary. We can speed up the 
inode checkpoint by partitioning the serialized inodes into different output files: 
use a producer thread to put inodes into a queue, and multiple consumer threads to 
drain the queue and write to the different partition files (see the sketch below). 
For the output files, compression can also be used to reduce disk IO.
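
A rough sketch of the producer/consumer partitioning idea, using plain text lines 
and the local file system as placeholders (a real implementation would serialize 
inodes and write through the configured Hadoop FileSystem):
{code:java}
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class InodeTextCheckpointSketch {
  private static final String POISON = "__END__";

  public static void dump(Iterable<String> inodeLines, int partitions) throws Exception {
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
    List<Thread> consumers = new ArrayList<>();

    // One consumer per partition file; each drains the shared queue.
    for (int p = 0; p < partitions; p++) {
      final int partition = p;
      Thread t = new Thread(() -> {
        try (BufferedWriter out = Files.newBufferedWriter(
            Paths.get("inode-checkpoint-part-" + partition + ".txt"),
            StandardCharsets.UTF_8)) {
          while (true) {
            String line = queue.take();
            if (POISON.equals(line)) {
              return;                 // one poison pill per consumer ends it
            }
            out.write(line);
            out.newLine();
          }
        } catch (IOException | InterruptedException e) {
          Thread.currentThread().interrupt();  // abort this partition on error
        }
      });
      t.start();
      consumers.add(t);
    }

    // Single producer: each inode becomes one text record in the queue.
    for (String line : inodeLines) {
      queue.put(line);
    }
    for (int p = 0; p < partitions; p++) {
      queue.put(POISON);
    }
    for (Thread t : consumers) {
      t.join();
    }
  }
}
{code}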



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16018?focusedWorklogId=593897=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-593897
 ]

ASF GitHub Bot logged work on HDFS-16018:
-

Author: ASF GitHub Bot
Created on: 10/May/21 09:51
Start Date: 10/May/21 09:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2994:
URL: https://github.com/apache/hadoop/pull/2994#issuecomment-836487481


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2994/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 46 
unchanged - 0 fixed = 47 total (was 46)  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2994/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2994 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1159a7a58dc6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cf57f8063757a49ff2ffe8410688676c91ce5a9a |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2994/1/testReport/ |
   | Max. process+thread count | 3158 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341800#comment-17341800
 ] 

Ayush Saxena commented on HDFS-16003:
-

[~lei w] I guess you had a typo in the patch name. Instead of having the extension 
as .patch you have made it .path.
Not sure if Jenkins will pick it up.
Can you re-upload with the correct extension if Jenkins doesn't pick it up?

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341776#comment-17341776
 ] 

Qi Zhu commented on HDFS-16003:
---

LGTM +1. 

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16015) RBF: Read and write data through router select dn according to real user ip

2021-05-10 Thread zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhu updated HDFS-16015:
---
Description: 
Read and write data through router select dn according to real user ip.

relates to HDFS-13248

*General train of thought:*

1. RouterRpcClient sets the real client IP in the CallerContext.
2. When the server processes the RPC, it checks whether the CallerContext contains 
the real client IP field; if it does, the server verifies the legitimacy of that 
IP and, if the verification passes, sets the real IP on the Call.
3. Modify the getClientMachine method: if there is a real client IP in the Call, 
return it; otherwise return the IP observed by the server (see the sketch below).
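
A sketch of step 1 on the Router side, using the existing 
org.apache.hadoop.ipc.CallerContext API. The clientIp key name and the way the 
context is extended are assumptions made for illustration, not the actual patch:
{code:java}
import org.apache.hadoop.ipc.CallerContext;

public final class RouterCallerContextSketch {
  /** Hypothetical key under which the Router publishes the real client IP. */
  public static final String CLIENT_IP_KEY = "clientIp";

  private RouterCallerContextSketch() {
  }

  /** Append the real client IP to the current CallerContext before forwarding the RPC. */
  public static void appendClientIp(String clientIp) {
    CallerContext current = CallerContext.getCurrent();
    String base = (current == null || current.getContext() == null)
        ? "" : current.getContext() + ",";
    CallerContext withIp =
        new CallerContext.Builder(base + CLIENT_IP_KEY + ":" + clientIp).build();
    CallerContext.setCurrent(withIp);
  }
}
{code}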

  was:
Read and write data through router select dn according to real user ip.

relates to HDFS-13248

*General train of thought:*

1.


> RBF: Read and write data through router select dn according to real user ip
> ---
>
> Key: HDFS-16015
> URL: https://issues.apache.org/jira/browse/HDFS-16015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Read and write data through router select dn according to real user ip.
> relates to HDFS-13248
> *General train of thought:*
> 1. RouterRpcClient set the real user ip to CallerContext.
> 2. The Server processes the Rpc to check whether the CallerContext contains 
> the real user ip field, if it contains the real user IP and verify the 
> legitimacy of the ip, if the verification passes, set the real IP to the Call.
> 3. Modify the getClientMachine method, if there is a real user ip in the 
> Call, return the real user ip, if it does not return the server to monitor 
> the IP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16015) RBF: Read and write data through router select dn according to real user ip

2021-05-10 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341723#comment-17341723
 ] 

zhu commented on HDFS-16015:


[~elgoiri] ,[~elgoiri] ,[~hexiaoqiao] Looking forward your comments.

> RBF: Read and write data through router select dn according to real user ip
> ---
>
> Key: HDFS-16015
> URL: https://issues.apache.org/jira/browse/HDFS-16015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Read and write data through router select dn according to real user ip.
> relates to HDFS-13248
> *General train of thought:*
> 1. RouterRpcClient set the real user ip to CallerContext.
> 2. The Server processes the Rpc to check whether the CallerContext contains 
> the real user ip field, if it contains the real user IP and verify the 
> legitimacy of the ip, if the verification passes, set the real IP to the Call.
> 3. Modify the getClientMachine method, if there is a real user ip in the 
> Call, return the real user ip, if it does not return the server to monitor 
> the IP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16015) RBF: Read and write data through router select dn according to real user ip

2021-05-10 Thread zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhu updated HDFS-16015:
---
Description: 
Read and write data through router select dn according to real user ip.

relates to HDFS-13248

*General train of thought:*

1.

  was:
Read and write data through router select dn according to real user ip.

relates to [HDFS-13248|https://issues.apache.org/jira/browse/HDFS-13248]


> RBF: Read and write data through router select dn according to real user ip
> ---
>
> Key: HDFS-16015
> URL: https://issues.apache.org/jira/browse/HDFS-16015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Read and write data through router select dn according to real user ip.
> relates to HDFS-13248
> *General train of thought:*
> 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16018:
--
Labels: pull-request-available  (was: )

> Optimize the display of hdfs "count -e" or "count -t" command
> -
>
> Key: HDFS-16018
> URL: https://issues.apache.org/jira/browse/HDFS-16018
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: fs_count_fixed.png, fs_count_origin.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The display of `fs -count -e` or `fs -count -t` is not aligned.
> *Current display:*
> *!fs_count_origin.png|width=1184,height=156!*
> *Fixed display:*
> *!fs_count_fixed.png|width=1217,height=157!*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16018?focusedWorklogId=593847=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-593847
 ]

ASF GitHub Bot logged work on HDFS-16018:
-

Author: ASF GitHub Bot
Created on: 10/May/21 06:50
Start Date: 10/May/21 06:50
Worklog Time Spent: 10m 
  Work Description: whbing opened a new pull request #2994:
URL: https://github.com/apache/hadoop/pull/2994


   …mand
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 593847)
Remaining Estimate: 0h
Time Spent: 10m

> Optimize the display of hdfs "count -e" or "count -t" command
> -
>
> Key: HDFS-16018
> URL: https://issues.apache.org/jira/browse/HDFS-16018
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
> Attachments: fs_count_fixed.png, fs_count_origin.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The display of `fs -count -e` or `fs -count -t` is not aligned.
> *Current display:*
> *!fs_count_origin.png|width=1184,height=156!*
> *Fixed display:*
> *!fs_count_fixed.png|width=1217,height=157!*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16011) HDFS: Support viewfs nested mount

2021-05-10 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341707#comment-17341707
 ] 

zhu commented on HDFS-16011:


[~ayushtkn] Thanks for your comment.

With nested mounts, ViewFS matches a path against the closest (deepest) mount 
point: if a mount point matches, it returns that target FileSystem; if no mount 
point matches, it returns the InternalDirFs.
*For example, the following:*

*mount point*

    /a/b -> /a/b
    /a/b/c/d -> /a/b/c/d

*resolve:*

    /a/b    (/a/b targetFileSystem)
    /a/b/c (/a/b targetFileSystem)
    /a        (/a InternalDirFs)

I think there is no need to change the processing logic of getListing() and 
getContentSummary(). This ViewFs nested mount support has been used in our 
internal version for some time, and these two methods have also been verified. 
Indeed, rename needs improvement: for target file systems with the same 
permissions, rename should be allowed. I'll add test cases if that works.
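
The closest-match resolution described above can be illustrated with a standalone 
longest-prefix lookup. This is only a sketch of the rule, not the actual InodeTree 
code:
{code:java}
import java.util.Map;
import java.util.TreeMap;

public class NestedMountResolveSketch {
  /** Mount point path -> target description (stand-in for a target FileSystem). */
  private final TreeMap<String, String> mounts = new TreeMap<>();

  public void addMount(String mountPoint, String target) {
    mounts.put(mountPoint, target);
  }

  /**
   * Resolve a path to the deepest mount point that is a prefix of it.
   * Returns the mount target, or "InternalDirFs" when no mount point matches.
   */
  public String resolve(String path) {
    // Descending order visits deeper mount points before their ancestors.
    for (Map.Entry<String, String> e : mounts.descendingMap().entrySet()) {
      String mp = e.getKey();
      if (path.equals(mp) || path.startsWith(mp + "/")) {
        return e.getValue();   // e.g. /a/b/c resolves to the /a/b target
      }
    }
    return "InternalDirFs";    // e.g. /a resolves to an internal directory
  }
}
{code}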

> HDFS: Support viewfs nested mount
> -
>
> Key: HDFS-16011
> URL: https://issues.apache.org/jira/browse/HDFS-16011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, viewfs
>Affects Versions: 3.2.2, 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current viewfs does not support nested mount points.
> *E.g:*
>     1./home/ => /home/
>     2./home/work => /home/work
> If mount point 1 is loaded, mount point 2 cannot be added, and the following 
> exception will be thrown when loading 2.
> {code:java}
> throw new FileAlreadyExistsException("Path " + nextInode.fullPath +
>  " already exists as link");
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16007) Vulnerabilities found when serializing enum value

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?focusedWorklogId=593843=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-593843
 ]

ASF GitHub Bot logged work on HDFS-16007:
-

Author: ASF GitHub Bot
Created on: 10/May/21 06:24
Start Date: 10/May/21 06:24
Worklog Time Spent: 10m 
  Work Description: virajjasani edited a comment on pull request #2982:
URL: https://github.com/apache/hadoop/pull/2982#issuecomment-836238030


   @aajisaka sorry to bother you again. If you are fine with QA result, could 
you please help merge this PR and branch-3.3 backport?
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 593843)
Time Spent: 1h 10m  (was: 1h)

> Vulnerabilities found when serializing enum value
> -
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal for serialization and deserialization, 
> which is sensitive to the declaration order and can cause issues similar to 
> HDFS-15624.
> To avoid this, either add a comment warning later developers not to reorder this 
> enum, or add index checking in the read and getState functions to avoid 
> index-out-of-bounds errors.
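
For reference, an order-independent way to (de)serialize such an enum is to bind 
each constant to an explicit wire value instead of relying on ordinal(). This is 
illustrative only, not the actual ReplicaState code:
{code:java}
public enum ReplicaStateSketch {
  FINALIZED(0),
  RBW(1),
  RWR(2),
  RUR(3),
  TEMPORARY(4);

  private final int value;

  ReplicaStateSketch(int value) {
    this.value = value;
  }

  /** Explicit wire value written during serialization. */
  public int getValue() {
    return value;
  }

  /** Bounds-checked lookup used during deserialization. */
  public static ReplicaStateSketch fromValue(int value) {
    for (ReplicaStateSketch s : values()) {
      if (s.value == value) {
        return s;
      }
    }
    throw new IllegalArgumentException("Unknown ReplicaState value: " + value);
  }
}
{code}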



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16007) Vulnerabilities found when serializing enum value

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16007?focusedWorklogId=593842=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-593842
 ]

ASF GitHub Bot logged work on HDFS-16007:
-

Author: ASF GitHub Bot
Created on: 10/May/21 06:24
Start Date: 10/May/21 06:24
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2982:
URL: https://github.com/apache/hadoop/pull/2982#issuecomment-836238030


   @aajisaka sorry to bother you again. If you are fine with QA result, could 
you please help merge this PR?
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 593842)
Time Spent: 1h  (was: 50m)

> Vulnerabilities found when serializing enum value
> -
>
> Key: HDFS-16007
> URL: https://issues.apache.org/jira/browse/HDFS-16007
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: junwen yang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The ReplicaState enum uses its ordinal for serialization and deserialization, 
> which is sensitive to the declaration order and can cause issues similar to 
> HDFS-15624.
> To avoid this, either add a comment warning later developers not to reorder this 
> enum, or add index checking in the read and getState functions to avoid 
> index-out-of-bounds errors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341694#comment-17341694
 ] 

Xiaoqiao He commented on HDFS-16003:


LGTM, +1 on [^HDFS-16003.001.path]. Pending Jenkins reports.

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16017) RBF: The getListing method should not overwrite the Listings returned by the NameNode

2021-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16017?focusedWorklogId=593839=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-593839
 ]

ASF GitHub Bot logged work on HDFS-16017:
-

Author: ASF GitHub Bot
Created on: 10/May/21 06:16
Start Date: 10/May/21 06:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2993:
URL: https://github.com/apache/hadoop/pull/2993#issuecomment-836232413


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  18m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2993/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterMountTable |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2993/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2993 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux c5dbfe371e97 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 73a8a2b274b0888cafb3c2ba68498bf1fa07b20e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 

[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16003:
-
Attachment: HDFS-16003.001.path

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.001.path, HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-05-10 Thread lei w (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341681#comment-17341681
 ] 

lei w commented on HDFS-16003:
--

Thanks [~hexiaoqiao] and [~ayushtkn]. Fixed the checkstyle issue in HDFS-16003.001.path

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print the invalidated blocks when 
> the log level is debug. Currently we always traverse this invalidatedBlocks list 
> without checking the log level. I suggest checking the log level before printing, 
> which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15968) Improve the log for The DecayRpcScheduler

2021-05-10 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341675#comment-17341675
 ] 

Qi Zhu commented on HDFS-15968:
---

Thanks [~bpatel] for the contribution.

The patch LGTM +1. 

> Improve the log for The DecayRpcScheduler 
> --
>
> Key: HDFS-15968
> URL: https://issues.apache.org/jira/browse/HDFS-15968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15968.001.patch
>
>
> Improve the logging in DecayRpcScheduler to make use of the SLF4J logger 
> factory.
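
For reference, the SLF4J pattern the description refers to looks roughly like this 
(illustrative only, not the actual patch):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DecayRpcSchedulerLoggingExample {
  // Obtain the logger from the SLF4J LoggerFactory instead of a concrete backend.
  private static final Logger LOG =
      LoggerFactory.getLogger(DecayRpcSchedulerLoggingExample.class);

  void example() {
    // Parameterized messages avoid string concatenation when the level is disabled.
    LOG.debug("Decayed call counts updated, total = {}", 42);
  }
}
{code}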



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org