[jira] [Commented] (HDFS-8791) block ID-based DN storage layout can be very slow for datanode on ext4

2020-07-07 Thread fengwu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153258#comment-17153258
 ] 

fengwu commented on HDFS-8791:
--

Hi,

Learning from the discussion above: currently, datanode layout changes support
rolling upgrades, but downgrading across a datanode layout change is not
supported.

When an upgrade from 2.7.2 to 3.1 runs into trouble, the datanode cannot be
downgraded, so the only option is a rollback, which can cause data loss. This
makes the rolling upgrade less robust.

Would it be possible to leave the choice of whether to introduce this feature
to the user, i.e. let the user configure whether to enable it?

Thanks

> block ID-based DN storage layout can be very slow for datanode on ext4
> --
>
> Key: HDFS-8791
> URL: https://issues.apache.org/jira/browse/HDFS-8791
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Nathan Roberts
>Assignee: Chris Trezzo
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 32x32DatanodeLayoutTesting-v1.pdf, 
> 32x32DatanodeLayoutTesting-v2.pdf, HDFS-8791-trunk-v1.patch, 
> HDFS-8791-trunk-v2-bin.patch, HDFS-8791-trunk-v2.patch, 
> HDFS-8791-trunk-v2.patch, HDFS-8791-trunk-v3-bin.patch, 
> hadoop-56-layout-datanode-dir.tgz, test-node-upgrade.txt
>
>
> We are seeing cases where the new directory layout causes the datanode to 
> basically cause the disks to seek for 10s of minutes. This can be when the 
> datanode is running du, and it can also be when it is performing a 
> checkDirs(). Both of these operations currently scan all directories in the 
> block pool and that's very expensive in the new layout.
> The new layout creates 256 subdirs, each with 256 subdirs. Essentially 64K 
> leaf directories where block files are placed.
> So, what we have on disk is:
> - 256 inodes for the first level directories
> - 256 directory blocks for the first level directories
> - 256*256 inodes for the second level directories
> - 256*256 directory blocks for the second level directories
> - Then the inodes and blocks to store the HDFS blocks themselves.
> The main problem is the 256*256 directory blocks. 
> inodes and dentries will be cached by linux and one can configure how likely 
> the system is to prune those entries (vfs_cache_pressure). However, ext4 
> relies on the buffer cache to cache the directory blocks and I'm not aware of 
> any way to tell linux to favor buffer cache pages (even if it did I'm not 
> sure I would want it to in general).
> Also, ext4 tries hard to spread directories evenly across the entire volume; 
> this basically means the 64K directory blocks are probably randomly spread 
> across the entire disk. A du type scan will look at directories one at a 
> time, so the ioscheduler can't optimize the corresponding seeks, meaning the 
> seeks will be random and far. 
> In a system I was using to diagnose this, I had 60K blocks. A DU when things 
> are hot is less than 1 second. When things are cold, about 20 minutes.
> How do things get cold?
> - A large set of tasks run on the node. This pushes almost all of the buffer 
> cache out, causing the next DU to hit this situation. We are seeing cases 
> where a large job can cause a seek storm across the entire cluster.
> Why didn't the previous layout see this?
> - It might have but it wasn't nearly as pronounced. The previous layout would 
> be a few hundred directory blocks. Even when completely cold, these would 
> only take a few hundred seeks, which would mean single-digit seconds.  
> - With only a few hundred directories, the odds of the directory blocks 
> getting modified are quite high; this keeps those blocks hot and much less 
> likely to be evicted.
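As a rough sanity check on the numbers above (a back-of-the-envelope sketch, not
part of the original report; the 15 ms per-seek figure is an assumed typical
random-seek cost for a spinning disk):

```java
// Back-of-the-envelope estimate of a cold du-style scan under the 256x256
// layout: one random seek per directory block, at an assumed ~15 ms each.
public class ColdScanEstimate {
    public static void main(String[] args) {
        long firstLevelDirs = 256;
        long leafDirs = 256L * 256;                 // 65,536 leaf directories
        long directoryBlocks = firstLevelDirs + leafDirs;
        double seekMillis = 15.0;                   // assumed avg random seek + read
        double coldScanMinutes = directoryBlocks * seekMillis / 1000 / 60;
        System.out.printf("directory blocks: %d%n", directoryBlocks);
        System.out.printf("estimated cold scan: %.1f minutes%n", coldScanMinutes);
    }
}
```

This lands in the same ballpark as the ~20 minutes observed when everything is
cold, which supports the claim that the directory blocks dominate.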



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15448) When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153254#comment-17153254
 ] 

Hadoop QA commented on HDFS-15448:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
39s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}209m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29487/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13007248/HDFS-15448.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 7d411801f57b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4f26454a7d1 |
| Default Java | Private Build-1.8.0_252-8u2

[jira] [Comment Edited] (HDFS-15333) Vulnerability fixes need for jackson-databinding HDFS dependency library

2020-07-07 Thread weiyanen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153209#comment-17153209
 ] 

weiyanen edited comment on HDFS-15333 at 7/8/20, 3:40 AM:
--

So now, how can I resolve this vulnerability problem?

I'm using htrace-core4-4.1.0-incubating, and it depends on Jackson 2.4.0,
which has known vulnerability issues.

I must use htrace-core4-4.1.0-incubating; otherwise I get
"java.lang.NoClassDefFoundError: org/apache/htrace/core/Tracer$Builder".

Can we just ignore the vulnerability issue even though the code scan reports
it? Because "No JSON deserialization is involved the code path. Even JSON
serialization is only used in specific span receivers which is barely used."
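For what it's worth, one common workaround (a sketch under the assumption that
your project builds with Maven; this is not official project guidance, and the
2.9.10.x version shown is just an example of a patched release) is to pin a
newer jackson-databind via dependencyManagement so it wins over the 2.4.0
pulled in transitively:

```xml
<!-- pom.xml fragment: force a patched jackson-databind across the build.
     The version here is an example; pick the latest patched 2.x release. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.8</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

One caveat: if the scanner is flagging a Jackson copy that is shaded inside the
htrace jar itself (rather than a normal transitive dependency), a pom override
will not change the bytes in that jar, and only suppressing the finding or
replacing the jar would help.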


was (Author: weiyanen):
So NOW, how can I resolve this vulnerability problem? 

I've used htrace-core4-4.1.0-incubating and it used jackson 2.4.0 which has 
vulnerability issues.

I must use htrace-core4-4.1.0-incubating, otherwise, I would get an error for 
"org/apache/htrace/core/Tracer$Builder Context: java.lang.NoClassDefFoundError: 
org/apache/htrace/core/Tracer$Builder".

> Vulnerability fixes need for jackson-databinding HDFS dependency library
> 
>
> Key: HDFS-15333
> URL: https://issues.apache.org/jira/browse/HDFS-15333
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
> Environment: [^hdfs_imagescan_result.csv]
>Reporter: Hridesh
>Priority: Critical
> Attachments: hdfs_imagescan_result.csv
>
>
> HDFS has a couple of dependencies that bundle a vulnerable version of the 
> jackson library. 
> Below is the list of libraries used by HDFS that carry the vulnerability:
>  * htrace-core4-4.1.0-incubating.jar:jackson-databind
>  * htrace-core-3.1.0-incubating.jar:jackson-databind
>  * aws-java-sdk-bundle-1.11.375.jar:jackson-databind
>  * hadoop-client-runtime-3.2.1.jar:jackson-databind
>  * jackson-databind-2.9.8.jar
>  
> For example: "htrace-core4-4.1.0-incubating" is built with jackson 2.4.0. POM 
> URL: 
> [https://github.com/apache/incubator-retired-htrace/blob/e12b5fcfaafa56d676fee5f873da01df6b61dac9/pom.xml.]
>  
> Jackson versions < 2.9.1 have the following vulnerabilities:
> CVE-2019-14379
> CVE-2019-16335
> CVE-2019-17531
> CVE-2019-14540
> CVE-2018-11307
> CVE-2019-12402
> CVE-2018-7489
> CVE-2018-12022
> CVE-2019-14439
> CVE-2017-15095
> CVE-2017-7525
> CVE-2017-17485
>  
> Attaching image scan result file.
>  






[jira] [Commented] (HDFS-15333) Vulnerability fixes need for jackson-databinding HDFS dependency library

2020-07-07 Thread weiyanen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153209#comment-17153209
 ] 

weiyanen commented on HDFS-15333:
-

So now, how can I resolve this vulnerability problem?

I'm using htrace-core4-4.1.0-incubating, and it depends on Jackson 2.4.0,
which has known vulnerability issues.

I must use htrace-core4-4.1.0-incubating; otherwise I get
"java.lang.NoClassDefFoundError: org/apache/htrace/core/Tracer$Builder".

> Vulnerability fixes need for jackson-databinding HDFS dependency library
> 
>
> Key: HDFS-15333
> URL: https://issues.apache.org/jira/browse/HDFS-15333
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
> Environment: [^hdfs_imagescan_result.csv]
>Reporter: Hridesh
>Priority: Critical
> Attachments: hdfs_imagescan_result.csv
>
>
> HDFS has a couple of dependencies that bundle a vulnerable version of the 
> jackson library. 
> Below is the list of libraries used by HDFS that carry the vulnerability:
>  * htrace-core4-4.1.0-incubating.jar:jackson-databind
>  * htrace-core-3.1.0-incubating.jar:jackson-databind
>  * aws-java-sdk-bundle-1.11.375.jar:jackson-databind
>  * hadoop-client-runtime-3.2.1.jar:jackson-databind
>  * jackson-databind-2.9.8.jar
>  
> For example: "htrace-core4-4.1.0-incubating" is built with jackson 2.4.0. POM 
> URL: 
> [https://github.com/apache/incubator-retired-htrace/blob/e12b5fcfaafa56d676fee5f873da01df6b61dac9/pom.xml.]
>  
> Jackson versions < 2.9.1 have the following vulnerabilities:
> CVE-2019-14379
> CVE-2019-16335
> CVE-2019-17531
> CVE-2019-14540
> CVE-2018-11307
> CVE-2019-12402
> CVE-2018-7489
> CVE-2018-12022
> CVE-2019-14439
> CVE-2017-15095
> CVE-2017-7525
> CVE-2017-17485
>  
> Attaching image scan result file.
>  






[jira] [Commented] (HDFS-15448) When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153188#comment-17153188
 ] 

jianghua zhu commented on HDFS-15448:
-

[~elgoiri] , your suggestions are very helpful to me.
Here I resubmitted a new patch file (HDFS-15448.003.patch).
[~elgoiri] , [~hexiaoqiao] , can you help review again?

 

> When starting a DataNode, call BlockPoolManager#startAll() twice.
> -
>
> Key: HDFS-15448
> URL: https://issues.apache.org/jira/browse/HDFS-15448
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15448.001.patch, HDFS-15448.002.patch, 
> HDFS-15448.003.patch, method_invoke_path.jpg
>
>
> When starting a DataNode, call BlockPoolManager#startAll() twice.
> The first call:
> BlockPoolManager#doRefreshNamenodes()
> private void doRefreshNamenodes(
>  Map<String, Map<String, InetSocketAddress>> addrMap,
>  Map<String, Map<String, InetSocketAddress>> lifelineAddrMap)
>  throws IOException {
>  ...
> startAll();
> ...
> }
> The second call:
> DataNode#runDatanodeDaemon()
> public void runDatanodeDaemon() throws IOException {
> blockPoolManager.startAll();
> ...
> }
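A minimal sketch of the kind of guard that would make the second call a no-op
(illustrative only; the class, field, and method names here are hypothetical
and not the actual BlockPoolManager code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: an idempotent startAll() so that calling it twice
// during DataNode startup performs the start work only once.
public class IdempotentStarter {
    private final AtomicBoolean started = new AtomicBoolean(false);
    private int startCount = 0; // visible effect, for demonstration only

    public void startAll() {
        // Only the first caller flips the flag and performs the real work;
        // any later call returns immediately.
        if (!started.compareAndSet(false, true)) {
            return;
        }
        startCount++; // stand-in for starting the BPOfferService threads
    }

    public int getStartCount() {
        return startCount;
    }

    public static void main(String[] args) {
        IdempotentStarter s = new IdempotentStarter();
        s.startAll();
        s.startAll(); // second call, as happens in DataNode startup
        // prints: start work performed 1 time(s)
        System.out.println("start work performed " + s.getStartCount() + " time(s)");
    }
}
```

Whether to guard inside startAll() or to drop one of the two call sites is a
design choice the patch and reviewers would have to settle.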






[jira] [Updated] (HDFS-15448) When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15448:

Attachment: HDFS-15448.003.patch

> When starting a DataNode, call BlockPoolManager#startAll() twice.
> -
>
> Key: HDFS-15448
> URL: https://issues.apache.org/jira/browse/HDFS-15448
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15448.001.patch, HDFS-15448.002.patch, 
> HDFS-15448.003.patch, method_invoke_path.jpg
>
>
> When starting a DataNode, call BlockPoolManager#startAll() twice.
> The first call:
> BlockPoolManager#doRefreshNamenodes()
> private void doRefreshNamenodes(
>  Map<String, Map<String, InetSocketAddress>> addrMap,
>  Map<String, Map<String, InetSocketAddress>> lifelineAddrMap)
>  throws IOException {
>  ...
> startAll();
> ...
> }
> The second call:
> DataNode#runDatanodeDaemon()
> public void runDatanodeDaemon() throws IOException {
> blockPoolManager.startAll();
> ...
> }






[jira] [Commented] (HDFS-15455) Expose HighestPriorityReplBlocks and LastReplicaBlocks statistics

2020-07-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152973#comment-17152973
 ] 

Íñigo Goiri commented on HDFS-15455:


Thanks [~surmountian], can you post a backport of HDFS-13658 for branch-2.9?

> Expose HighestPriorityReplBlocks and LastReplicaBlocks statistics
> -
>
> Key: HDFS-15455
> URL: https://issues.apache.org/jira/browse/HDFS-15455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> Similar to HDFS-13658, blocks with only 1 replica may cause the loss of 
> customer data, so we need to make this visible and take action if necessary. 
> This change is for HDFS 2.9, as we will still be using it for some time and 
> switching to HDFS 3.X is not simple work.






[jira] [Commented] (HDFS-15448) When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152972#comment-17152972
 ] 

Íñigo Goiri commented on HDFS-15448:


[~hexiaoqiao] already went through the code paths.
We may want to add a check to createDataNode() and runDatanodeDaemon() to 
verify that the block manager is started.
I would make sure that the tests for createDataNode() check for something like 
testBPServiceState().

> When starting a DataNode, call BlockPoolManager#startAll() twice.
> -
>
> Key: HDFS-15448
> URL: https://issues.apache.org/jira/browse/HDFS-15448
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15448.001.patch, HDFS-15448.002.patch, 
> method_invoke_path.jpg
>
>
> When starting a DataNode, call BlockPoolManager#startAll() twice.
> The first call:
> BlockPoolManager#doRefreshNamenodes()
> private void doRefreshNamenodes(
>  Map<String, Map<String, InetSocketAddress>> addrMap,
>  Map<String, Map<String, InetSocketAddress>> lifelineAddrMap)
>  throws IOException {
>  ...
> startAll();
> ...
> }
> The second call:
> DataNode#runDatanodeDaemon()
> public void runDatanodeDaemon() throws IOException {
> blockPoolManager.startAll();
> ...
> }






[jira] [Commented] (HDFS-15425) Review Logging of DFSClient

2020-07-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152868#comment-17152868
 ] 

Hudson commented on HDFS-15425:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18417 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18417/])
HDFS-15425. Review Logging of DFSClient. Contributed by Hongbing Wang. 
(hexiaoqiao: rev 4f26454a7d1b560f959cdb2fb0641147a85642da)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java


> Review Logging of DFSClient
> ---
>
> Key: HDFS-15425
> URL: https://issues.apache.org/jira/browse/HDFS-15425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15425.001.patch, HDFS-15425.002.patch, 
> HDFS-15425.003.patch
>
>
> Review use of SLF4J for DFSClient.LOG. 
> Make the code more concise and readable. 
> Less is more !
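The kind of cleanup this implies (a sketch; the actual patch may differ) is
replacing string concatenation and isDebugEnabled() guards with SLF4J's
parameterized "{}" form, which defers formatting until the level is enabled.
A tiny JDK-only stand-in for the "{}" substitution, to show the style:

```java
// Demonstrates the SLF4J-style "{}" parameterized message format using only
// the JDK; slf4jFormat is a stand-in for illustration, not the real
// org.slf4j API.
public class ParamLogDemo {
    static String slf4jFormat(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int from = 0;
        for (Object arg : args) {
            int at = pattern.indexOf("{}", from);
            if (at < 0) break;                 // more args than placeholders
            out.append(pattern, from, at).append(arg);
            from = at + 2;
        }
        return out.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Before: LOG.debug("Connecting to " + node + " for block " + blockId);
        // After:  LOG.debug("Connecting to {} for block {}", node, blockId);
        // prints: Connecting to dn1:9866 for block 1073741825
        System.out.println(
            slf4jFormat("Connecting to {} for block {}", "dn1:9866", 1073741825L));
    }
}
```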






[jira] [Updated] (HDFS-15425) Review Logging of DFSClient

2020-07-07 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15425:
---
Component/s: dfsclient

> Review Logging of DFSClient
> ---
>
> Key: HDFS-15425
> URL: https://issues.apache.org/jira/browse/HDFS-15425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15425.001.patch, HDFS-15425.002.patch, 
> HDFS-15425.003.patch
>
>
> Review use of SLF4J for DFSClient.LOG. 
> Make the code more concise and readable. 
> Less is more !






[jira] [Updated] (HDFS-15425) Review Logging of DFSClient

2020-07-07 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15425:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

commit to trunk.
Thanks [~wanghongbing] for your contributions!
Thanks [~elgoiri] for reviews!

> Review Logging of DFSClient
> ---
>
> Key: HDFS-15425
> URL: https://issues.apache.org/jira/browse/HDFS-15425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15425.001.patch, HDFS-15425.002.patch, 
> HDFS-15425.003.patch
>
>
> Review use of SLF4J for DFSClient.LOG. 
> Make the code more concise and readable. 
> Less is more !






[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-07-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152729#comment-17152729
 ] 

Hadoop QA commented on HDFS-15025:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 18m 46s{color} | 
{color:red} root generated 26 new + 136 unchanged - 26 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  5s{color} | {color:orange} root: The patch generated 23 new + 726 unchanged 
- 3 fixed = 749 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+

[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-07-07 Thread hadoop_hdfs_hw (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hadoop_hdfs_hw updated HDFS-15025:
--
Attachment: HDFS-15025.001.patch
Status: Patch Available  (was: Open)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: hadoop_hdfs_hw
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> NVDIMM_patch(WIP).patch
>
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM 
> not only improves the response rate of HDFS, but also ensures the 
> reliability of the data.
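For context, HDFS storage types are selected per data directory. A
configuration along these lines (hypothetical for this patch: the [NVDIMM]
tag and the mount path are assumptions, mirroring the existing
[SSD]/[RAM_DISK] convention) would place the new tier next to existing media:

```xml
<!-- hdfs-site.xml sketch: tag one data directory with the proposed NVDIMM
     storage type, alongside a regular DISK directory. Paths are illustrative. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[NVDIMM]/mnt/pmem0/hdfs/data,[DISK]/data/hdfs/data</value>
</property>
```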






[jira] [Commented] (HDFS-15448) When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152556#comment-17152556
 ] 

jianghua zhu commented on HDFS-15448:
-

[~elgoiri] , [~linyiqun] , can you give me some more suggestions on this issue?

Thanks a lot.

> When starting a DataNode, call BlockPoolManager#startAll() twice.
> -
>
> Key: HDFS-15448
> URL: https://issues.apache.org/jira/browse/HDFS-15448
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15448.001.patch, HDFS-15448.002.patch, 
> method_invoke_path.jpg
>
>
> When starting a DataNode, call BlockPoolManager#startAll() twice.
> The first call:
> BlockPoolManager#doRefreshNamenodes()
> private void doRefreshNamenodes(
>  Map<String, Map<String, InetSocketAddress>> addrMap,
>  Map<String, Map<String, InetSocketAddress>> lifelineAddrMap)
>  throws IOException {
>  ...
> startAll();
> ...
> }
> The second call:
> DataNode#runDatanodeDaemon()
> public void runDatanodeDaemon() throws IOException {
> blockPoolManager.startAll();
> ...
> }


