[jira] [Commented] (HDFS-14839) Use Java Concurrent BlockingQueue instead of Internal BlockQueue

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169236#comment-17169236
 ] 

Hadoop QA commented on HDFS-14839:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
3s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 24 unchanged - 6 fixed = 24 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Exceptional return value of 
java.util.concurrent.BlockingQueue.offer(Object) ignored in 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeErasureCoded(ExtendedBlock,
 DatanodeDescriptor[], DatanodeStorageInfo[], byte[], ErasureCodingPolicy)  At 
DatanodeDescriptor.java:ignored in 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeErasureCoded(ExtendedBlock,
 DatanodeDescriptor[], DatanodeStorageInfo[], byte[], ErasureCodingPolicy)  At 
DatanodeDescriptor.java:[line 630] |
|  |  Exceptional return value of 
java.util.concurrent.BlockingQueue.offer(Object) ignored in 

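The warning above flags that the boolean returned by BlockingQueue.offer() is being discarded. As a generic illustration only (a minimal sketch, not the patch's actual code; the block names are hypothetical), checking that return value makes a rejected element explicit instead of silent:

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OfferCheckSketch {
  public static void main(String[] args) {
    // A bounded queue, so offer() can legitimately return false when the queue is full.
    BlockingQueue<String> queue = new LinkedBlockingQueue<>(2);

    for (String block : new String[] {"blk-1", "blk-2", "blk-3"}) {
      // Checking the return value handles a dropped element explicitly,
      // which is what the FindBugs warning is asking for.
      if (!queue.offer(block)) {
        System.err.println("Queue full, could not enqueue " + block);
      }
    }
    System.out.println("Queued: " + queue);
  }
}
{code}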
[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169228#comment-17169228
 ] 

Hadoop QA commented on HDFS-15493:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
3s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/22/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15493 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008825/HDFS-15493.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 1c92a7f6c99f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a7fda2e38f2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| unit | 

[jira] [Commented] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169220#comment-17169220
 ] 

Hadoop QA commented on HDFS-13157:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
41s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 161 unchanged - 1 fixed = 163 total (was 162) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m 
18s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/21/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-13157 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979084/HDFS-13157.1.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 7785f115c6a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a7fda2e38f2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| mvninstall | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/21/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 

[jira] [Commented] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169218#comment-17169218
 ] 

Hadoop QA commented on HDFS-15497:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-15497 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15497 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008660/HDFS-15497.000.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/20/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Make snapshot limit on global as well per snapshot root directory configurable
> --
>
> Key: HDFS-15497
> URL: https://issues.apache.org/jira/browse/HDFS-15497
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15497.000.patch
>
>
> Currently, there is no configurable limit imposed on the number of snapshots 
> remaining in the system, either at the filesystem level or per snapshottable 
> root directory. Too many snapshots in the system can potentially bloat the 
> namespace, and with the ordered deletion feature on, too many snapshots per 
> snapshottable root directory will make the deletion of the oldest snapshot 
> more expensive. This Jira aims to impose these configurable limits.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15498) Show snapshots deletion status in snapList cmd

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169200#comment-17169200
 ] 

Hadoop QA commented on HDFS-15498:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
1s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
21s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
60 unchanged - 0 fixed = 61 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
27s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  There is an apparent infinite recursive loop in 

[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169199#comment-17169199
 ] 

Hadoop QA commented on HDFS-15493:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/17/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15493 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008825/HDFS-15493.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux de273de581b4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 

[jira] [Commented] (HDFS-15504) Bootstrap failed and return ERR_CODE_LOGS_UNAVAILABLE

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169197#comment-17169197
 ] 

Hadoop QA commented on HDFS-15504:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
|   | hadoop.hdfs.tools.TestECAdmin |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/18/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15504 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008804/HDFS-15504-001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall 

[jira] [Commented] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169154#comment-17169154
 ] 

Hadoop QA commented on HDFS-13157:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} https://github.com/apache/hadoop/pull/1391 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1391 |
| JIRA Issue | HDFS-13157 |
| Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1391/4/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Do Not Remove Blocks Sequentially During Decommission 
> --
>
> Key: HDFS-13157
> URL: https://issues.apache.org/jira/browse/HDFS-13157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-13157.1.patch
>
>
> From what I understand of [DataNode 
> decommissioning|https://github.com/apache/hadoop/blob/42a1c98597e6dba2e371510a6b2b6b1fb94e4090/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java]
>  it appears that all the blocks are scheduled for removal _in order_. I'm not 
> 100% sure what the ordering is exactly, but I think it loops through each data 
> volume and schedules each block to be replicated elsewhere. The net effect is 
> that during a decommission, all of the DataNode transfer threads slam on a 
> single volume until it is cleaned out, at which point they all slam on the 
> next volume, and so on.
> Please randomize the block list so that there is a more even distribution 
> across all volumes when decommissioning a node.
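As a rough illustration of the randomization being requested (a generic sketch, not the DatanodeAdminManager change itself; the block names below are hypothetical placeholders):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShuffleBlocksSketch {
  public static void main(String[] args) {
    // Hypothetical stand-in for the per-volume block lists of a decommissioning node.
    List<String> blocks = new ArrayList<>();
    for (int volume = 0; volume < 3; volume++) {
      for (int b = 0; b < 4; b++) {
        blocks.add("vol" + volume + "-blk" + b);
      }
    }

    // Shuffling before scheduling re-replication spreads the reads across all
    // volumes instead of draining one volume at a time.
    Collections.shuffle(blocks);
    blocks.forEach(System.out::println);
  }
}
{code}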



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169107#comment-17169107
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} https://github.com/apache/hadoop/pull/582 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-582/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>
> In ErasureCodingWorker, stripedReconstructionPool is created by 
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>   numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>   numThreads, 60, new LinkedBlockingQueue<>(),
>   "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }{code}
> So stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an 
> unbounded LinkedBlockingQueue; the number of active threads therefore never 
> grows beyond the core size of 2, and dfs.datanode.ec.reconstruction.threads 
> does not take effect.
>  
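A ThreadPoolExecutor only grows past its core size when the work queue rejects a submission, and an unbounded LinkedBlockingQueue never rejects one, so the pool above stays at 2 threads. A minimal standalone sketch (not Hadoop code) demonstrating that behavior; one common remedy is to pass the configured thread count as the core size as well:

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeSketch {
  public static void main(String[] args) throws InterruptedException {
    // core=2, max=16, unbounded queue: threads beyond the core are only created
    // when the queue rejects a task, which an unbounded queue never does.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 16, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());

    for (int i = 0; i < 100; i++) {
      pool.execute(() -> {
        try {
          Thread.sleep(100);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }

    Thread.sleep(500);
    // Prints 2, even though 100 tasks were submitted and the maximum is 16.
    System.out.println("pool size = " + pool.getPoolSize());
    pool.shutdown();
  }
}
{code}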



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14478) Add libhdfs APIs for openFile

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169106#comment-17169106
 ] 

Hadoop QA commented on HDFS-14478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-955/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/955 |
| JIRA Issue | HDFS-14478 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit golang |
| uname | Linux f2ada2fb2dd7 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e756fe35909 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | 

[jira] [Commented] (HDFS-14839) Use Java Concurrent BlockingQueue instead of Internal BlockQueue

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169086#comment-17169086
 ] 

Hadoop QA commented on HDFS-14839:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
26s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 24 unchanged - 6 fixed = 24 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} 

[jira] [Commented] (HDFS-15500) Add more assertions about ordered deletion of snapshot

2020-07-31 Thread Jitendra Nath Pandey (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169022#comment-17169022
 ] 

Jitendra Nath Pandey commented on HDFS-15500:
-

Another useful check:
 * With ordered deletions, the diff lists of the snapshots should become 
immutable, except for the latest one. Can we add an assertion/validation for this?

 

> Add more assertions about ordered deletion of snapshot
> --
>
> Key: HDFS-15500
> URL: https://issues.apache.org/jira/browse/HDFS-15500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> The jira proposes to add new assertions; one assertion to start with is:
> a) Add an assertion that, with the ordered snapshot deletion flag true, the 
> prior snapshot in cleansubtree is null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users

2020-07-31 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-15482:
-
Parent: (was: HDFS-15477)
Issue Type: Bug  (was: Sub-task)

> Ordered snapshot deletion: hide the deleted snapshots from users
> 
>
> Key: HDFS-15482
> URL: https://issues.apache.org/jira/browse/HDFS-15482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
>
> In HDFS-15480, the behavior of deleting the non-earliest snapshots was changed 
> to marking them as deleted in an XAttr rather than actually deleting them. 
> Users are still able to access these snapshots as usual.
> In this JIRA, the marked-for-deletion snapshots are hidden so that they become 
> inaccessible to users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168982#comment-17168982
 ] 

Hadoop QA commented on HDFS-13157:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} https://github.com/apache/hadoop/pull/1391 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1391 |
| JIRA Issue | HDFS-13157 |
| Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1391/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Do Not Remove Blocks Sequentially During Decommission 
> --
>
> Key: HDFS-13157
> URL: https://issues.apache.org/jira/browse/HDFS-13157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-13157.1.patch
>
>
> From what I understand of [DataNode 
> decommissioning|https://github.com/apache/hadoop/blob/42a1c98597e6dba2e371510a6b2b6b1fb94e4090/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java]
>  it appears that all the blocks are scheduled for removal _in order_. I'm not 
> 100% sure what the ordering is exactly, but I think it loops through each data 
> volume and schedules each block to be replicated elsewhere. The net effect is 
> that during a decommission, all of the DataNode transfer threads slam on a 
> single volume until it is cleaned out, at which point they all slam on the 
> next volume, and so on.
> Please randomize the block list so that there is a more even distribution 
> across all volumes when decommissioning a node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15505) Fix NullPointerException when call getAdditionalDatanode method with null extendedBlock parameter

2020-07-31 Thread hang chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hang chen updated HDFS-15505:
-
Summary: Fix NullPointerException when call getAdditionalDatanode method 
with null extendedBlock parameter  (was: Fix NullPointerException when call 
getAdditionalDatanode with null extendedBlock)

> Fix NullPointerException when call getAdditionalDatanode method with null 
> extendedBlock parameter
> -
>
> Key: HDFS-15505
> URL: https://issues.apache.org/jira/browse/HDFS-15505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 3.0.3, 3.1.2, 
> 3.3.0, 3.2.1, 3.1.3
>Reporter: hang chen
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> When a client calls the getAdditionalDatanode method, it initializes a 
> GetAdditionalDatanodeRequestProto and sends an RPC request to the 
> Router/NameNode. However, if getAdditionalDatanode is called with a null 
> extendedBlock parameter, the GetAdditionalDatanodeRequestProto's blk field is 
> set to null, which causes a NullPointerException. The code is shown below.
> {code:java}
> // code placeholder
> GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
>  .newBuilder()
>  .setSrc(src)
>  .setFileId(fileId)
>  .setBlk(PBHelperClient.convert(blk))
>  .addAllExistings(PBHelperClient.convert(existings))
>  .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
>  .addAllExcludes(PBHelperClient.convert(excludes))
>  .setNumAdditionalNodes(numAdditionalNodes)
>  .setClientName(clientName)
>  .build();{code}
>  
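Generated protobuf builders reject null field values, which is where the NullPointerException comes from. As a hedged sketch of the usual guard (the RequestBuilder below is a hypothetical stand-in, not the generated GetAdditionalDatanodeRequestProto class), the optional field is only set when the argument is non-null:

{code:java}
public class NullGuardSketch {
  // Hypothetical builder standing in for a generated protobuf builder, whose
  // setters throw NullPointerException when handed a null value.
  static final class RequestBuilder {
    private String src;
    private String blk;

    RequestBuilder setSrc(String src) {
      if (src == null) {
        throw new NullPointerException("src");
      }
      this.src = src;
      return this;
    }

    RequestBuilder setBlk(String blk) {
      if (blk == null) {
        throw new NullPointerException("blk");
      }
      this.blk = blk;
      return this;
    }

    String build() {
      return "Request{src=" + src + ", blk=" + blk + "}";
    }
  }

  public static void main(String[] args) {
    String extendedBlock = null;  // the problematic null argument

    RequestBuilder builder = new RequestBuilder().setSrc("/tmp/file");
    // Only set the optional blk field when it is present; this avoids the NPE.
    if (extendedBlock != null) {
      builder.setBlk(extendedBlock);
    }
    System.out.println(builder.build());
  }
}
{code}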



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15505) Fix NullPointerException when call getAdditionalDatanode with null extendedBlock

2020-07-31 Thread hang chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hang chen updated HDFS-15505:
-
Description: 
When a client calls the getAdditionalDatanode method, it initializes a 
GetAdditionalDatanodeRequestProto and sends an RPC request to the 
Router/NameNode. However, if getAdditionalDatanode is called with a null 
extendedBlock parameter, the GetAdditionalDatanodeRequestProto's blk field is 
set to null, which causes a NullPointerException. The code is shown below.
{code:java}
// code placeholder
GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
 .newBuilder()
 .setSrc(src)
 .setFileId(fileId)
 .setBlk(PBHelperClient.convert(blk))
 .addAllExistings(PBHelperClient.convert(existings))
 .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
 .addAllExcludes(PBHelperClient.convert(excludes))
 .setNumAdditionalNodes(numAdditionalNodes)
 .setClientName(clientName)
 .build();{code}
 

  was:
When a client calls the getAdditionalDatanode method, it initializes a 
GetAdditionalDatanodeRequestProto and sends an RPC request to the 
Router/NameNode. However, if getAdditionalDatanode is called with a null 
extendedBlock parameter, the GetAdditionalDatanodeRequestProto's blk field is 
set to null, which causes a NullPointerException. The code is shown below.

 
{code:java}
// code placeholder
GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
 .newBuilder()
 .setSrc(src)
 .setFileId(fileId)
 .setBlk(PBHelperClient.convert(blk))
 .addAllExistings(PBHelperClient.convert(existings))
 .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
 .addAllExcludes(PBHelperClient.convert(excludes))
 .setNumAdditionalNodes(numAdditionalNodes)
 .setClientName(clientName)
 .build();{code}
 


> Fix NullPointerException when call getAdditionalDatanode with null 
> extendedBlock
> 
>
> Key: HDFS-15505
> URL: https://issues.apache.org/jira/browse/HDFS-15505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 3.0.3, 3.1.2, 
> 3.3.0, 3.2.1, 3.1.3
>Reporter: hang chen
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> When the client calls the getAdditionalDatanode method, it initializes a 
> GetAdditionalDatanodeRequestProto and sends an RPC request to the Router/NameNode. 
> However, if we call getAdditionalDatanode with a null extendedBlock parameter, 
> it sets the GetAdditionalDatanodeRequestProto's blk field to null, which causes 
> a NullPointerException. The code is shown below.
> {code:java}
> // code placeholder
> GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
>  .newBuilder()
>  .setSrc(src)
>  .setFileId(fileId)
>  .setBlk(PBHelperClient.convert(blk))
>  .addAllExistings(PBHelperClient.convert(existings))
>  .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
>  .addAllExcludes(PBHelperClient.convert(excludes))
>  .setNumAdditionalNodes(numAdditionalNodes)
>  .setClientName(clientName)
>  .build();{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168917#comment-17168917
 ] 

Stephen O'Donnell commented on HDFS-15493:
--

I tested with the 004 patch:

 * Parallel Load on + this feature on = 209 / 207 seconds
 * Parallel Load on + this feature off = 225 / 228 seconds
 * Parallel Load off + this feature off = 370 / 408 seconds
 * Parallel Load off + this feature on = 325 / 341 seconds

This new patch improves things significantly, so I think we should go forward 
with this technique.

I think we should create a new patch where the feature cannot be enabled / 
disabled - just have it always on, as I cannot think of a good reason someone 
should turn it off, and it will make the code simpler if we just remove the 
switch. What do you think?

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15505) Fix NullPointerException when call getAdditionalDatanode with null extendedBlock

2020-07-31 Thread hang chen (Jira)
hang chen created HDFS-15505:


 Summary: Fix NullPointerException when call getAdditionalDatanode 
with null extendedBlock
 Key: HDFS-15505
 URL: https://issues.apache.org/jira/browse/HDFS-15505
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 3.1.3, 3.2.1, 3.3.0, 3.1.2, 3.0.3, 3.1.1, 3.2.0, 3.0.2, 
3.0.1, 3.1.0, 3.0.0
Reporter: hang chen
 Fix For: 3.3.1, 3.4.0


When the client calls the getAdditionalDatanode method, it initializes a 
GetAdditionalDatanodeRequestProto and sends an RPC request to the Router/NameNode. 
However, if we call getAdditionalDatanode with a null extendedBlock parameter, it 
sets the GetAdditionalDatanodeRequestProto's blk field to null, which causes a 
NullPointerException. The code is shown below.

 
{code:java}
// code placeholder
GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
 .newBuilder()
 .setSrc(src)
 .setFileId(fileId)
 .setBlk(PBHelperClient.convert(blk))
 .addAllExistings(PBHelperClient.convert(existings))
 .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
 .addAllExcludes(PBHelperClient.convert(excludes))
 .setNumAdditionalNodes(numAdditionalNodes)
 .setClientName(clientName)
 .build();{code}
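
A minimal sketch of one possible client-side guard is shown below. It is 
illustrative only: it assumes the blk field may be left unset in the .proto 
definition (an assumption), and the committed fix may take a different approach.
{code:java}
// Illustrative sketch, not the committed fix: build the request without blk
// when the caller passed a null extendedBlock, since converting and setting a
// null block is what triggers the NullPointerException described above.
GetAdditionalDatanodeRequestProto.Builder builder = GetAdditionalDatanodeRequestProto
    .newBuilder()
    .setSrc(src)
    .setFileId(fileId)
    .addAllExistings(PBHelperClient.convert(existings))
    .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
    .addAllExcludes(PBHelperClient.convert(excludes))
    .setNumAdditionalNodes(numAdditionalNodes)
    .setClientName(clientName);
if (blk != null) {
  builder.setBlk(PBHelperClient.convert(blk));
}
GetAdditionalDatanodeRequestProto req = builder.build();
{code}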
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable

2020-07-31 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-15497:
-
Status: Patch Available  (was: Open)

> Make snapshot limit on global as well per snapshot root directory configurable
> --
>
> Key: HDFS-15497
> URL: https://issues.apache.org/jira/browse/HDFS-15497
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15497.000.patch
>
>
> Currently, there is no configurable limit on the number of snapshots in the 
> system, either at the filesystem level or per snapshottable root directory. 
> Too many snapshots can bloat the namespace, and with the ordered deletion 
> feature on, too many snapshots per snapshottable root directory make deleting 
> the oldest snapshot more expensive. This Jira aims to impose these 
> configurable limits.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168670#comment-17168670
 ] 

Stephen O'Donnell commented on HDFS-15493:
--

{quote}
After carefully reviewing the code that updates the blocks map and name cache, I 
found that it is feasible to start these updates when loading the INodeSection 
begins, and to shut down the executors once the INodeDirectorySection has 
finished loading
{quote}

That is a good idea - I had not thought of doing this. Both the cache and the 
block map work with inodes, so it's strange that the existing code performed 
these steps in the Directory section. I will try to test performance on trunk 
today.

One suggestion / question - can you think of any reason someone would want to 
disable this new feature? It makes the code slightly more complex to make it 
optional, and I cannot really think of a reason why it would make sense to 
disable it (assuming it has no bugs). I would prefer to remove the configuration 
switch and just make it always on.

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168635#comment-17168635
 ] 

Chengwei Wang edited comment on HDFS-15493 at 7/31/20, 11:06 AM:
-

After carefully reviewing the code that updates the blocks map and name cache, I 
found that it is feasible to start these updates when loading the INodeSection 
begins, and to shut down the executors once the INodeDirectorySection has 
finished loading. This way, almost no time is spent waiting for the executors to 
terminate.

Submitted a patch [^HDFS-15493.004.patch] based on this approach. It uses two 
single-thread executors and updates without a lock, roughly as sketched below.
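
For readers not looking at the patch itself, the general shape of the approach is 
roughly the following sketch. It is illustrative only: the class, interface, and 
method names are placeholders rather than the actual FSImageFormatPBINode code, 
and the real patch may batch work instead of submitting one task per inode.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical hook for the two structures being updated; placeholder names.
interface InodeSink {
  void addToNameCache(Object inodeFile);
  void addToBlockMap(Object inodeFile);
}

class ParallelCacheAndBlockMapUpdater {
  // One single-thread executor per target structure, so each structure is only
  // ever touched by one thread and no extra lock is needed.
  private final ExecutorService nameCacheExecutor = Executors.newSingleThreadExecutor();
  private final ExecutorService blockMapExecutor = Executors.newSingleThreadExecutor();
  private final InodeSink sink;

  ParallelCacheAndBlockMapUpdater(InodeSink sink) {
    this.sink = sink;
  }

  /** Called for each inode file while the INodeSection is being loaded. */
  void onInodeLoaded(Object inodeFile) {
    nameCacheExecutor.submit(() -> sink.addToNameCache(inodeFile));
    blockMapExecutor.submit(() -> sink.addToBlockMap(inodeFile));
  }

  /** Called once the INodeDirectorySection has finished loading. */
  void finish() throws InterruptedException {
    nameCacheExecutor.shutdown();
    blockMapExecutor.shutdown();
    // By this point both queues are usually drained, so the wait is near zero.
    nameCacheExecutor.awaitTermination(10, TimeUnit.MINUTES);
    blockMapExecutor.awaitTermination(10, TimeUnit.MINUTES);
  }
}
{code}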

Tested this patch twice.
{code:java}
Test1.
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:27:51 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
367 seconds.
Test2.
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:48:04 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
363 seconds.{code}
It gives about a 20% speedup in my tests and reduces the time cost from 460s+ to 
360s+.

I think this patch may be the best choice. [~sodonnell], can you help me test it 
on trunk?

 


was (Author: smarthan):
After carefully reviewing the code that updates the blocks map and name cache, I 
found that it is feasible to start these updates when loading the INodeSection 
begins, and to shut down the executors once the INodeDirectorySection has 
finished loading. This way, almost no time is spent waiting for the executors to 
terminate.

Submitted a patch [^HDFS-15493.004.patch] based on this approach. It uses two 
single-thread executors and updates without a lock.

Tested this patch twice.

 
{code:java}
Test1.
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:27:51 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
367 seconds.
Test2.
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:48:04 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
363 seconds.{code}
It gives about a 20% speedup in my tests and reduces the time cost from 460s+ to 
360s+.

I think this patch may be the best choice. [~sodonnell], can you help me test it 
on trunk?

 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168635#comment-17168635
 ] 

Chengwei Wang commented on HDFS-15493:
--

After carefully reviewing the code that updates the blocks map and name cache, I 
found that it is feasible to start these updates when loading the INodeSection 
begins, and to shut down the executors once the INodeDirectorySection has 
finished loading. This way, almost no time is spent waiting for the executors to 
terminate.

Submitted a patch [^HDFS-15493.004.patch] based on this approach. It uses two 
single-thread executors and updates without a lock.

Tested this patch twice.

 
{code:java}
Test1.
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:27:50 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:27:51 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
367 seconds.
Test2.
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 18:48:03 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 1
20/07/31 18:48:04 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
363 seconds.{code}
It gives about a 20% speedup in my tests and reduces the time cost from 460s+ to 
360s+.

I think this patch may be the best choice. [~sodonnell], can you help me test it 
on trunk?

 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HDFS-15493:
-
Attachment: HDFS-15493.004.patch

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168560#comment-17168560
 ] 

Chengwei Wang edited comment on HDFS-15493 at 7/31/20, 9:21 AM:


Submitted v003 patch [^HDFS-15493.003.patch].

Based on two single-thread executors, with the update lock removed.

Tested this patch twice:
{code:java}
Test1.
20/07/31 16:12:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:12:36 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 18615
20/07/31 16:12:37 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
431 seconds.
Test2.
20/07/31 16:39:20 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:39:27 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 7151
20/07/31 16:39:28 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
425 seconds.
{code}
 

 


was (Author: smarthan):
Submitted v003 patch [^HDFS-15493.003.patch].

Based on two single-thread executors, with the update lock removed.

Tested this patch twice:

 
{code:java}
Test1.
20/07/31 16:12:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:12:36 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 18615
20/07/31 16:12:37 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
431 seconds.
Test2.
20/07/31 16:39:20 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:39:27 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 7151
20/07/31 16:39:28 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
425 seconds.
{code}
 

 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168560#comment-17168560
 ] 

Chengwei Wang commented on HDFS-15493:
--

Submitted v003 patch [^HDFS-15493.003.patch].

Based on two single-thread executors, with the update lock removed.

Tested this patch twice:

 
{code:java}
Test1.
20/07/31 16:12:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:12:36 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 18615
20/07/31 16:12:37 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
431 seconds.
Test2.
20/07/31 16:39:20 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:39:27 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, total waiting duration: 7151
20/07/31 16:39:28 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
425 seconds.
{code}
 

 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HDFS-15493:
-
Attachment: HDFS-15493.003.patch

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15393) Review of PendingReconstructionBlocks

2020-07-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168523#comment-17168523
 ] 

Hadoop QA commented on HDFS-15393:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 23 new + 126 unchanged - 3 fixed = 149 total (was 129) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
6s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 

[jira] [Comment Edited] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168347#comment-17168347
 ] 

Chengwei Wang edited comment on HDFS-15493 at 7/31/20, 8:08 AM:


{quote}Therefore setting it to 500 or 1000ms and logging a message each time 
around the loop should not give any time penalty, but should give us some 
information about what is happening.
{quote}
Yes, you are exactly right! A longer waiting time and the logging would be 
useful; I will add them.
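
As an illustration only (not the actual patch code; the class, executor, and 
message names here are placeholders), the kind of shutdown loop being discussed 
looks roughly like this:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class ExecutorShutdownLogger {
  /** Shut down an executor and log how long we wait for its queue to drain. */
  static void shutdownAndLogWait(ExecutorService executor, String name)
      throws InterruptedException {
    long start = System.currentTimeMillis();
    executor.shutdown();
    // Poll with a longer timeout (e.g. 1000 ms) and log each pass, so slow
    // terminations become visible in the log without adding overhead when the
    // queue is already empty.
    while (!executor.awaitTermination(1000, TimeUnit.MILLISECONDS)) {
      System.out.println("Still waiting for " + name + " executor, waited "
          + (System.currentTimeMillis() - start) + " ms so far");
    }
    System.out.println("Completed " + name + " updates, total waiting duration: "
        + (System.currentTimeMillis() - start) + " ms");
  }
}
{code}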
{quote}How long does the shutdown take with the single 4 thread executor?
{quote}
I just assumed the waiting time was the time from `Completed loading all 
INodeDirectory sub-sections` to the end of fsimage loading.
{code:java}
20/07/31 10:25:59 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 10:26:22 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 431 
seconds.
{code}
{quote}Are you testing this on the trunk code + this patch, or a different 
version plus this patch?
{quote}
I tested this patch on our dev branch, which is based on CDH 5.10.0 with many 
patches; the version is roughly 2.6.0~2.8.0.
{quote}Could you try testing 2 executors with 2 threads each?
{quote}
I tested this after testing the two single-thread executors; the time cost was 
between 420s and 430s.

I will submit 3 new patches:
 # one executor with 4 threads, with waiting-time logging
 # two single-thread executors, with waiting-time logging and without a lock
 # two fixed 2-thread executors, with a lock and waiting-time logging

Let's test which one performs best.


was (Author: smarthan):
{quote}Therefore setting it to 500 or 1000ms and logging a message each time 
around the loop should not give any time penalty, but should give us some 
information about what is happening.
{quote}
Yes, you are exactly right! A longer waiting time and the logging would be 
useful; I will add them.
{quote}How long does the shutdown take with the single 4 thread executor?
{quote}
I just assumed the waiting time was the time from `Completed loading all 
INodeDirectory sub-sections` to the end of fsimage loading.
{code:java}
20/07/31 10:25:59 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 10:26:22 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 431 
seconds.
{code}
{quote}Are you testing this on the trunk code + this patch, or a different 
version plus this patch?
{quote}
I tested this patch on our dev branch, which is based on CDH 5.10.0 with many 
patches; the version is roughly 2.6.0~2.8.0.
{quote}Could you try testing 2 executors with 2 threads each?
{quote}
I tested this after testing the two single-thread executors; the time cost was 
between 420s and 430s.

 

I will submit 3 new patches:
 # one executor with 4 threads, with waiting-time logging
 # two single-thread executors, with waiting-time logging and without a lock
 # two fixed 2-thread executors, with a lock and waiting-time logging

Let's test which one performs best.

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168506#comment-17168506
 ] 

Chengwei Wang edited comment on HDFS-15493 at 7/31/20, 8:08 AM:


Submitted v002 patch [^HDFS-15493.002.patch].

Based on one executor with 4 threads; added a unit test, refactored the executor 
shutdown code, and added waiting-time logging.

I tested this patch twice:
{code:java}
Test 1.
20/07/31 14:21:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 14:21:22 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting timeduration(ms): 5161
20/07/31 14:21:23 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
409 seconds.

Test 2.
20/07/31 16:00:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:00:16 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting time  duration(ms): 12105
20/07/31 16:00:17 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
424 seconds.
{code}
 


was (Author: smarthan):
Submitted v002 patch [^HDFS-15493.002.patch].

Based on one executor with 4 threads; added a unit test, refactored the executor 
shutdown code, and added waiting-time logging.

I tested this patch twice:

 
{code:java}
Test 1.
20/07/31 14:21:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 14:21:22 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting timeduration(ms): 5161
20/07/31 14:21:23 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
409 seconds.

Test 2.
20/07/31 16:00:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:00:16 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting time  duration(ms): 12105
20/07/31 16:00:17 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
424 seconds.
{code}
 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14950) missing libhdfspp libs in dist-package

2020-07-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168507#comment-17168507
 ] 

Hudson commented on HDFS-14950:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18483 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18483/])
HDFS-14950. fix missing libhdfspp lib in dist-package (#1947) (github: rev 
e756fe3590906bfd8ffe4ab5cc8b9b24a9b2b4b2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt


> missing libhdfspp libs in dist-package
> --
>
> Key: HDFS-14950
> URL: https://issues.apache.org/jira/browse/HDFS-14950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, libhdfs++
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: fix_libhdfspp_lib.patch
>
>
> A Hadoop build like "mvn package -Pnative" copies the HDFS native libs to 
> target/lib/native. Currently it only copies the C client libraries 
> (libhdfs.\{a,so}); the C++ based HDFS client libraries (libhdfspp.\{a,so}) are 
> missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17168506#comment-17168506
 ] 

Chengwei Wang commented on HDFS-15493:
--

Submitted v002 patch [^HDFS-15493.002.patch].

Based on one executor with 4 threads; added a unit test, refactored the executor 
shutdown code, and added waiting-time logging.

I tested this patch twice:

 
{code:java}
Test 1.
20/07/31 14:21:17 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 14:21:22 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting timeduration(ms): 5161
20/07/31 14:21:23 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
409 seconds.

Test 2.
20/07/31 16:00:03 INFO namenode.FSImageFormatPBINode: Completed loading all 
INodeDirectory sub-sections
20/07/31 16:00:16 INFO namenode.FSImageFormatPBINode: Completed update 
blocks map and name cache, waiting time  duration(ms): 12105
20/07/31 16:00:17 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 
424 seconds.
{code}
 

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13564) PreAllocator for DfsClientShm

2020-07-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13564:
-
Fix Version/s: (was: 3.0.2)

> PreAllocator for DfsClientShm
> -
>
> Key: HDFS-13564
> URL: https://issues.apache.org/jira/browse/HDFS-13564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>
> When we ran a stress test against Short-Circuit Local Reads, we found a 
> bottleneck: allocating a new DfsClientShm blocks a lot of slot allocations on 
> it.
> Currently, there are 128 slots per shm, which means that at most 128 reads can 
> be blocked by one shm allocation. Especially under stress, the domain socket 
> communication to the datanode gets slow, and the datanode can also pause for 
> GC, so allocating one shm can take several hundred ms and, in turn, delay the 
> reads. This is bad for latency-sensitive services like HBase.
> I'm working on the prototype and will upload the code and test results later.
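
Since the prototype is not attached yet, the sketch below only illustrates the 
general pre-allocation pattern hinted at above. All names are hypothetical 
placeholders, not the DfsClientShm or ShortCircuitCache API, and the real design 
may differ.
{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Placeholder for a shared-memory segment; not the real DfsClientShm. */
class Shm { }

/**
 * Generic pre-allocator: a background thread keeps a small pool of
 * ready-to-use segments, so readers rarely wait for the slow allocation path
 * (domain-socket round trip, datanode GC pause, etc.).
 */
class ShmPreAllocator implements Runnable {
  private final BlockingQueue<Shm> pool;

  ShmPreAllocator(int poolSize) {
    this.pool = new ArrayBlockingQueue<>(poolSize);
  }

  /** Stand-in for the expensive allocation that normally blocks readers. */
  private Shm allocateNew() {
    return new Shm();
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        pool.put(allocateNew()); // blocks once the pool is already full
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  /** Readers take a pre-allocated segment, falling back to direct allocation. */
  Shm acquire() {
    Shm shm = pool.poll();
    return shm != null ? shm : allocateNew();
  }
}
{code}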



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13564) PreAllocator for DfsClientShm

2020-07-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13564:
-
Target Version/s:   (was: 3.0.2)

> PreAllocator for DfsClientShm
> -
>
> Key: HDFS-13564
> URL: https://issues.apache.org/jira/browse/HDFS-13564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>
> When we ran a stress test against Short-Circuit Local Reads, we found a 
> bottleneck: allocating a new DfsClientShm blocks a lot of slot allocations on 
> it.
> Currently, there are 128 slots per shm, which means that at most 128 reads can 
> be blocked by one shm allocation. Especially under stress, the domain socket 
> communication to the datanode gets slow, and the datanode can also pause for 
> GC, so allocating one shm can take several hundred ms and, in turn, delay the 
> reads. This is bad for latency-sensitive services like HBase.
> I'm working on the prototype and will upload the code and test results later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11133) Ozone: Add allocateContainer RPC

2020-07-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11133:
-
Affects Version/s: (was: oz)

> Ozone: Add allocateContainer RPC
> 
>
> Key: HDFS-11133
> URL: https://issues.apache.org/jira/browse/HDFS-11133
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-11133-HDFS-7240.001.patch
>
>
> Add allocateContainer RPC in SCM.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15330) Document the ViewFSOverloadScheme details in ViewFS guide

2020-07-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15330:
-
Fix Version/s: (was: 3.2.2.)
   3.2.2

> Document the ViewFSOverloadScheme details in ViewFS guide
> -
>
> Key: HDFS-15330
> URL: https://issues.apache.org/jira/browse/HDFS-15330
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
>
> This Jira tracks the documentation of the ViewFSOverloadScheme usage guide.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14950) missing libhdfspp libs in dist-package

2020-07-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14950:
-
Fix Version/s: 3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk, branch-3.3, and branch-3.2. Thank you [~yuanzhou] for 
your contribution!

> missing libhdfspp libs in dist-package
> --
>
> Key: HDFS-14950
> URL: https://issues.apache.org/jira/browse/HDFS-14950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, libhdfs++
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: fix_libhdfspp_lib.patch
>
>
> A Hadoop build like "mvn package -Pnative" copies the HDFS native libs to 
> target/lib/native. Currently it only copies the C client libraries 
> (libhdfs.\{a,so}); the C++ based HDFS client libraries (libhdfspp.\{a,so}) are 
> missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-07-31 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HDFS-15493:
-
Attachment: HDFS-15493.002.patch

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the name cache and 
> block map are updated after each inode file is added to its inode directory. 
> Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading an 
> fsimage with 220M files & 240M blocks takes 470s; with this patch, the time 
> cost is reduced to 410s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org