[
https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365450#comment-17365450
]
Hadoop QA commented on HDFS-16044:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m
22s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} No case conflicting files
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} The patch does not contain any
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to
include any new or modified tests. Please justify why no new tests are needed
for this patch. Also please list what manual steps were performed to verify
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m
27s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m
53s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m
30s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m
58s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
14m 28s{color} | {color:green}{color} | {color:green} branch has no errors when
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
39s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 18m
15s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m
36s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m
41s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 41s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m
35s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 35s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
0m 19s{color} |
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
| {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The patch generated
14 new + 18 unchanged - 0 fixed = 32 total (was 18) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m
36s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} The patch has no whitespace
issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m
4s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-shadedclient.txt{color}
| {color:red} patch has errors when building and testing our client artifacts.
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m
26s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m
31s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
| {color:red}
hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 7 new
+ 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:red}-1{color} | {color:red} spotbugs {color} | {color:red} 0m
36s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 36s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
28s{color} | {color:green}{color} | {color:green} The patch does not generate
ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 13s{color} |
{color:black}{color} | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/artifact/out/Dockerfile
|
| JIRA Issue | HDFS-16044 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/13027022/HDFS-16044.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite
unit shadedclient findbugs checkstyle spotbugs |
| uname | Linux a4ee463e8161 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 51991c49072 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions |
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
/usr/lib/jvm/java-8-openjdk-amd64:Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results |
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/testReport/ |
| Max. process+thread count | 566 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U:
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output |
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/632/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
> getListing call getLocatedBlocks even source is a directory
> -----------------------------------------------------------
>
> Key: HDFS-16044
> URL: https://issues.apache.org/jira/browse/HDFS-16044
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.1.1
> Reporter: ludun
> Assignee: ludun
> Priority: Major
> Attachments: HDFS-16044.00.patch, HDFS-16044.01.patch
>
>
> In our production cluster, getListing is called very frequently and the RPC
> processing time is high, so we tried to optimize the performance of the
> getListing request.
> After some investigation, we found that even when the source and its
> children are directories, the getListing request still calls getLocatedBlocks.
> The request is shown below; needLocation is false.
> {code:java}
> 2021-05-27 15:16:07,093 TRACE ipc.ProtobufRpcEngine: 1: Call ->
> 8-5-231-4/8.5.231.4:25000: getListing {src:
> "/data/connector/test/topics/102test" startAfter: "" needLocation: false}
> {code}
> However, getListing still calls getLocatedBlocks 1000 times, which is not
> needed:
> {code:java}
> `---ts=2021-05-27 14:19:15;thread_name=IPC Server handler 86 on
> 25000;id=e6;is_daemon=true;priority=5;TCCL=sun.misc.Launcher$AppClassLoader@5fcfe4b2
> `---[35.068532ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getListing()
> +---[0.003542ms]
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathComponents() #214
> +---[0.003053ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:isExactReservedName() #95
> +---[0.002938ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readLock() #218
> +---[0.00252ms]
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:isDotSnapshotDir() #220
> +---[0.002788ms]
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathSnapshotId() #223
> +---[0.002905ms]
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getLastINode() #224
> +---[0.002785ms]
> org.apache.hadoop.hdfs.server.namenode.INode:getStoragePolicyID() #230
> +---[0.002236ms]
> org.apache.hadoop.hdfs.server.namenode.INode:isDirectory() #233
> +---[0.002919ms]
> org.apache.hadoop.hdfs.server.namenode.INode:asDirectory() #242
> +---[0.003408ms]
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:getChildrenList() #243
> +---[0.005942ms]
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:nextChild() #244
> +---[0.002467ms] org.apache.hadoop.hdfs.util.ReadOnlyList:size() #245
> +---[0.005481ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #247
> +---[0.002176ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #248
> +---[min=0.00211ms,max=0.005157ms,total=2.247572ms,count=1000]
> org.apache.hadoop.hdfs.util.ReadOnlyList:get() #252
> +---[min=0.001946ms,max=0.005411ms,total=2.041715ms,count=1000]
> org.apache.hadoop.hdfs.server.namenode.INode:isSymlink() #253
> +---[min=0.002176ms,max=0.005426ms,total=2.264472ms,count=1000]
> org.apache.hadoop.hdfs.server.namenode.INode:getLocalStoragePolicyID() #254
> +---[min=0.002251ms,max=0.006849ms,total=2.351935ms,count=1000]
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getStoragePolicyID()
> #95
> +---[min=0.006091ms,max=0.012333ms,total=6.439434ms,count=1000]
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:createFileStatus()
> #257
> +---[min=0.00269ms,max=0.004995ms,total=2.788194ms,count=1000]
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus:getLocatedBlocks() #265
> +---[0.003234ms]
> org.apache.hadoop.hdfs.protocol.DirectoryListing:<init>() #274
> `---[0.002457ms]
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readUnlock() #277
> {code}
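The actual change is in the attached HDFS-16044.01.patch, which is not reproduced here. As a rough, standalone illustration only, the Java sketch below uses made-up placeholder types (Entry, Status and this getListing are hypothetical, not the real HDFS classes) to show the kind of guard the trace points to: block locations are resolved only when the caller set needLocation and the entry is a regular file, so a directory listing with needLocation=false never touches getLocatedBlocks.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ListingSketch {

  // Simplified stand-in for an INode: a name, a directory flag, a block count.
  record Entry(String name, boolean isDirectory, int blockCount) {}

  // Simplified stand-in for HdfsFileStatus / HdfsLocatedFileStatus.
  record Status(String name, int locatedBlockCount) {}

  static List<Status> getListing(List<Entry> children, boolean needLocation,
                                 int locationBudget) {
    List<Status> listing = new ArrayList<>();
    for (Entry child : children) {
      int locatedBlocks = 0;
      // Guard: only pay the cost of resolving block locations when the caller
      // asked for them and the entry is a regular file, not a directory.
      if (needLocation && !child.isDirectory()) {
        locatedBlocks = child.blockCount();
        locationBudget -= locatedBlocks;
      }
      listing.add(new Status(child.name(), locatedBlocks));
      // Stop once the per-listing location budget is exhausted.
      if (needLocation && locationBudget <= 0) {
        break;
      }
    }
    return listing;
  }

  public static void main(String[] args) {
    List<Entry> children = List.of(
        new Entry("dir-a", true, 0),
        new Entry("file-b", false, 3));
    // needLocation=false mirrors the traced request: no location work is done.
    System.out.println(getListing(children, false, 1000));
  }
}
{code}
With needLocation=false, as in the traced request, the loop never enters the location branch, so the per-entry getLocatedBlocks work observed 1000 times in the trace would be skipped entirely.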