[
https://issues.apache.org/jira/browse/YARN-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16966409#comment-16966409
]
Hadoop QA commented on YARN-9947:
---------------------------------
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m
0s{color} | {color:green} The patch appears to include 2 new or modified test
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
13m 41s{color} | {color:green} branch has no errors when building and testing
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
0m 27s{color} | {color:orange}
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
The patch generated 10 new + 269 unchanged - 8 fixed = 279 total (was 277)
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
12m 47s{color} | {color:green} patch has no errors when building and testing
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m
29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed.
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
30s{color} | {color:green} The patch does not generate ASF License warnings.
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 4s{color} |
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9947 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/12984754/YARN-9947.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall
mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7e5d28bf1865 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d462308 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle |
https://builds.apache.org/job/PreCommit-YARN-Build/25085/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
|
| Test Results |
https://builds.apache.org/job/PreCommit-YARN-Build/25085/testReport/ |
| Max. process+thread count | 464 (vs. ulimit of 5500) |
| modules | C:
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
U:
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
|
| Console output |
https://builds.apache.org/job/PreCommit-YARN-Build/25085/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> lazy init appLogAggregatorImpl when log aggregation
> ---------------------------------------------------
>
> Key: YARN-9947
> URL: https://issues.apache.org/jira/browse/YARN-9947
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: nodemanager
> Affects Versions: 3.1.3
> Reporter: Hu Ziqian
> Assignee: Hu Ziqian
> Priority: Major
> Attachments: YARN-9947.001.patch
>
>
> This issue introduces a way to lazily initialize AppLogAggregatorImpl so that it
> accesses HDFS as late as possible (usually when the app finishes). This avoids all
> NMs in a cluster hitting HDFS at the same time when they are restarted and reduces
> the pressure on HDFS. The details follow below.
> In the current version, the app log aggregator checks HDFS and tries to create the
> app's remote log directory when an app is initialized. This causes a problem when
> NMs are restarted in a large cluster with a heavily loaded HDFS. Restarting an NM
> re-initializes all apps on that NM, so the NM tries to connect to HDFS. If HDFS is
> heavily loaded, many NMs restarting at the same time can make it unresponsive; the
> NMs then block waiting for HDFS, the RM stops receiving their heartbeats, and it
> treats all of their containers as timed out.
> In our production environment with 3500+ NMs, we found that restarting the NMs puts
> heavy pressure on HDFS and that the init-app operation blocks on HDFS access (stack
> traces attached below), causing all containers to fail (the container count on a
> single NM drops to zero).
> !https://teambition-file.alibaba-inc.com/storage/011mcaf1aebf84f02a5d2c2c5fa85af80f5b?download=upload_tfs_by_description.png&Signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBcHBJRCI6IjVjZDkwOTdmYjNhNDMyMjk3OTBhN2EyZiIsIl9hcHBJZCI6IjVjZDkwOTdmYjNhNDMyMjk3OTBhN2EyZiIsIl9vcmdhbml6YXRpb25JZCI6IjVjNDA1N2YwYmU4MjViMzkwNjY3YWJlZSIsImV4cCI6MTU3MjgzNzQxMywiaWF0IjoxNTcyODM3MTEzLCJyZXNvdXJjZSI6Ii9zdG9yYWdlLzAxMW1jYWYxYWViZjg0ZjAyYTVkMmMyYzVmYTg1YWY4MGY1YiJ9.JJQoQvjWdAQItQkjtdxa1SnkqJWuij_w2xq2Unoaktg!
> !https://teambition-file.alibaba-inc.com/storage/011m873079212ee7fe507ddbe163a0c07fb1?download=upload_tfs_by_description.png&Signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBcHBJRCI6IjVjZDkwOTdmYjNhNDMyMjk3OTBhN2EyZiIsIl9hcHBJZCI6IjVjZDkwOTdmYjNhNDMyMjk3OTBhN2EyZiIsIl9vcmdhbml6YXRpb25JZCI6IjVjNDA1N2YwYmU4MjViMzkwNjY3YWJlZSIsImV4cCI6MTU3MjgzNzQxMywiaWF0IjoxNTcyODM3MTEzLCJyZXNvdXJjZSI6Ii9zdG9yYWdlLzAxMW04NzMwNzkyMTJlZTdmZTUwN2RkYmUxNjNhMGMwN2ZiMSJ9.kH73n6bdx8ETXsrWcBGgXGay2WP3z9nzuDlE8-RvQzs!
> We solve this problem by introducing lazy initialization in
> AppLogAggregatorImpl. When an app is initialized, we only create the
> AppLogAggregatorImpl object, without calling verifyAndCreateRemoteLogDir().
> We call verifyAndCreateRemoteLogDir() when the app actually starts aggregating
> logs. Because apps rarely finish, and therefore rarely aggregate logs, at the
> same time, the verifyAndCreateRemoteLogDir() calls are spread out, which means
> the NMs do not all hit HDFS at once even when they are restarted together.
>
> YARN-8418 solved the leaked container-log directory problem by adding a way to
> update the NM's credentials. If we lazily initialize AppLogAggregatorImpl, we no
> longer need YARN-8418's logic, because the lazy initialization happens after the
> addCredentials logic, which means the credentials have always been refreshed
> before we use them.
>
> In summary, this issue does two things:
> # Introduce lazy initialization in AppLogAggregatorImpl to avoid concentrated
> HDFS access when all NMs in a cluster are restarted (see the sketch below).
> # Revert YARN-8418, because the lazy initialization guarantees the credentials
> are refreshed.
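>
> A minimal, self-contained sketch of the lazy-init idea (not the actual patch): the
> HDFS-touching verifyAndCreateRemoteLogDir() call moves from app init to the moment
> log aggregation starts. Only AppLogAggregatorImpl and verifyAndCreateRemoteLogDir()
> are names from this issue; everything else (LazyAppLogAggregatorSketch,
> AppLogAggregator, ensureRemoteLogDir, initApp, startLogAggregation) is an
> illustrative placeholder.
> {code:java}
> // Simplified stand-in for the idea in this issue; not the real Hadoop classes.
> public class LazyAppLogAggregatorSketch {
>
>   /** Stand-in for the HDFS call that is expensive when many NMs restart at once. */
>   static void verifyAndCreateRemoteLogDir() {
>     System.out.println("touching HDFS: verify/create the remote log dir");
>   }
>
>   static class AppLogAggregator {
>     private boolean remoteDirInitialized = false;
>
>     /** Before the change this ran during app init; now it is only called lazily. */
>     private synchronized void ensureRemoteLogDir() {
>       if (!remoteDirInitialized) {
>         verifyAndCreateRemoteLogDir();   // first (and only) HDFS access
>         remoteDirInitialized = true;
>       }
>     }
>
>     /** App init no longer contacts HDFS, so an NM restart stays cheap. */
>     void initApp() {
>       System.out.println("app initialized without contacting HDFS");
>     }
>
>     /** Called when the app actually starts aggregating logs (usually at app finish). */
>     void startLogAggregation() {
>       ensureRemoteLogDir();              // HDFS is contacted only here
>       System.out.println("uploading aggregated logs");
>     }
>   }
>
>   public static void main(String[] args) {
>     AppLogAggregator aggregator = new AppLogAggregator();
>     aggregator.initApp();             // cheap, even if thousands of NMs restart together
>     aggregator.startLogAggregation(); // HDFS access happens only at aggregation time
>   }
> }
> {code}
> Because the HDFS access now happens when each app finishes rather than when every
> app is re-initialized, the calls naturally spread out over time across the cluster.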
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]