[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15613506#comment-15613506
 ] 

Hudson commented on YARN-4710:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10711 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10711/])
YARN-4710. Reduce logging application reserved debug info in (templedf: rev 
b98fc8249f0576e7b4e230ffc3cec5a20eefc543)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java


> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: oct16-easy
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4710.001.patch, yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I 
> was preparing to debug a container-assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it's difficult for me to 
> find the important info directly.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason there are so many records is that this info is always printed 
> first during container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached yarn-debug.log, and see 
> how many records there are.
> In addition, logging this info so frequently will slow down container 
> assignment. Maybe we should change this log level to another level, like 
> {{trace}}.
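
A minimal sketch of the kind of change being discussed (guarding the per-offer message and demoting it from debug to trace), assuming a commons-logging {{LOG}} field; the class and method names below are illustrative only, not the actual YARN-4710 patch:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical sketch only -- not the committed FSAppAttempt change.
public class NodeOfferLoggingSketch {
  private static final Log LOG = LogFactory.getLog(NodeOfferLoggingSketch.class);

  void logNodeOffer(String applicationId, boolean reserved) {
    // Before: emitted at DEBUG on every node offer, whether or not the
    // assignment succeeds, which floods yarn-resourcemanager.log:
    //   LOG.debug("Node offered to app: " + applicationId + " reserved: " + reserved);

    // After: guard the statement and demote it to TRACE so routine offers
    // are only recorded when trace logging is explicitly enabled.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Node offered to app: " + applicationId + " reserved: " + reserved);
    }
  }
}
{code}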



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15613432#comment-15613432
 ] 

Hadoop QA commented on YARN-4710:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-4710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788896/YARN-4710.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cb75152b7dcd 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd4ed6a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13584/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13584/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13584/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> 

[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-10-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15613230#comment-15613230
 ] 

Daniel Templeton commented on YARN-4710:


Change makes sense to me.  +1

> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-4710.001.patch, yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I 
> was preparing to debug a container-assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it's difficult for me to 
> find the important info directly.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason there are so many records is that this info is always printed 
> first during container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached yarn-debug.log, and see 
> how many records there are.
> In addition, logging this info so frequently will slow down container 
> assignment. Maybe we should change this log level to another level, like 
> {{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15156200#comment-15156200
 ] 

Hadoop QA commented on YARN-4710:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 47s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 153m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155970#comment-15155970
 ] 

Lin Yiqun commented on YARN-4710:
-

In my opinion, this reserved record could be printed only when a container is 
successfully assigned, together with the other concrete assignment info.
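
The attached patch is not reproduced in this thread; a rough illustration of that idea (logging the reserved flag only when an assignment actually happens, rather than on every offer), with a hypothetical result type and helper standing in for the real scheduler code, might look like:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical illustration only -- not the actual FSAppAttempt code.
public class ReservedInfoLoggingSketch {
  private static final Log LOG = LogFactory.getLog(ReservedInfoLoggingSketch.class);

  /** Stand-in for the real assignment result used by the fair scheduler. */
  static final class Assignment {
    final boolean assigned;
    Assignment(boolean assigned) { this.assigned = assigned; }
  }

  Assignment assignContainer(String applicationId, boolean reserved) {
    Assignment result = tryAssign();
    // Log the reserved flag only together with a successful assignment,
    // alongside the other concrete assignment details, rather than
    // unconditionally for every node offered to the app.
    if (result.assigned && LOG.isDebugEnabled()) {
      LOG.debug("Assigned container for app: " + applicationId
          + " reserved: " + reserved);
    }
    return result;
  }

  private Assignment tryAssign() {
    // Placeholder for the real container-assignment logic.
    return new Assignment(false);
  }
}
{code}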

> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>Priority: Minor
> Attachments: YARN-4710.001.patch, yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I 
> was preparing to debug a container-assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it's difficult for me to 
> find the important info directly.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason there are so many records is that this info is always printed 
> first during container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached yarn-debug.log, and see 
> how many records there are.
> In addition, logging this info so frequently will slow down container 
> assignment. Maybe we should change this log level to another level, like 
> {{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)